venue: stringclasses (2 values)
paper_content: stringlengths (7.54k – 83.7k)
prompt: stringlengths (161 – 2.5k)
format: stringclasses (5 values)
review: stringlengths (293 – 9.84k)
ICLR
Title
Compressed Predictive Information Coding

Abstract
Unsupervised learning plays an important role in many fields, such as machine learning, data compression, and neuroscience. Compared to static data, methods for extracting low-dimensional structure for dynamic data are lagging. We developed a novel information-theoretic framework, Compressed Predictive Information Coding (CPIC), to extract predictive latent representations from dynamic data. Predictive information quantifies the ability to predict the future of a time series from its past. CPIC selectively projects the past (input) into a low-dimensional space that is predictive about the compressed data projected from the future (output). The key insight of our framework is to learn representations by balancing the minimization of compression complexity with maximization of the predictive information in the latent space. We derive tractable variational bounds of the CPIC loss by leveraging bounds on mutual information. The CPIC loss induces the latent space to capture information that is maximally predictive of the future of the data from the past. We demonstrate that introducing stochasticity in the encoder and maximizing the predictive information in latent space contributes to learning more robust latent representations. Furthermore, our variational approaches perform better in mutual information estimation compared with estimates under the commonly used Gaussian assumption. We show numerically on synthetic data that CPIC can recover dynamical systems embedded in noisy observation data with low signal-to-noise ratios. Finally, we demonstrate that CPIC extracts features that are more predictive for forecasting exogenous variables, as well as for auto-forecasting, in various real datasets compared with other state-of-the-art representation learning models. Together, these results indicate that CPIC will be broadly useful for extracting low-dimensional dynamic structure from high-dimensional, noisy time-series data.

1 INTRODUCTION
Unsupervised methods play an important role in learning representations that provide insight into data and exploit unlabeled data to improve performance in downstream tasks in diverse application areas (Bengio et al., 2013; Chen et al., 2020; Grill et al., 2020; Devlin et al., 2018; Brown et al., 2020; Baevski et al., 2020; Wang et al., 2020). Prior work on unsupervised representation learning can be broadly categorized into generative models such as variational autoencoders (VAEs) (Kingma & Welling, 2013) and generative adversarial networks (GANs) (Goodfellow et al., 2014), and discriminative models such as dynamical components analysis (DCA) (Clark et al., 2019), contrastive predictive coding (CPC) (Oord et al., 2018), and deep autoencoding predictive components (DAPC) (Bai et al., 2020). Generative models focus on capturing the joint distribution between representations and inputs, but are usually computationally expensive. On the other hand, discriminative models emphasize capturing the dependence structure of the data in the low-dimensional latent space, and are therefore easier to scale to large datasets. In the case of time series, some representation learning models take advantage of an estimate of the mutual information between the encoded past (input) and the future (output) (Creutzig & Sprekeler, 2008; Creutzig et al., 2009; Oord et al., 2018). Although previous models utilizing mutual information extract low-dimensional representations, they tend to be sensitive to noise in the observation space.
DCA directly makes use of the mutual information between the past and the future (i.e., the predictive information (Bialek et al., 2001)) in a latent representational space that is a linear embedding of the observation data. However, DCA operates under Gaussian assumptions for mutual information estimation. We propose a novel representation learning framework which is not only robust to noise in the observation space but also alleviates the Gaussian assumption and is thus more flexible. We formalize our problem in terms of data generated from a stationary dynamical system and propose an information-theoretic objective function for Compressed Predictive Information Coding (CPIC). Instead of leveraging the information bottleneck (IB) objective directly as in Creutzig & Sprekeler (2008) and Creutzig et al. (2009), where the past latent representation is directly used to predict future observations, we predict the compressed future observations filtered by the encoder. This is because, in the time series setting, future observations are noisy, and treating them as labels is not insightful. Specifically, our target is to extract latent representations that better predict the future underlying dynamics. Since the compressed future observations are assumed to retain only the underlying dynamics, better compression thus contributes to extracting better dynamical representations. In addition, inspired by Clark et al. (2019) and Bai et al. (2020), we extend the prediction from a single input to a window of inputs to handle higher-order predictive information. Moreover, instead of directly estimating the objective under a Gaussian assumption (Creutzig & Sprekeler, 2008; Creutzig et al., 2009; Clark et al., 2019; Bai et al., 2020), we developed variational bounds and a tractable end-to-end training framework based on the neural estimators of mutual information studied in Poole et al. (2019). Note that our inference is the first to leverage variational bound techniques for self-supervised learning on time series data. Since it alleviates the Gaussian assumption, it is applicable to a much larger class of dynamical systems. In CPIC, we also demonstrate that introducing stochasticity into either a linear or nonlinear encoder robustly contributes to numerically better representations in different tasks. In particular, we illustrate that CPIC can recover trajectories of a chaotic dynamical system embedded in high-dimensional noisy observations with low signal-to-noise ratios in synthetic data. Furthermore, we conduct numerical experiments on four real-world datasets with different goals. In two neuroscience datasets, monkey motor cortex (M1) and rat dorsal hippocampus (HC), compared with state-of-the-art methods, we show that the latent representations extracted by CPIC have better forecasting accuracy for the exogenous variables: the monkey's future hand position for M1, and the rat's future position for HC. In two other real datasets, historical hourly weather temperature data (TEMP) and motion sensor data (MS), we show that latent representations extracted by CPIC have better forecasting accuracy for the future of those time series than other methods.

In summary, the primary contributions of our paper are as follows:
• We developed a novel information-theoretic self-supervised learning framework, Compressed Predictive Information Coding (CPIC), which extracts low-dimensional latent representations from time series.
CPIC maximizes the predictive information in the latent space while minimizing the compression complexity.
• We introduced a stochastic encoder structure in which inputs are encoded into stochastic representations to handle uncertainty, which contributes to better representations.
• Building on prior work, we derived variational bounds of the CPIC objective function and a tractable, end-to-end training procedure. Since our inference alleviates the Gaussian assumption common to other methods, it is applicable to a much larger class of dynamical systems. Moreover, to the best of our knowledge, our inference is the first to leverage variational bound techniques for self-supervised learning on time series data.
• We demonstrated that, compared with other unsupervised methods, CPIC more robustly recovers latent dynamics in dynamical systems with low signal-to-noise ratios in synthetic experiments, and extracts more predictive features for downstream tasks in various real datasets.

2 RELATED WORK
Mutual information (MI) plays an important role in estimating the relationship between pairs of variables. It is a reparameterization-invariant measure of dependency:

$I(X, Y) = \mathbb{E}_{p(x,y)}\left[\log \frac{p(x|y)}{p(x)}\right]$   (1)

It is used in computational neuroscience (Dimitrov et al., 2011), visual representation learning (Chen et al., 2020), natural language processing (Oord et al., 2018) and bioinformatics (Lachmann et al., 2016). In representation learning, the mutual information between inputs and representations is used to quantify the quality of the representation and is also closely related to the reconstruction error in generative models (Kingma & Welling, 2013; Makhzani et al., 2015). Estimating mutual information is computationally and statistically challenging except in two cases: discrete data, as in Tishby et al. (2000), and Gaussian data, as in Chechik et al. (2005). However, these assumptions both severely constrain the class of learnable models (Alemi et al., 2016). Recent works leverage deep learning models to obtain both differentiable and scalable MI estimation (Belghazi et al., 2018; Nguyen et al., 2010; Oord et al., 2018; Alemi et al., 2016; Poole et al., 2019; Cheng et al., 2020). In terms of representation learning in time series, Wiskott & Sejnowski (2002) and Turner & Sahani (2007) targeted slowly varying features, and Creutzig & Sprekeler (2008) utilized the information bottleneck (IB) method (Tishby et al., 2000) to develop an information-theoretic objective function. Creutzig et al. (2009) proposed an alternative objective function based on a specific state-space model. Recently, Oord et al. (2018) proposed CPC to extract dynamic information based on an autoregressive model on representations and a contrastive loss on predictions. Clark et al. (2019) and Bai et al. (2020) proposed unsupervised learning approaches to extract low-dimensional representations with maximal predictive information (PI). All of the above unsupervised representation learning models, except for CPC, assume the data to be Gaussian, which may not be realistic, especially when applied to neuroscience datasets (O'Doherty et al., 2017; Glaser et al., 2020), given the non-Gaussianity of neuronal activity. Here, we leverage recently introduced neural estimators of mutual information to construct upper bounds of the CPIC objective and develop an end-to-end training procedure. CPIC enables generalization beyond the Gaussian case and autoregressive models.
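For reference, the Gaussian assumption used by DCA and related methods makes mutual information available in closed form from window covariances. Below is a minimal sketch of that Gaussian predictive-information estimator, which CPIC is designed to move beyond; the windowing scheme and variable names are our own illustration, not code from any of the cited works.

```python
import numpy as np

def gaussian_predictive_information(x, T):
    """Estimate I(X_past; X_future) under a joint Gaussian assumption.
    x: (num_samples, N) stationary time series; T: window length.
    Uses I = 0.5 * log( det(S_past) * det(S_future) / det(S_joint) )."""
    # Stack consecutive windows of length 2T as flat vectors (past followed by future).
    windows = np.stack([x[i:i + 2 * T].ravel() for i in range(len(x) - 2 * T + 1)])
    cov = np.cov(windows, rowvar=False)          # (2TN, 2TN) joint covariance
    d = T * x.shape[1]
    s_past, s_future = cov[:d, :d], cov[d:, d:]
    _, logdet_p = np.linalg.slogdet(s_past)
    _, logdet_f = np.linalg.slogdet(s_future)
    _, logdet_j = np.linalg.slogdet(cov)
    return 0.5 * (logdet_p + logdet_f - logdet_j)
```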
Recently, deep encoder networks have been leveraged to model nonlinear relations between latent representations and observed data in time series (Chen et al., 2020; Bai et al., 2020; He et al., 2020). However, the use of complicated nonlinear encoders hinders computational efficiency (Wang et al., 2019). CPIC proposes an efficient representation learning framework for time series that encodes data with maximal predictive information. We also note that several works model time series from a generative modeling perspective. Initially, Fabius & Van Amersfoort (2014) combined a recurrent neural network with a variational autoencoder to model time series data. Frigola et al. (2014) proposed a variational Gaussian-process state-space model. Meng et al. (2021) proposed a variational structured Gaussian-process regression network which can efficiently handle more complicated relationships in time series. Inference in most generative models depends on the length of the time series, while the inference of CPIC depends only on the window size T, which is more scalable for long time series.

3 COMPRESSED PREDICTIVE INFORMATION CODING
The main intuition behind Compressed Predictive Information Coding (CPIC) is to extract low-dimensional representations with minimal compression complexity and maximal dynamical structure. Specifically, CPIC first discards low-level information that is not relevant for dynamic prediction, as well as noise that is more local, by minimizing the compression complexity (i.e., the mutual information) between inputs and representations to improve model generalization. Second, CPIC maximizes the predictive information in the latent space of compressed representations. Compared with Clark et al. (2019) and Bai et al. (2020), CPIC utilizes a stochastic encoder to handle the uncertainty of representations, which contributes to more robust representations, and also relaxes the Gaussian assumption by constructing bounds on mutual information based on neural estimators. In more detail, instead of employing a deterministic linear mapping function as the encoder to compress data as in Clark et al. (2019), CPIC takes advantage of a stochastic linear or nonlinear mapping function. Given inputs, the stochastic representation follows a Gaussian distribution, with means and variances encoded by any neural network structure. A nonlinear CPIC utilizes a stochastic nonlinear encoder composed of a nonlinear mean encoder and a linear variance encoder, while a linear CPIC utilizes a stochastic linear encoder composed of a linear mean encoder and a linear variance encoder. Note that stochastic representations conditioned on inputs are parameterized as a conditional Gaussian distribution, but the marginal distribution of the representation is a mixture of Gaussians, which is widely recognized as a universal approximator of densities. On the other hand, to avoid the Gaussian assumption on mutual information (Creutzig & Sprekeler, 2008; Creutzig et al., 2009; Clark et al., 2019; Bai et al., 2020), CPIC leverages neural estimators of mutual information. Specifically, we propose differentiable and scalable bounds of the CPIC objective via variational inference, which enables end-to-end training. Formally, let $X = \{x_t\}$, $x_t \in \mathbb{R}^N$ be a stationary, discrete time series, and let $X_{\text{past}} = (x_{-T+1}, \ldots, x_0)$ and $X_{\text{future}} = (x_1, \ldots, x_T)$ denote consecutive past and future windows of length T.
Then both past and future data are compressed into past and future representations, denoted as $Y_{\text{past}} = (y_{-T+1}, \ldots, y_0)$ and $Y_{\text{future}} = (y_1, \ldots, y_T)$, with embedding dimension Q. Similar to the information bottleneck (IB) (Tishby et al., 2000), the CPIC objective contains a trade-off between two factors. The first seeks to minimize the compression complexity and the second to maximize the predictive information in the latent (representation) space. Note that when the encoder is deterministic the compression complexity term is dropped, and when the encoder is stochastic the complexity is measured by the mutual information between representations and inputs. In the CPIC objective, the trade-off weight β > 0 dictates the balance between the compression and predictive information terms:

$\min_\psi L, \quad \text{where } L \equiv \beta\left(I(X_{\text{past}}; Y_{\text{past}}) + I(X_{\text{future}}; Y_{\text{future}})\right) - I(Y_{\text{past}}; Y_{\text{future}})$   (2)

where ψ refers to the model parameters which encode inputs X into latent variables Y. A larger β promotes a more compact mapping and thus benefits model generalization, while a smaller β leads to more predictive information in the latent space on training data. This objective function is visualized in Figure 1, where inputs X are encoded into the latent space as Y via tractable encoders and the dynamics of Y are learned in a model-free manner. The encoder p(Y|X) could be implemented by fitting deep neural networks (Alemi et al., 2016) to encode the data X. Instead, CPIC takes an approach similar to VAEs (Kingma & Welling, 2013), in that it encodes data into stochastic representations. In particular, CPIC employs a stochastic encoder ($g_{\text{enc}}$ in Figure 1) to compress input $x_t$ into $y_t$ as

$y_t \mid x_t \sim \mathcal{N}\left(\mu_t, \operatorname{diag}(\sigma_t^2)\right)$,   (3)

for each time stamp t. The mean of $y_t$ is given by $\mu_t = g^{\text{Encoder}}_\mu(x_t)$, whereas the variance arises from $\sigma_t = g^{\text{Encoder}}_\sigma(x_t)$. The encoders $g^{\text{Encoder}}_\mu$ and $g^{\text{Encoder}}_\sigma$ can be any nonlinear mappings and are usually modeled with neural network architectures. We use a two-layer perceptron with ReLU activation functions (Agarap, 2018) for the nonlinear mapping. For the linear CPIC, we specify the mean of the representation as $\mu_t = u^T x_t$. In both the linear and nonlinear CPIC settings, if $\sigma_t = 0$, the stochastic encoder reduces to a deterministic encoder. We extend the single input to multiple inputs in the CPIC framework in terms of a specified window size T. The selection of the window size is discussed in Appendix A. Due to the stationarity assumption, the relations between past/future blocks of input data $X(-T), X(T) \in \mathbb{R}^{N \times T}$ and encoded data $Y(-T), Y(T) \in \mathbb{R}^{Q \times T}$ are equivalent, $p_{X(-T), Y(-T)} = p_{X(T), Y(T)}$, where $-T$ and $T$ index the past and future windows of T data points. Without loss of generality, the compression relation can be expressed as $Y(T) = g^{\text{Encoder}}_\mu(X(T)) + \xi(T)$, where $\xi(T) \sim \mathcal{N}\left(0, \operatorname{blockdiag}(\operatorname{diag}(\sigma_1^2), \ldots, \operatorname{diag}(\sigma_T^2))\right)$ and the noise standard deviation is $\sigma_t = g^{\text{Encoder}}_\sigma(x_t)$.

4 VARIATIONAL BOUNDS OF COMPRESSED PREDICTIVE INFORMATION CODING
In CPIC, since the data X are stationary, the mutual information between the input data and the compressed data for the past is equivalent to that for the future, $I(X(-T); Y(-T)) = I(X(T); Y(T))$. Therefore, the objective of CPIC can be rewritten as

$\min L = \beta I(X(T); Y(T)) - I(Y(-T); Y(T))$ .   (4)

We develop variational upper bounds on the mutual information for the compression complexity $I(X(T); Y(T))$ and lower bounds on the mutual information for the predictive information $I(Y(-T); Y(T))$.
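To make the stochastic encoder of equation 3 concrete, below is a minimal PyTorch sketch of a linear CPIC encoder: a linear mean encoder and a linear variance encoder producing a diagonal Gaussian over each $y_t$, sampled via the reparameterization trick. The module and variable names (StochasticLinearEncoder, obs_dim, latent_dim) are our own illustrative choices, not taken from the paper's code.

```python
import torch
import torch.nn as nn

class StochasticLinearEncoder(nn.Module):
    """Linear CPIC encoder: y_t | x_t ~ N(mu_t, diag(sigma_t^2)) with mu_t = u^T x_t."""

    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.mean = nn.Linear(obs_dim, latent_dim, bias=False)   # linear mean encoder
        self.log_sigma = nn.Linear(obs_dim, latent_dim)          # linear variance encoder (log std)

    def forward(self, x):
        # x: (batch, T, obs_dim) -> per-timestep Gaussian parameters
        mu = self.mean(x)
        sigma = torch.exp(self.log_sigma(x))
        # Reparameterization trick: y = mu + sigma * eps, eps ~ N(0, I)
        y = mu + sigma * torch.randn_like(sigma)
        return y, mu, sigma

# Example: encode a window of T = 4 observations of dimension N = 30 into Q = 3 latents.
enc = StochasticLinearEncoder(obs_dim=30, latent_dim=3)
x = torch.randn(8, 4, 30)          # (batch, T, N)
y, mu, sigma = enc(x)              # each of shape (batch, T, Q)
```

A nonlinear CPIC would simply replace the mean map with a small MLP while keeping the same Gaussian sampling step.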
4.1 UPPER BOUNDS OF COMPRESSION COMPLEXITY
In this section, we derive a tractable variational upper bound (VUB) depending on a single sample and a leave-one-out upper bound (L1Out) (Poole et al., 2019) depending on multiple samples.

Theorem 1 By introducing a variational approximation r(y(T)) to the marginal distribution p(y(T)), a tractable variational upper bound on the mutual information I(X(T); Y(T)) is derived as
$I_{\text{VUB}}(X(T); Y(T)) = \mathbb{E}_{X(T)}\left[\mathrm{KL}\left(p(y(T)|x(T)),\, r(y(T))\right)\right]$ .

Theorem 2 By utilizing a Monte Carlo approximation for the variational distribution r(y(T)), the L1Out upper bound on the mutual information I(X(T); Y(T)) is derived as
$I_{\text{L1Out}}(X(T); Y(T)) = \mathbb{E}\left[\frac{1}{S}\sum_{i=1}^{S} \log \frac{p(y(T)_i \mid x(T)_i)}{\frac{1}{S-1}\sum_{j \neq i} p(y(T)_i \mid x(T)_j)}\right]$ ,
where S is the sample size.

The derivation details are in Appendices B and C. In practice, the L1Out bound depends on the sample size S and may suffer from numerical instability, so we would like to choose S as large as possible. In general scenarios where p(y(T)|x(T)) is intractable, Cheng et al. (2020) proposed variational versions of VUB and L1Out that use a neural network to approximate the conditional distribution p(y(T)|x(T)). Since the conditional distribution p(y(T)|x(T)) is parameterized as a known stochastic/deterministic encoder in CPIC, those variational versions are not needed here.

4.2 LOWER BOUNDS OF PREDICTIVE INFORMATION
For the predictive information (PI), we derive lower bounds on I(Y(-T); Y(T)) using results in Agakov (2004), Alemi et al. (2016) and Poole et al. (2019). In particular, we derive tractable unnormalized Barber and Agakov (TUBA) (Barber & Agakov, 2003) lower bounds depending on a single sample and an InfoNCE lower bound (Oord et al., 2018) depending on multiple samples. All derivation details are discussed in Appendices D, E and F.

Theorem 3 A variational lower bound on the predictive information (PI) I(Y(-T); Y(T)) is
$I_{\text{VLB}}(Y(-T); Y(T)) = H(Y(T)) + \mathbb{E}_{p(y(-T), y(T))}\left[\log q(y(T)|y(-T))\right]$ ,
where q(y(T)|y(-T)) is a variational conditional distribution.

However, this lower bound requires a tractable decoder for the conditional distribution q(y(T)|y(-T)) (Alemi et al., 2016). Alternatively, we derive a TUBA lower bound (Barber & Agakov, 2003) which does not require parameterizing a decoder.

Theorem 4 By introducing a differentiable critic function f(x, y) and a baseline function a(y(T)), defined in Appendix E, the TUBA lower bound on the predictive information is derived as
$I_{\text{TUBA}}(Y(-T); Y(T)) = \mathbb{E}_{p(y(-T), y(T))}\left[\tilde{f}(y(-T), y(T))\right] - \log\left(\mathbb{E}_{p(y(-T))p(y(T))}\left[e^{\tilde{f}(y(-T), y(T))}\right]\right)$ ,
where $\tilde{f}(y(-T), y(T)) = f(y(-T), y(T)) - \log(a(y(T)))$.

Different forms of the baseline function lead to different neural estimators in the literature, such as MINE (Belghazi et al., 2018) and NWJ (Nguyen et al., 2010). On the other hand, all TUBA-based estimators have high variance due to the high variance of f(x, y). Oord et al. (2018) proposed a low-variance MI estimator based on noise-contrastive estimation called InfoNCE. Moreover, there exist other differentiable mutual information estimators, including SMILE (Song & Ermon, 2019) and the Echo noise estimator (Brekelmans et al., 2019).
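As an illustration of Theorem 2, the sketch below (a simplified, assumed implementation rather than the authors' code) computes the L1Out upper bound for a batch of S window pairs when the encoder defines a diagonal Gaussian p(y(T)|x(T)), as in CPIC. It builds the S×S matrix of log-densities log p(y_i | x_j) and applies the leave-one-out average in the denominator.

```python
import torch

def gaussian_log_prob(y, mu, sigma):
    """Log-density of a diagonal Gaussian, summed over latent and time dimensions.
    y, mu, sigma broadcast to shape (..., T, Q)."""
    var = sigma ** 2
    logp = -0.5 * (((y - mu) ** 2) / var + torch.log(2 * torch.pi * var))
    return logp.sum(dim=(-1, -2))

def l1out_upper_bound(y, mu, sigma):
    """Leave-one-out (L1Out) upper bound on I(X(T); Y(T)) from S samples.
    y[i] is drawn from p(y | x_i); mu[i], sigma[i] parameterize p(y | x_i)."""
    S = y.shape[0]
    # log_mat[i, j] = log p(y_i | x_j)
    log_mat = gaussian_log_prob(y[:, None], mu[None, :], sigma[None, :])   # (S, S)
    log_joint = torch.diagonal(log_mat)                                    # log p(y_i | x_i)
    # Leave-one-out marginal estimate: (1/(S-1)) * sum_{j != i} p(y_i | x_j), in log space.
    off_diag = log_mat.masked_fill(torch.eye(S, dtype=torch.bool), float("-inf"))
    log_marg = torch.logsumexp(off_diag, dim=1) - torch.log(torch.tensor(S - 1.0))
    return (log_joint - log_marg).mean()
```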
Theorem 5 In the CPIC setting, the InfoNCE lower bound on the predictive information is derived as
$I_{\text{InfoNCE}}(Y(-T); Y(T)) = \mathbb{E}\left[\frac{1}{S}\sum_{i=1}^{S} \log \frac{e^{f(y(-T)_i,\, y(T)_i)}}{\frac{1}{S}\sum_{j=1}^{S} e^{f(y(-T)_i,\, y(T)_j)}}\right]$   (5)

The expectation is over S independent samples from the joint distribution p(y(-T), y(T)), which, following the Markov chain in Figure 1, factorizes as $p(y(-T), y(T)) = \int p(x(-T), x(T))\, p(y(-T)|x(-T))\, p(y(T)|x(T))\, dx(-T)\, dx(T)$.

4.3 VARIATIONAL BOUNDS OF CPIC
We propose two classes of upper bounds on the CPIC objective, depending on whether the bounds use a single sample or multiple samples. According to the uni-sample and multi-sample bounds derived in Sections 4.1 and 4.2, we name the first class uni-sample upper bounds; these take the VUB upper bound on the mutual information for the complexity of data compression I(X(T); Y(T)) and TUBA as the lower bound on the predictive information in equation 14. Thus we have
$L_{\text{UNI}} = \beta\, \mathrm{KL}\left(p(y(T)|x(T)),\, r(y(T))\right) - I_{\text{TUBA}}(Y(-T); Y(T))$ .   (6)
Notice that by choosing different baseline functions, the TUBA lower bound becomes equivalent to different mutual information estimators such as MINE and NWJ. The second class is named the multi-sample upper bound, which takes advantage of the noise-contrastive estimation approach. The multi-sample upper bound is expressed as
$L_{\text{MUL}} = \beta\, I_{\text{L1Out}}(X(T); Y(T)) - I_{\text{InfoNCE}}(Y(-T); Y(T))$ .   (7)
Two main differences exist between these classes of upper bounds. First, the performance of the multi-sample upper bound depends on the batch size while the uni-sample upper bounds do not, so when computational budgets do not allow a large batch size in training, uni-sample upper bounds may be preferred. Second, the multi-sample upper bound has lower variance than the uni-sample upper bounds. Thus, they have different strengths and weaknesses depending on the context. We evaluate the performance of these variational bounds of CPIC in terms of reconstruction performance in synthetic experiments in Appendix G, and find that with a sufficiently large batch size, the multi-sample upper bound outperforms most of the uni-sample upper bounds. Thus, without further specification, we choose the multi-sample upper bound as the variational bound of the CPIC objective in this work. Furthermore, we classify the upper bounds into stochastic and deterministic versions according to whether we employ a stochastic or deterministic encoder. Notice that when choosing the deterministic encoder, the compression complexity term (first term) in equation 6 and equation 7 is constant.

5 NUMERICAL EXPERIMENTS
In this section, we demonstrate the superior performance of CPIC in both synthetic and real data experiments. We first examine the reconstruction performance of CPIC on noisy observations of a dynamical system (the Lorenz attractor). The results show that CPIC better recovers the latent trajectories from noisy high-dimensional observations. Moreover, we demonstrate that maximizing the predictive information (PI) in the compressed latent space is more effective than maximizing PI between the latent and observation spaces as in Creutzig & Sprekeler (2008) and Creutzig et al. (2009), and also demonstrate the benefits of stochastic representations over deterministic representations. Second, we demonstrate the better predictive performance of the representations, evaluated by linear forecasting. The motivation for using linear forecasting models is that good representations contribute to disentangling complex data in a linearly accessible way (Clark et al., 2019).
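Before turning to the experiments, a minimal sketch of how the multi-sample CPIC objective of equation 7 can be assembled is given below: an L1Out term penalizes compression complexity and an InfoNCE term with a separable critic rewards predictive information. It reuses the hypothetical StochasticLinearEncoder, gaussian_log_prob and l1out_upper_bound helpers sketched earlier; the critic architecture and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SeparableCritic(nn.Module):
    """f(y_past, y_future) = h(y_past)^T g(y_future) on flattened T x Q windows."""
    def __init__(self, latent_dim: int, window: int, hidden: int = 64):
        super().__init__()
        d = latent_dim * window
        self.h = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.g = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, y_past, y_future):
        hp = self.h(y_past.flatten(1))       # (S, hidden)
        gf = self.g(y_future.flatten(1))     # (S, hidden)
        return hp @ gf.T                     # scores[i, j] = f(y_past_i, y_future_j)

def infonce_lower_bound(scores):
    """InfoNCE bound of equation 5 from an S x S critic score matrix."""
    S = scores.shape[0]
    log_ratio = torch.diagonal(scores) - torch.logsumexp(scores, dim=1) + torch.log(torch.tensor(float(S)))
    return log_ratio.mean()

def cpic_multi_sample_loss(encoder, critic, x_past, x_future, beta=0.1):
    """L_MUL = beta * I_L1Out(X(T); Y(T)) - I_InfoNCE(Y(-T); Y(T))."""
    y_past, _, _ = encoder(x_past)
    y_future, mu_f, sigma_f = encoder(x_future)
    compression = l1out_upper_bound(y_future, mu_f, sigma_f)    # upper-bounds I(X(T); Y(T))
    predictive = infonce_lower_bound(critic(y_past, y_future))  # lower-bounds I(Y(-T); Y(T))
    return beta * compression - predictive
```

Minimizing this loss over batches of consecutive past/future windows then realizes the stochastic, multi-sample variant of CPIC used in the experiments, under the stated assumptions.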
Specifically, we extract latent representations and then conduct forecasting tasks given the inferred representations on two neuroscience datasets and two other real datasets. The two neuroscience datasets are multi-neuronal recordings from the rat dorsal hippocampus (HC) while rats navigate a maze (Glaser et al., 2020) and multi-neuronal recordings from the monkey primary motor cortex (M1) during a reaching task (O'Doherty et al., 2017). The two other real datasets are multi-city temperature data (TEMP) from 30 cities over several years (Gene, 2017) and 12 variables from an accelerometer, gyroscope, and gravity motion sensor (MS) recording human kinematics (Malekzadeh et al., 2018). The forecasting task for the neuroscience datasets is to predict the future of the relevant exogenous variables from the past neural data, while the forecasting task for the other datasets is to predict the future of those time series from their past. The results illustrate that CPIC has better predictive performance on these forecasting tasks compared with existing methods.

5.1 SYNTHETIC EXPERIMENT WITH NOISY LORENZ ATTRACTOR
The Lorenz attractor is a 3D time series consisting of realizations of the Lorenz dynamical system (Pchelintsev, 2014). It describes a three-dimensional flow generated as

$\frac{dx}{dt} = \sigma(y - x), \quad \frac{dy}{dt} = x(\rho - z) - y, \quad \frac{dz}{dt} = xy - \gamma z$ .   (8)

Lorenz set the values σ = 10, ρ = 28 and γ = 8/3 to exhibit chaotic behavior, as done in recent works (She & Wu, 2020; Clark et al., 2019; Zhao & Park, 2017; Linderman et al., 2017). We simulated trajectories from the Lorenz dynamical system and show them in the left-top panel of Figure 2. We then mapped the 3D latent signals to 30D lifted observations with a random linear embedding (left-middle panel) and added spatially anisotropic Gaussian noise to the 30D lifted observations (left-bottom panel). The noise is generated according to different signal-to-noise ratios (SNRs), where the SNR is defined as the ratio of the variances of the first principal components of the dynamics and the noise, as in Clark et al. (2019). Specifically, we utilized 10 different SNR levels spaced evenly on a log (base 10) scale between [-3, -1] and corrupted the 30D lifted observations with noise corresponding to the different SNR levels. Details of the simulation are available in Appendix G. Finally, we deploy different variants of CPIC to recover the true 3D dynamics from the corrupted 30D lifted observations at the different SNR levels, and compare the accuracy of recovering the underlying Lorenz attractor time series. Because the latent trajectories are only identified up to a reparameterization, we aligned the inferred latent trajectory with the true 3D dynamics via an optimal linear mapping. We validated the reconstruction performance based on the R² regression score of the extracted vs. true trajectories. We first compared the reconstruction performance of the different variational bounds of CPIC with latent dimension Q = 3 and time window size T = 4, and found that the multi-sample upper bound outperforms the uni-sample upper bounds for almost all of the 10 SNR levels. Thus, we recommend the multi-sample upper bound for CPIC in practice and use it for further results. We also find that, compared to DCA (Clark et al., 2019) and CPC (Oord et al., 2018), CPIC is more robust to noise and thus better extracts the true latent trajectory from the noisy high-dimensional observations.
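A minimal sketch of how such a noisy Lorenz benchmark can be generated is shown below, assuming standard numerical integration with scipy and a random orthonormal lifting; the integration step, duration, and isotropic noise are illustrative simplifications of the spatially structured noise described in Appendix G.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, gamma=8.0 / 3.0):
    # Equation 8: the chaotic Lorenz flow.
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - gamma * z]

# Integrate the 3D dynamics.
t_eval = np.arange(0, 50, 0.01)
sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], t_eval=t_eval)
latent = sol.y.T                                   # (T, 3) true dynamics

# Lift to 30D with a random semi-orthogonal embedding.
rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.standard_normal((30, 3)))  # (30, 3) with orthonormal columns
lifted = latent @ V.T                              # (T, 30)

# Corrupt at a target SNR: variance of the first PC of the dynamics divided by the
# noise variance. SNR = 0.01 is one of the intermediate levels used in the paper.
snr = 0.01
dyn_var = np.linalg.eigvalsh(np.cov(latent.T)).max()
noisy = lifted + rng.standard_normal(lifted.shape) * np.sqrt(dyn_var / snr)
```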
The detailed results are reported in Appendix H. In order to demonstrate the benefits of introducing stochasticity in the encoder and maximizing the predictive information in latent space, we considered four variants of CPIC: with a stochastic or deterministic encoder, and with predictive information in the latent space or between the latent and observation spaces. All four CPIC variants use latent dimension Q = 3 and time window size T = 4. For each model and each SNR level, we ran 100 replicates with random initializations. We show the aligned latent trajectories inferred from the corrupted lifted observations at low, intermediate and high SNR levels of noise (0.001, 0.01, 0.1) with the median R² scores across 100 replicates in Figure 2. The point-wise distances between the recovered dynamics and the ground-truth dynamics are encoded in the colors from blue to red, corresponding to short to long distances. For high SNR (SNR = 0.1, top-right), all models did a good job of recovering the Lorenz dynamics, though the stochastic CPIC with predictive information in latent space had a larger R² than the others. For intermediate SNR (SNR = 0.008, middle-right), we see that the stochastic CPICs perform much better than the deterministic CPICs. Finally, as the SNR gets lower (SNR = 0.001, bottom-right), all methods perform poorly, but we note that, numerically, considering predictive information in the latent space is much better than between the latent and observation spaces. To more thoroughly characterize the benefits of stochastic encoding and PI in the latent space, we examined the mean R² scores of the four variants at each SNR level across N = 10 and N = 100 replicates in the top row of Figure 3. It shows that CPIC with stochastic representations and PI in the latent space robustly outperforms the other variants on average. We also report the best R² scores for the four variants, in the sense that we report the R² score of the model with the smallest training loss across the N runs. The bottom row of Figure 3 shows that CPIC with stochastic representations and PI in the latent space achieves better reconstruction and robustness to noise than the other variants, especially when the number of runs N is small. Even when N is large, stochastic CPIC with PI in the latent space greatly outperforms the others when the noise level is high. We note that in the case of high-dimensional noisy observations with large numbers of samples, common in many modern real-world time series datasets, CPIC's robustness to noise and its capacity to achieve good results in a small number of runs are clear advantages. Moreover, we display the quantile analysis of the R² scores in Appendix I, with consistent results.

5.2 REAL EXPERIMENTS WITH DIVERSE FORECASTING TASKS
In this section, we show that latent representations extracted by stochastic CPIC perform better in downstream forecasting tasks on four real datasets. We compared stochastic CPIC with contrastive predictive coding (CPC) (Oord et al., 2018), PCA, SFA (Wiskott & Sejnowski, 2002), DCA (Clark et al., 2019) and deterministic CPIC. For CPC, we use a linear encoder for a fair comparison. In addition, we compared the results from CPCs and CPICs with a nonlinear encoder in which the linear mean encoder is replaced by a multi-layer perceptron. For each model, we extract the latent representations (the conditional mean) and conduct prediction tasks on the relevant exogenous variable at a future time step for the neural datasets.
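The reconstruction metric used above, aligning the inferred latents to the ground-truth dynamics with an optimal linear map and scoring with R², can be computed with scikit-learn as in the sketch below; the function name and the random stand-in data are our own illustration of this evaluation, not code from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def aligned_r2(latent_inferred, latent_true):
    """Align inferred latents (T, Q) to true dynamics (T, 3) with an optimal
    linear mapping, then report the R^2 of the aligned trajectory."""
    reg = LinearRegression().fit(latent_inferred, latent_true)
    aligned = reg.predict(latent_inferred)
    return r2_score(latent_true, aligned)

# Example with random stand-ins for the inferred and true trajectories.
rng = np.random.default_rng(0)
true = rng.standard_normal((5000, 3))
inferred = true @ rng.standard_normal((3, 3)) + 0.1 * rng.standard_normal((5000, 3))
print(aligned_r2(inferred, true))
```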
For example, for the M1 dataset, we extract a consecutive 3-length window representation of multi-neuronal spiking activity to predict the monkey's arm position at a future time step that is a specified lag of time stamps away. The details of the experiments are available in Appendix J. Neuroscientists often want to interpret latent representations of data to gain insight into the processes that generate the observed data. Thus, we used linear regression¹ to predict the exogenous variables, with the intuition that a simple (i.e., linear) prediction model will only be sensitive to the structure in the data that is easiest to interpret, as in (Yu et al., 2008; Pandarinath et al., 2018; Clark et al., 2019). Furthermore, the neuroscience datasets (M1 and HC) present extremely challenging settings for prediction of the exogenous variables due to severe experimental undersampling of neurons caused by technical limitations, as well as sizeable noise magnitudes. For these tasks, the R² regression score is used as the evaluation metric to measure forecasting performance. The four datasets are split 4:1 into train and test data, and the forecasting task considers three different lag values (5, 10, and 15). For DCA and the deterministic/stochastic CPICs, we took three different window sizes T = 1, 2, 3 and report the best R² scores. Table 1 reports all R² scores and demonstrates that our stochastic CPIC outperforms all other models except on the Temp data with forecasting at lag 15.

6 CONCLUDING REMARKS
We developed a novel information-theoretic framework, Compressed Predictive Information Coding, to extract representations from sequential data. CPIC balances the maximization of the predictive information in the latent space with the minimization of the compression complexity of the latent representation. We leveraged stochastic representations by employing a stochastic encoder and developed variational bounds of the CPIC objective function. We demonstrated that CPIC extracts more accurate low-dimensional latent dynamics and more useful representations with better forecasting performance in diverse downstream tasks on four real-world datasets. Together, these results indicate that CPIC will yield similar improvements in other real-world scenarios. Moreover, we note that on most real datasets, the nonlinear CPIC leads to better representations in terms of prediction performance than the linear CPIC.

¹https://scikit-learn.org/stable/modules/linear_model.html

A SELECTION OF WINDOW SIZE
Selecting an optimal window size T is important for the downstream use of the dynamics. A poor selection of T may cause aliasing artifacts. In general, we need to select it by cross-validation. Furthermore, we can make plots of the predictive information as a function of both the window size T and the embedding dimension Q as diagnostic tools.

B DERIVATION OF I_VUB
Directly estimating the compression complexity is intractable, because $I(X(T); Y(T)) := \mathbb{E}_{X(T)}\left[\mathrm{KL}\left(p(y(T)|x(T)),\, p(y(T))\right)\right]$, in which the population distribution p(y(T)) is unknown. Thus we introduce a variational approximation to the marginal distribution of the encoded inputs p(y(T)), denoted r(y(T)). Due to the non-negativity of the Kullback-Leibler (KL) divergence, the variational upper bound (VUB) is derived as

$I(X(T); Y(T)) = \mathbb{E}_{X(T)}\left[\mathrm{KL}\left(p(y(T)|x(T)),\, r(y(T))\right)\right] - \mathrm{KL}\left(p(y(T)),\, r(y(T))\right) \le \mathbb{E}_{X(T)}\left[\mathrm{KL}\left(p(y(T)|x(T)),\, r(y(T))\right)\right] = I_{\text{VUB}}(X(T); Y(T))$ .   (9)
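When the encoder is the diagonal Gaussian of equation 3 and the variational marginal r(y(T)) is fixed to a standard normal (one common choice, discussed in Appendix C), the KL term inside the VUB has a closed form. The sketch below shows this computation; the standard-normal choice of r is an assumption for illustration rather than the paper's prescription.

```python
import torch

def kl_to_standard_normal(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent and time dimensions.
    mu, sigma: (batch, T, Q). This is the per-window KL inside I_VUB when
    r(y(T)) is chosen as a standard normal."""
    kl = 0.5 * (mu ** 2 + sigma ** 2 - 2.0 * torch.log(sigma) - 1.0)
    return kl.sum(dim=(-1, -2))

# The VUB of Theorem 1 is then the average of this KL over windows x(T):
# I_VUB = E_{X(T)}[ KL(p(y(T)|x(T)), r(y(T))) ] ≈ kl_to_standard_normal(mu, sigma).mean()
```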
C DERIVATION OF I_L1Out
In general, learning r(y(T)) is recognized as a density estimation problem (Silverman, 2018), which is challenging. In this setting, the variational distribution r(y(T)) is assumed to be learnable, and thus estimating the variational upper bound is tractable. In particular, Alemi et al. (2016) fixed r(y(T)) as a standard normal distribution, leading to high bias in MI estimation. Recently, Poole et al. (2019) utilized a Monte Carlo approximation for the variational distribution. In our case, with S sample pairs $(x(T)_i, y(T)_i)_{i=1}^{S}$, $r_i(y(T)) = \frac{1}{S-1}\sum_{j \neq i} p(y(T)|x(T)_j) \approx p(y(T))$, and the L1Out bound is derived as

$I_{\text{L1Out}}(X(T); Y(T)) = \mathbb{E}\left[\frac{1}{S}\sum_{i=1}^{S} \log \frac{p(y(T)_i \mid x(T)_i)}{\frac{1}{S-1}\sum_{j \neq i} p(y(T)_i \mid x(T)_j)}\right]$ .   (10)

D DERIVATION OF I_VLB
Similar to Agakov (2004), we replace the intractable conditional distribution p(y(T)|y(-T)) with a tractable optimization problem over a variational conditional distribution q(y(T)|y(-T)). This yields a lower bound on the PI due to the non-negativity of the KL divergence:

$I(Y(-T); Y(T)) \ge H(Y(T)) + \mathbb{E}_{p(y(-T), y(T))}\left[\log q(y(T)|y(-T))\right]$   (11)

where H(Y) is the differential entropy of the variable Y. This bound is tight if and only if q(y(T)|y(-T)) = p(y(T)|y(-T)), in which case the second term in equation 11 equals the negative conditional entropy −H(Y(T)|Y(-T)). However, this variational lower bound requires a tractable decoder for the conditional distribution q(y(T)|y(-T)); alternatively, one can consider an energy-based variational family for the conditional distribution, as in Appendix E. The expectation in equation 11 can be estimated using Monte Carlo sampling based on the encoded data distribution p(y(-T), y(T)). Encoded data are sampled by introducing the augmented data x(-T) and x(T) and marginalizing them out as

$p(y(-T), y(T)) = \int p(x(-T), x(T))\, p(y(-T)|x(-T))\, p(y(T)|x(T))\, dx(-T)\, dx(T)$   (12)

according to the Markov chain proposed in Figure 1.

E DERIVATION OF I_TUBA
Following Poole et al. (2019), we consider an energy-based variational family to express the conditional distribution q(y(T)|y(-T)):

$q(y(T)|y(-T)) = \frac{p(y(T))\, e^{f(y(T), y(-T))}}{Z(y(-T))}$   (13)

where f(x, y) is a differentiable critic function and $Z(y(-T)) = \mathbb{E}_{p(y(T))}\left[e^{f(y(T), y(-T))}\right]$ is a partition function. Introducing a baseline function a(y(T)), we derive a tractable TUBA lower bound (Barber & Agakov, 2003) on the predictive information:

$I(Y(-T); Y(T)) \ge \mathbb{E}_{p(y(-T), y(T))}\left[\tilde{f}(y(-T), y(T))\right] - \log\left(\mathbb{E}_{p(y(-T))p(y(T))}\left[e^{\tilde{f}(y(-T), y(T))}\right]\right) = I_{\text{TUBA}}(Y(-T); Y(T))$   (14)

where $\tilde{f}(y(-T), y(T)) = f(y(-T), y(T)) - \log(a(y(T)))$ is treated as an updated critic function. Notice that different choices of the baseline function lead to different mutual information estimators. When a(y(T)) = 1, it leads to the mutual information neural estimator (MINE) (Belghazi et al., 2018); when a(y(T)) = Z(y(T)), it leads to the lower bound proposed in Donsker & Varadhan (1975) (DV); and when a(y(T)) = e, it recovers the lower bound in Nguyen et al. (2010) (NWJ), also known as f-GAN (Nowozin et al., 2016) and MINE-f (Belghazi et al., 2018). In general, the critic function f(x, y) and the log-baseline function a(y) are parameterized by neural networks (Oord et al., 2018; Belghazi et al., 2018): Oord et al. (2018) used a separable critic function $f(x, y) = h_\theta(x)^T g_\theta(y)$, while Belghazi et al. (2018) used a joint critic function $f(x, y) = f_\theta(x, y)$, and Poole et al.
(2019) claimed that joint critic functions generally perform better than separable critic functions but scale poorly with batch size.

F DERIVATION OF I_InfoNCE
The derivation of InfoNCE in our CPIC setting follows directly by treating Y(-T) and Y(T) as the input and output in the InfoNCE formula of the CPC setting (Oord et al., 2018).

G DETAILS OF SIMULATION
In this section, we first generated the 3D latent signals according to the Lorenz dynamical system (equation 8), denoted as $X \in \mathbb{R}^{3 \times T}$. We calculated the largest eigenvalue of the covariance matrix of X as the dynamics variance, denoted $\sigma^2_{\text{dynamics}}$, and the noise variance is $\sigma^2_{\text{noise}} = \sigma^2_{\text{dynamics}} / \text{SNR}$, where SNR is the signal-to-noise ratio. We then randomly generated a semi-orthogonal matrix $V \in \mathbb{R}^{30 \times 3}$ and generated the true 30D signal VX, embedded with additive spatially structured white noise, where the noise subspace $V_{\text{noise}}$ is generated with median principal angles with respect to the dynamics subspace V. The noise covariance $\Sigma_{\text{noise}}$ is generated with largest eigenvalue $\sigma^2_{\text{noise}}$, and the noisy signal in the nth dimension is generated as $[Y_{\text{noisy}}]_n \sim \mathcal{N}(v_n^T X, \Sigma_{\text{noise}})$, $n = 1, \ldots, 30$.

H MODEL COMPARISON IN TERMS OF R² REGRESSION SCORE IN THE NOISY LORENZ ATTRACTOR EXPERIMENT
In this section, the R² regression scores for CPC, DCA, and the deterministic & stochastic CPICs (three uni-sample upper bounds in terms of NWJ, MINE and TUBA, and one multi-sample upper bound) for all ten SNR levels are reported in Table 2. It shows that stochastic CPIC with the multi-sample upper bound outperforms the other approaches for the majority of SNRs. It also shows that CPIC is the most robust to noisy data and thus recovers the best latent trajectories from noisy observations, compared with CPC and DCA. We also show the aligned latent trajectories inferred from the corrupted lifted observations at low, intermediate and high SNR levels of noise (0.001, 0.01, 0.1), with the median R² scores across 100 replicates, for PCA and DCA (as an extension of Figure 2) in Figure 4. The point-wise distances between the recovered dynamics and the ground-truth dynamics are encoded in the colors from blue to red, corresponding to short to long distances. It shows that stochastic CPIC outperforms both PCA and DCA.

I COMPARISON ON R² SCORES OF LATENT DYNAMICS REGRESSION FOR NOISY LORENZ ATTRACTOR IN TERMS OF QUANTILE ANALYSIS
We display the median performance (with the inter-quartile range as error bars) of the R² scores of the latent dynamics regression for the noisy Lorenz attractor in Figure 5.

J DETAILS OF REAL-WORLD EXPERIMENTS
The four real datasets are the monkey motor cortical dataset (M1), the rat hippocampal data (HC), the temperature dataset (Temp) and the accelerometer dataset (MS).

J.1 MONKEY MOTOR CORTICAL DATASET
O'Doherty et al. (2017) released multi-electrode spiking data for both M1 and S1 for two monkeys during a continuous grid-based reaching task. We used M1 data from the subject "Indy" (specifically, we used the file "indy 20160627 01.mat"). We discarded single units with fewer than 5,000 spikes, leaving 109 units. We binned the spikes into non-overlapping bins, square-root transformed the data, and mean-centered the data using a sliding window 30 s in width.

J.2 RAT HIPPOCAMPAL DATA
Glaser et al. (2020) released the original data. The data consist of 93 minutes of extracellular recordings from layer CA1 of dorsal hippocampus while a rat chased rewards on a square platform. We discarded single units with fewer than 10 spikes, leaving 55 units.
We binned the spikes into non-overlapping 50 ms bins, then square-root transformed the data.

J.3 TEMPERATURE DATASET
The temperature dataset consists of hourly temperature data for 30 U.S. cities over a period of 7 years from OpenWeatherMap.org. We downsampled the data by a factor of 24 to obtain daily temperatures.

J.4 ACCELEROMETER DATASET
Malekzadeh et al. (2018) released accelerometer data which records roll, pitch, yaw, gravity x, y, z, rotation x, y, z and acceleration x, y, z, for a total of 12 kinematic variables. The sampling rate is 50 Hz. We used the file "sub 19.csv" from "A DeviceMotion data.zip".

J.5 FORECASTING TASK
The forecasting task is the same as in Clark et al. (2019). We use the extracted consecutive 3-length window representation of the endogenous data to forecast the future of the relevant exogenous variables at lag n. In M1 and HC, the endogenous variables are the processed spiking data and the exogenous variables are the location data. In Temp and MS, we assume the endogenous and exogenous variables are the same: the 30 U.S. cities' temperatures for the Temp data and the 12 kinematic variables for the MS data.
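A minimal sketch of the forecasting protocol of Appendix J.5 and Section 5.2, concatenating a 3-length window of extracted representations and fitting a linear regression to the exogenous variable a fixed lag ahead, is given below; the helper name, the 4:1 split, and the stand-in arrays are illustrative of the described setup rather than copied from the paper's code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def lagged_forecast_r2(latents, targets, window=3, lag=5, train_frac=0.8):
    """Predict targets[t + lag] from latents[t - window + 1 : t + 1] with linear regression."""
    X, Y = [], []
    for t in range(window - 1, len(latents) - lag):
        X.append(latents[t - window + 1:t + 1].ravel())   # 3-length window of representations
        Y.append(targets[t + lag])                         # exogenous variable `lag` steps ahead
    X, Y = np.asarray(X), np.asarray(Y)
    split = int(train_frac * len(X))                       # 4:1 train/test split in time
    reg = LinearRegression().fit(X[:split], Y[:split])
    return r2_score(Y[split:], reg.predict(X[split:]))

# Example with stand-in arrays: 3D representations predicting 2D hand position at lag 5.
rng = np.random.default_rng(0)
z = rng.standard_normal((2000, 3))
pos = np.cumsum(rng.standard_normal((2000, 2)), axis=0)
print(lagged_forecast_r2(z, pos, window=3, lag=5))
```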
1. What is the focus and contribution of the paper regarding time series forecasting?
2. What are the strengths and weaknesses of the proposed approach, particularly in its formulation and computational bounds?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's notation, terminology, and experimental setup?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a way to optimize a time series forecasting model through an adapted information bottleneck loss. In particular, it is proposed to learn low-dimensional representations of the input time series (using variational encoders) by maximizing the mutual information between the representations of the input and the target, and minimizing the mutual information between the input and its representation. The approach is termed Compressed Predictive Information Coding (CPIC), and the authors propose two different computable bounds, based on existing estimators of mutual information, that can be used to train the models. Further, the authors explore the empirical performance of the proposed bounds, first on artificially generated data with clear structure, to check the ability of the representations learned with CPIC to reconstruct the data. The second experimental evaluation is done on real data, where forecasting is performed using linear regression on the learned representations.

Strengths And Weaknesses
The paper proposes an information-bottleneck type of loss for a setup different from the usual one. Time series forecasting is a complicated task that is an active research area, and novel approaches are always interesting. The paper uses existing estimators and bounds on mutual information, thus the main novelty of the paper is the formulation of the information bottleneck loss and finding methods that allow computing it. The evaluation shows that the proposed method performs better than two existing approaches, though still with not very high performance.

Clarity, Quality, Novelty And Reproducibility
The writing of the paper has some drawbacks that make it harder to follow the ideas. Some examples are the following:
- use \citep when citations are inside of the sentence
- sequential data is not always dynamic and not always a time series (using "i.e." in this case is not justified)
- some terms are used without introduction, for example "low-level information" and "compression complexity"
- in Section 3 it would be nice to specify what activation functions are used in the encoders
- the notation at the end of page 4 is not very nice: usually p(X(T), Y(T)) would denote the probability of having this tuple, not the probability distribution, which is what is meant there
- typo in the paragraph after Theorem 3: q(y(T)|y(-T)), not q(y|x)
- in Section 4.3 it is an upper bound on CPIC, not a lower bound
- in Section 4.3 it was unclear why there is a one-sample bound and a multiple-samples bound; it should be explained more precisely
- in Section 5.2 it is mentioned that "predictions are conducted by linear regression to emphasize the structure learned by the unsupervised methods"; this is a confusing formulation and the reason to use linear regression should be described in more detail

The novelty of the paper is in formulating an information bottleneck loss for the time series forecasting task. The code for the experiments is not provided, so reproducibility might be hard.
ICLR
Title Compressed Predictive Information Coding Abstract Unsupervised learning plays an important role in many fields, such as machine learning, data compression, and neuroscience. Compared to static data, methods for extracting low-dimensional structure for dynamic data are lagging. We developed a novel information-theoretic framework, Compressed Predictive Information Coding (CPIC), to extract predictive latent representations from dynamic data. Predictive information quantifies the ability to predict the future of a time series from its past. CPIC selectively projects the past (input) into a low dimensional space that is predictive about the compressed data projected from the future (output). The key insight of our framework is to learn representations by balancing the minimization of compression complexity with maximization of the predictive information in the latent space. We derive tractable variational bounds of the CPIC loss by leveraging bounds on mutual information. The CPIC loss induces the latent space to capture information that is maximally predictive of the future of the data from the past. We demonstrate that introducing stochasticity in the encoder and maximizing the predictive information in latent space contributes to learning more robust latent representations. Furthermore, our variational approaches perform better in mutual information estimation compared with estimates under the Gaussian assumption commonly used. We show numerically in synthetic data that CPIC can recover dynamical systems embedded in noisy observation data with low signal-to-noise ratio. Finally, we demonstrate that CPIC extracts features more predictive of forecasting exogenous variables as well as auto-forecasting in various real datasets compared with other state-of-the-art representation learning models. Together, these results indicate that CPIC will be broadly useful for extracting low-dimensional dynamic structure from high-dimensional, noisy timeseries data. 1 INTRODUCTION Unsupervised methods play an important role in learning representations that provide insight into data and exploit unlabeled data to improve performance in downstream tasks in diverse application areas Bengio et al. (2013); Chen et al. (2020); Grill et al. (2020); Devlin et al. (2018); Brown et al. (2020); Baevski et al. (2020); Wang et al. (2020). Prior work on unsupervised representation learning can be broadly categorized into generative models such as variational autoencoders(VAEs) (Kingma & Welling, 2013) and generative adversarial networks (GAN) (Goodfellow et al., 2014), discriminative models such as dynamical components analysis (DCA) (Clark et al., 2019), contrastive predictive coding (CPC) (Oord et al., 2018), and deep autoencoding predictive components (DAPC) (Bai et al., 2020). Generative models focus on capturing the joint distribution between representations and inputs, but are usually computationally expensive. On the other hand, discriminative models emphasize capturing the dependence of data structure in the low-dimensional latent space, and are therefore easier to scale to large datasets. In the case of time series, some representation learning models take advantage of an estimate of mutual information between encoded past (input) and the future (output) (Creutzig & Sprekeler, 2008; Creutzig et al., 2009; Oord et al., 2018). Although previous models utilizing mutual information extract low-dimensional representations, they tend to be sensitive to noise in the observational space. 
DCA directly makes use of the mutual information between the past and the future (i.e., the predictive information (Bialek et al., 2001)) in a latent representational space that is a linear embedding of the observation data. However, DCA operates under Gaussian assumptions for mutual information estimation. We propose a novel representation learning framework which is not only robust to noise in the observation space but also alleviates the Gaussian assumption and is thus more flexible. We formalize our problem in terms of data generated from a stationary dynamical system and propose an information-theoretic objective function for Compressed Predictive Information Coding (CPIC). Instead of leveraging the information bottleneck (IB) objective directly as in Creutzig & Sprekeler (2008) and Creutzig et al. (2009), where the past latent representation is directly used to predict future observations, we predict the compressed future observations filtered by the encoder. It is because that in the time series setting, future observations are noisy, and treating them as labels is not insightful. Specifically, our target is to extract latent representation which can better predict future underlying dynamics. Since the compressed future observations are assumed to only retain the underlying dynamics, better compression thus contributes to extracting better dynamical representation. In addition, inspired by Clark et al. (2019) and Bai et al. (2020), we extend the prediction from single input to a window of inputs to handle high order predictive information. Moreover, instead of directly estimating the objective information with Gaussian assumption (Creutzig & Sprekeler, 2008; Creutzig et al., 2009; Clark et al., 2019; Bai et al., 2020), we developed variational bounds and a tractable end-to-end training framework based on the neural estimator of mutual information studied in Poole et al. (2019). Note that our inference first leverages the variational boundary technique for self-supervised learning on the time series data. Since it alleviates the Gaussian assumption, it is applicable to a much larger class of dynamical systems. In CPIC, we also demonstrate that introducing stochasticity into either a linear or nonlinear encoder robustly contributes to numerically better representations in different tasks. In particular, we illustrate that CPIC can recover trajectories of a chaotic dynamical system embedded in highdimensional noisy observations with low signal-to-noise ratios in synthetic data. Furthermore, we conduct numerical experiments on four real-world datasets with different goals. In two neuroscience datasets, monkey motor cortex (M1) and rat dorsal hippocampus (HC), compared with the state-ofthe-art methods, we show that the latent representations extracted from CPIC have better forecasting accuracy for the exogenous variables of the monkey’s future hand position for M1, and for the rat’s future position for HC. In two other real datasets, historical hourly weather temperature data (TEMP) and motion sensor data (MS), we show that latent representations extracted by CPIC have better forecasting accuracy of the future of those time series than other methods. In summary, the primary contributions of our paper are as follows: • We developed a novel information-theoretic self-supervised learning framework, Compressed Predictive Information Coding (CPIC), which extracts low-dimensional latent representation from time series. 
CPIC maximizes the predictive information in the latent space while minimizing the compression complexity. • We introduced the stochastic encoder structure where we encode inputs into stochastic representations to handle uncertainty and contribute to better representations. • Based on prior works, we derived the variational bounds of the CPIC’s objective function and a tractable, end-to-end training procedure. Since our inference alleviates the Gaussian assumption common to other methods, it is applicable to a much larger class of dynamical systems. Moreover, to the best of our knowledge, our inference is the first to leverage the variational boundary technique for self-supervised learning on time series data. • We demonstrated that, compared with the other unsupervised based methods, CPIC more robustly recovers latent dynamics in dynamical system with low signal-to-noise ratio in synthetic experiments, and extracts more predictive features for downstream tasks in various real datasets. 2 RELATED WORK Mutual information (MI) plays an important role in estimating the relationship between pairs of variables. It is a reparameterization-invariant measure of dependency: I(X,Y ) = Ep(x,y) [ log p(x|y) p(x) ] (1) It is used in computational neuroscience (Dimitrov et al., 2011), visual representation learning (Chen et al., 2020), natural language processing (Oord et al., 2018) and bioinformatics (Lachmann et al., 2016). In representation learning, the mutual information between inputs and representations is used to quantify the quality of the representation and is also closely related to reconstruction error in generative models (Kingma & Welling, 2013; Makhzani et al., 2015). Estimating mutual information is computationally and statistically challenging except in two cases: discrete data, as in Tishby et al. (2000) and Gaussian data, as in Chechik et al. (2005). However, these assumptions both severely constrain the class of learnable models (Alemi et al., 2016). Recent works leverage deep learning models to obtain both differentiable and scalable MI estimation (Belghazi et al., 2018; Nguyen et al., 2010; Oord et al., 2018; Alemi et al., 2016; Poole et al., 2019; Cheng et al., 2020). In terms of representation learning in time series, Wiskott & Sejnowski (2002); Turner & Sahani (2007) targeted slowly varying features, Creutzig & Sprekeler (2008) utilized the information bottleneck (IB) method (Tishby et al., 2000) and developed an information-theoretic objective function. Creutzig et al. (2009) proposed an alternative objective function based on a specific state-space model. Recently, Oord et al. (2018) proposed CPC to extract dynamic information based on an autoregressive model on representations and contrastive loss on predictions. Clark et al. (2019); Bai et al. (2020) proposed unsupervised learning approach to extract low-dimensional representation with maximal predictive information(PI). All of the above unsupervised representation learning models, except for CPC, assume the data to be Gaussian, which may be not realistic, especially when applied to neuroscience datasets (O’Doherty et al., 2017; Glaser et al., 2020), given the nonGaussianity of neuronal activity. Here, we leverage recently introduced neural estimation of mutual information to construct upper bounds of the CPIC objective and develop an end-to-end training procedure. CPIC enables generalization beyond the Gaussian case and autoregressive models. 
Recently, deep encoder networks are leveraged to model nonlinear relations between latent representations and observed data in time series (Chen et al., 2020; Bai et al., 2020; He et al., 2020). However, use of complicated nonlinear encoders induced hinders computational efficiency (Wang et al., 2019). CPIC proposes an efficient representation learning framework for time series that encodes data with maximal predictive information. We also note that there exists several works on the time series modeling from generative modeling perspective. Initially, Fabius & Van Amersfoort (2014) leveraged the recurrent neural network with variational autoencoder to model time series data. Frigola et al. (2014) proposed variational Gaussian-process state-space model. Meng et al. (2021) proposed variational structured Gaussian-process regression network which can efficiently handle more complicated relationships in time series. Most generative modeling inference would depend on the length of time series, while the inference of CPIC depends on the window size T , which is more scalable for long time series. 3 COMPRESSED PREDICTIVE INFORMATION CODING The main intuition behind Compressed Predictive Information Coding (CPIC) is to extract low dimensional representations with minimal compression complexity and maximal dynamical structure. Specifically, CPIC first discards low-level information that is not relevant for dynamic prediction and noise that is more local by minimizing compression complexity (i.e., mutual information) between inputs and representations to improve model generalization. Second, CPIC maximizes the predictive information in the latent space of compressed representations. Compared with Clark et al. (2019); Bai et al. (2020), CPIC first utilizes stochastic encoder to handle uncertainty of representations, which contributes to more robust representations, and also relieves the Gaussian assumption by constructing bounds of mutual information based on neural estimations. In more detail, instead of employing a deterministic linear mapping function as the encoder to compress data as in Clark et al. (2019), CPIC takes advantage of a stochastic linear or nonlinear mapping function. Given inputs, the stochastic representation follows Gaussian distributions, with means and variances encoded from any neural network structure. A nonlinear CPIC utilizes a stochastic nonlinear encoder which is composed of a nonlinear mean encoder and a linear variance encoder, while a linear CPIC utilizes a stochastic linear encoder which is composed of a linear mean encoder and a linear variance encoder. Note that stochastic representations conditioned on inputs are parameterized as a conditional Gaussian distribution, but the marginal distribution of the representation is a mixture of Gaussian distribution, which is widely recognized as universal approximator of densities. On the other hand, avoiding the Gaussian assumption on mutual information (Creutzig & Sprekeler, 2008; Creutzig et al., 2009; Clark et al., 2019; Bai et al., 2020), CPIC leverages neural estimations of mutual information. Specifically, we propose differentiable and scalable bounds of the CPIC objective via variational inference, which enables end-to-end training. Formally, let X = {xt}, xt ∈ RN be a stationary, discrete time series, and let Xpast = (x−T+1, . . . , x0) and Xfuture = (x1, . . . , xT ) denote consecutive past and future windows of length T. 
Then both past and future data are compressed into past and future representations denoted as Y_past = (y_{-T+1}, . . . , y_0) and Y_future = (y_1, . . . , y_T) with embedding dimension Q. Similar to the information bottleneck (IB) (Tishby et al., 2000), the CPIC objective contains a trade-off between two factors: the first seeks to minimize the compression complexity and the second to maximize the predictive information in the latent (representation) space. Note that when the encoder is deterministic the compression complexity term is omitted, and when the encoder is stochastic the complexity is measured by the mutual information between representations and inputs. In the CPIC objective, the trade-off weight β > 0 dictates the balance between the compression and predictive information terms: min_ψ L, where L ≡ β(I(X_past; Y_past) + I(X_future; Y_future)) − I(Y_past; Y_future) (2) where ψ refers to the model parameters which encode inputs X into latent variables Y. A larger β promotes a more compact mapping and thus benefits model generalization, while a smaller β leads to more predictive information in the latent space on training data. This objective function is visualized in Figure 1, where inputs X are encoded into the latent space as Y via tractable encoders and the dynamics of Y are learned in a model-free manner. The encoder p(Y|X) could be implemented by fitting deep neural networks (Alemi et al., 2016) to encode data X. Instead, CPIC takes an approach similar to VAEs (Kingma & Welling, 2013), in that it encodes data into stochastic representations. In particular, CPIC employs a stochastic encoder (g_enc in Figure 1) to compress input x_t into y_t as y_t | x_t ∼ N(µ_t, diag(σ_t²)) (3) for each time stamp t. The mean of y_t is given by µ_t = g_µ^Encoder(x_t), whereas the variance arises from σ_t = g_σ^Encoder(x_t). The encoders g_µ^Encoder and g_σ^Encoder can be any nonlinear mappings and are usually modeled using neural network architectures. We use a two-layer perceptron with the ReLU activation function (Agarap, 2018) for a nonlinear mapping. In a linear CPIC, we specify the mean of the representation as µ_t = u^T x_t. In both the linear and nonlinear CPIC settings, if σ_t = 0, the stochastic encoder reduces to a deterministic encoder. We extend the single-input case to multiple inputs in the CPIC framework in terms of a specified window size T. The selection of the window size is discussed in Appendix A. Due to the stationarity assumption, the relations between past/future blocks of input data X(−T), X(T) ∈ R^{N×T} and encoded data Y(−T), Y(T) ∈ R^{Q×T} are identical in distribution, p_{X(−T),Y(−T)} = p_{X(T),Y(T)}. Note that −T and T index the past and future T data points. Without loss of generality, the compression relation can be expressed as Y(T) = g_µ^Encoder(X(T)) + ξ(T), where ξ(T) ∼ N(0, blockdiag(diag(σ_1²), . . . , diag(σ_T²))) and the noise standard deviation is σ_t = g_σ^Encoder(x_t).
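To make the encoder described above concrete, the following is a minimal PyTorch sketch of the stochastic encoder in equation 3. The module name, hidden size, and the softplus used to keep σ_t positive are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class StochasticEncoder(nn.Module):
    """Sketch of the CPIC stochastic encoder: y_t | x_t ~ N(mu_t, diag(sigma_t^2))."""

    def __init__(self, input_dim, latent_dim, hidden_dim=64, linear_mean=False):
        super().__init__()
        if linear_mean:
            # Linear CPIC: mu_t = u^T x_t
            self.mean_net = nn.Linear(input_dim, latent_dim, bias=False)
        else:
            # Nonlinear CPIC: two-layer perceptron with ReLU for the mean
            self.mean_net = nn.Sequential(
                nn.Linear(input_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, latent_dim),
            )
        # Linear variance encoder; softplus keeps sigma positive (an implementation choice here)
        self.sigma_net = nn.Linear(input_dim, latent_dim)

    def forward(self, x):
        mu = self.mean_net(x)
        sigma = torch.nn.functional.softplus(self.sigma_net(x)) + 1e-6
        # Reparameterization: y = mu + sigma * eps, eps ~ N(0, I)
        y = mu + sigma * torch.randn_like(mu)
        return y, mu, sigma

# Usage on a window of T = 4 time steps of 30-dimensional observations
enc = StochasticEncoder(input_dim=30, latent_dim=3)
x = torch.randn(8, 4, 30)          # (batch, T, N)
y, mu, sigma = enc(x)              # y has shape (batch, T, 3)
```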
4 VARIATIONAL BOUNDS OF COMPRESSED PREDICTIVE INFORMATION CODING In CPIC, since the data X are stationary, the mutual information between the input data and the compressed data for the past is equivalent to that for the future, I(X(−T); Y(−T)) = I(X(T); Y(T)). Therefore, the objective of CPIC can be rewritten as min L = βI(X(T); Y(T)) − I(Y(−T); Y(T)). (4) We developed variational upper bounds on mutual information for the compression complexity I(X(T); Y(T)) and lower bounds on mutual information for the predictive information I(Y(−T); Y(T)). 4.1 UPPER BOUNDS OF COMPRESSION COMPLEXITY In this section, we derived a tractable variational upper bound (VUB) depending on a single sample and a leave-one-out upper bound (L1Out) (Poole et al., 2019) depending on multiple samples. Theorem 1 By introducing a variational approximation r(y(T)) to the marginal distribution p(y(T)), a tractable variational upper bound of the mutual information I(X(T); Y(T)) is derived as I_VUB(X(T); Y(T)) = E_{X(T)}[KL(p(y(T)|x(T)), r(y(T)))]. Theorem 2 By utilizing a Monte Carlo approximation for the variational distribution r(y(T)), the L1Out upper bound of the mutual information I(X(T); Y(T)) is derived as I_L1Out(X(T); Y(T)) = E[ (1/S) Σ_{i=1}^S log ( p(y(T)_i | x(T)_i) / ( (1/(S−1)) Σ_{j≠i} p(y(T)_i | x(T)_j) ) ) ], where S is the sample size. The derivation details are in Appendices B and C. In practice, the L1Out bound depends on the sample size S and may suffer from numerical instability; thus, we would like to choose the sample size S as large as possible. In general scenarios where p(y(T)|x(T)) is intractable, Cheng et al. (2020) proposed variational versions of VUB and L1Out by using a neural network to approximate the conditional distribution p(y(T)|x(T)). Since the conditional distribution p(y(T)|x(T)) is parameterized as a known stochastic/deterministic encoder in CPIC, those variational versions are not taken into consideration. 4.2 LOWER BOUNDS OF PREDICTIVE INFORMATION For the predictive information (PI), we derived lower bounds of I(Y(−T); Y(T)) using results in Agakov (2004); Alemi et al. (2016); Poole et al. (2019). In particular, we derived tractable unnormalized Barber and Agakov (TUBA) (Barber & Agakov, 2003) lower bounds depending on a single sample and an infoNCE lower bound (Oord et al., 2018) depending on multiple samples. All derivation details are discussed in Appendices D, E and F. Theorem 3 We derived a lower bound on the predictive information I(Y(−T); Y(T)) as I_VLB(Y(−T); Y(T)) = H(Y(T)) + E_{p(y(−T),y(T))}[log q(y(T)|y(−T))], where q(y(T)|y(−T)) is a variational conditional distribution. However, this lower bound requires a tractable decoder for the conditional distribution q(y(T)|y(−T)) (Alemi et al., 2016). Alternatively, we derived a TUBA lower bound (Barber & Agakov, 2003) which is free of the parameterization of a decoder. Theorem 4 By introducing a differentiable critic function f(x, y) and a baseline function a(y(T)) defined in Appendix E, the TUBA lower bound of the predictive information is derived as I_TUBA(Y(−T), Y(T)) = E_{p(y(−T),y(T))}[f̃(y(−T), y(T))] − log ( E_{p(y(−T))p(y(T))}[e^{f̃(y(−T),y(T))}] ), where f̃(y(−T), y(T)) = f(y(−T), y(T)) − log(a(y(T))). Different forms of the baseline function lead to different neural estimators in the literature such as MINE (Belghazi et al., 2018) and NWJ (Nguyen et al., 2010). On the other hand, all TUBA-based estimators have high variance due to the high variance of f(x, y). Oord et al. (2018) proposed a low-variance MI estimator based on noise-contrastive estimation called InfoNCE. Moreover, there exist other differentiable mutual information estimators, including SMILE (Song & Ermon, 2019) and the Echo noise estimator (Brekelmans et al., 2019).
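As one concrete way to evaluate the single-sample-class bound of Theorem 4 on a minibatch, the sketch below follows the stated form of equation 14, using the diagonal of a critic score matrix as joint samples and the off-diagonal entries as samples from the product of marginals. The batch-based approximation and the optional baseline handling are illustrative implementation choices, not the paper's prescribed estimator.

```python
import torch

def tuba_lower_bound(scores, log_baseline=None):
    """Bound of Theorem 4: E_p[f~] - log E_{p x p}[exp(f~)], with f~ = f - log a(y).
    scores[i, j] = f(y_past_i, y_future_j) for a batch of S paired samples;
    diagonal entries are joint samples, off-diagonal entries act as marginal samples.
    log_baseline is an optional per-sample log a(y_future_j); None means a(y) = 1."""
    S = scores.shape[0]
    if log_baseline is not None:
        scores = scores - log_baseline.unsqueeze(0)   # subtract log a(y_j) from column j
    joint = scores.diagonal().mean()
    off_diag = scores[~torch.eye(S, dtype=torch.bool)]
    # log of the mean of exp(scores) over the marginal (off-diagonal) samples
    log_mean_exp = torch.logsumexp(off_diag, dim=0) - torch.log(torch.tensor(float(off_diag.numel())))
    return joint - log_mean_exp
```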
Theorem 5 In the CPIC setting, the InfoNCE lower bound of the predictive information is derived as I_infoNCE(Y(−T); Y(T)) = E[ (1/S) Σ_{i=1}^S log ( e^{f(y(−T)_i, y(T)_i)} / ( (1/S) Σ_{j=1}^S e^{f(y(−T)_i, y(T)_j)} ) ) ] (5) The expectation is over S independent samples from the joint distribution p(y(−T), y(T)), which follows the Markov chain in Figure 1: p(y(−T), y(T)) = ∫ p(x(−T), x(T)) p(y(−T)|x(−T)) p(y(T)|x(T)) dx(−T) dx(T). 4.3 VARIATIONAL BOUNDS OF CPIC We propose two classes of upper bounds of the CPIC objective based on whether the bounds depend on a single sample or multiple samples. According to the uni-sample and multi-sample bounds derived in Section 4.1 and Section 4.2, we name the first class uni-sample upper bounds; they take the VUB upper bound of mutual information for the complexity of data compression I(X(T); Y(T)) and the TUBA lower bound of predictive information in equation 14. Thus we have L_UNI = β KL(p(y(T)|x(T)), r(y(T))) − I_TUBA(Y(−T), Y(T)). (6) Notice that by choosing different baseline functions, the TUBA lower bound becomes equivalent to different mutual information estimators such as MINE and NWJ. The second class is named the multi-sample upper bound, which takes advantage of the noise-contrastive estimation approach. The multi-sample upper bound is expressed as L_MUL = β I_L1Out(X(T); Y(T)) − I_infoNCE(Y(−T); Y(T)). (7) Two main differences exist between these classes of upper bounds. First, the performance of the multi-sample upper bound depends on the batch size while that of the uni-sample upper bounds does not, so when computational budgets do not allow a large batch size in training, uni-sample upper bounds may be preferred. Secondly, the multi-sample upper bound has lower variance than the uni-sample upper bounds. Thus, they have different strengths and weaknesses depending on the context. We evaluated the performance of these variational bounds of CPIC in terms of the reconstruction performance in synthetic experiments in Appendix G, and found that with a sufficiently large batch size, the multi-sample upper bound outperforms most of the uni-sample upper bounds. Thus, without further specification, we choose the multi-sample upper bound as the variational bound of the CPIC objective in this work. Furthermore, we classify the upper bounds into stochastic and deterministic versions according to whether we employ a stochastic or deterministic encoder. Notice that when choosing the deterministic encoder, the compression complexity term (first term) in equation 6 and equation 7 is constant.
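To illustrate how the multi-sample objective in equation 7 can be computed on a minibatch, here is a minimal sketch assuming a conditional Gaussian encoder (so p(y(T)|x(T)) is available in closed form for the L1Out term) and a precomputed critic score matrix for the InfoNCE term. Function names and the β value are illustrative.

```python
import torch

def gaussian_log_prob(y, mu, sigma):
    """log N(y; mu, diag(sigma^2)), summed over the last (feature) dimension."""
    var = sigma ** 2
    return (-0.5 * ((y - mu) ** 2 / var + var.log()
                    + torch.log(torch.tensor(2 * torch.pi)))).sum(-1)

def l1out_upper_bound(y, mu, sigma):
    """Leave-one-out upper bound on I(X(T); Y(T)) for a conditional Gaussian encoder (Theorem 2)."""
    S = y.shape[0]
    logp = gaussian_log_prob(y.unsqueeze(1), mu.unsqueeze(0), sigma.unsqueeze(0))  # logp[i, j] = log p(y_i | x_j)
    diag = logp.diagonal()
    # log of the leave-one-out mixture (1 / (S - 1)) * sum_{j != i} p(y_i | x_j)
    loo = torch.logsumexp(logp.masked_fill(torch.eye(S, dtype=torch.bool), float('-inf')), dim=1) \
          - torch.log(torch.tensor(S - 1.0))
    return (diag - loo).mean()

def infonce_lower_bound(scores):
    """InfoNCE bound (Theorem 5); scores[i, j] = f(y_past_i, y_future_j)."""
    S = scores.shape[0]
    return (scores.diagonal() - torch.logsumexp(scores, dim=1)
            + torch.log(torch.tensor(float(S)))).mean()

def cpic_multisample_loss(y, mu, sigma, scores, beta=0.1):
    """L_MUL = beta * I_L1Out - I_infoNCE, as in equation 7; here y, mu, sigma are the
    flattened window representations and their encoder parameters for one batch."""
    return beta * l1out_upper_bound(y, mu, sigma) - infonce_lower_bound(scores)
```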
5 NUMERICAL EXPERIMENTS In this section, we demonstrate the superior performance of CPIC in both synthetic and real data experiments. We first examine the reconstruction performance of CPIC on noisy observations of a dynamical system (the Lorenz attractor). The results show that CPIC better recovers the latent trajectories from noisy high-dimensional observations. Moreover, we demonstrate that maximizing the predictive information (PI) in the compressed latent space is more effective than maximizing PI between the latent and observation spaces as in Creutzig & Sprekeler (2008); Creutzig et al. (2009), and also demonstrate the benefits of the stochastic representation over the deterministic representation. Secondly, we demonstrate better predictive performance of the representation evaluated by linear forecasting. The motivation for using linear forecasting models is that good representations contribute to disentangling complex data in a linearly accessible way (Clark et al., 2019). Specifically, we extract latent representations and then conduct forecasting tasks given the inferred representations on two neuroscience datasets and two other real datasets. The two neuroscience datasets are multi-neuronal recordings from the hippocampus (HC) while rats navigate a maze (Glaser et al., 2020) and multi-neuronal recordings from primary motor cortex (M1) during a reaching task for monkeys (O'Doherty et al., 2017). The two other real datasets are multi-city temperature data (TEMP) from 30 cities over several years (Gene, 2017) and 12 variables from an accelerometer, gyroscope, and gravity motion sensor (MS) recording human kinematics (Malekzadeh et al., 2018). The forecasting tasks for the neuroscience datasets are to predict the future of the relevant exogenous variables from the past neural data, while the forecasting task for the other datasets is to predict the future of those time series from their past. The results illustrate that CPIC has better predictive performance on these forecasting tasks compared with existing methods. 5.1 SYNTHETIC EXPERIMENT WITH NOISY LORENZ ATTRACTOR The Lorenz attractor is a 3D time series that is a realization of the Lorenz dynamical system (Pchelintsev, 2014). It describes a three-dimensional flow generated as: dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − γz. (8) Lorenz set the values σ = 10, ρ = 28 and γ = 8/3 to exhibit chaotic behavior, as done in recent works (She & Wu, 2020; Clark et al., 2019; Zhao & Park, 2017; Linderman et al., 2017). We simulated the trajectories from the Lorenz dynamical system and show them in the top-left panel of Figure 2. We then mapped the 3D latent signals to 30D lifted observations with a random linear embedding in the middle-left panel and added spatially anisotropic Gaussian noise to the 30D lifted observations in the bottom-left panel. The noise is generated according to different signal-to-noise ratios (SNRs), where SNR is defined by the ratio of the variance of the first principal components of the dynamics and the noise, as in Clark et al. (2019). Specifically, we utilized 10 different SNR levels spaced evenly on a log (base 10) scale between [-3, -1] and corrupted the 30D lifted observations with noise corresponding to the different SNR levels. Details of the simulation are available in Appendix G. Finally, we deployed different variants of CPIC to recover the true 3D dynamics from the corrupted 30D lifted observations at different SNR levels, and compared the accuracy of recovering the underlying Lorenz attractor time series. We aligned the inferred latent trajectory with the true 3D dynamics via an optimal linear mapping, because latent trajectories are only recovered up to a reparameterization. We validated the reconstruction performance based on the R2 regression score of the extracted vs. true trajectories. We first compared the reconstruction performance of the different variational bounds of CPIC with latent dimension Q = 3 and time window size T = 4, and found that the multi-sample upper bound outperforms the uni-sample upper bounds for almost all of the 10 SNR levels. Thus, we recommend the multi-sample upper bound for CPIC in practice and use it for further results. We also find that, compared to DCA (Clark et al., 2019) and CPC (Oord et al., 2018), CPIC is more robust to noise and thus better extracts the true latent trajectory from the noisy high-dimensional observations.
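The synthetic setup above can be reproduced in outline with the following sketch: integrate the Lorenz system, lift the 3D trajectory to 30D with a random semi-orthogonal embedding, and add Gaussian noise scaled to a target SNR. Unlike Appendix G, the noise here is isotropic rather than spatially structured, and all names and step sizes are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz_rhs(t, state, sigma=10.0, rho=28.0, gamma=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - gamma * z]

def simulate_lorenz(n_steps=10000, dt=0.01):
    t_eval = np.arange(n_steps) * dt
    sol = solve_ivp(lorenz_rhs, (0.0, t_eval[-1]), [1.0, 1.0, 1.0], t_eval=t_eval)
    return sol.y.T                                    # (n_steps, 3)

def lift_and_corrupt(latent, obs_dim=30, snr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Random semi-orthogonal embedding V in R^{obs_dim x 3}
    V, _ = np.linalg.qr(rng.standard_normal((obs_dim, latent.shape[1])))
    clean = latent @ V.T
    # Scale noise so that (top PC variance of dynamics) / (noise variance) = snr
    dyn_var = np.linalg.eigvalsh(np.cov(clean.T)).max()
    noise_var = dyn_var / snr
    noisy = clean + rng.standard_normal(clean.shape) * np.sqrt(noise_var)
    return noisy, clean

latent = simulate_lorenz()
noisy, clean = lift_and_corrupt(latent, snr=0.01)
```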
The detailed results are reported in Appendix H. In order to demonstrate the benefits of introducing stochasticity in the encoder and maximizing the predictive information in the latent space, we considered four variants of CPIC: with a stochastic or deterministic encoder, and with predictive information in the latent space or between the latent and observation spaces. All four variants use latent dimension Q = 3 and time window size T = 4. For each model and each SNR level, we ran 100 replicates with random initializations. We show the aligned latent trajectories inferred from the corrupted lifted observations for low, intermediate, and high SNR levels of noise (0.001, 0.01, 0.1) with the median R2 scores across the 100 replicates in Figure 2. The point-wise distances between the recovered dynamics and the ground-truth dynamics are encoded in the colors from blue to red, corresponding to short to long distances. For high SNR (SNR = 0.1, top right), all models did a good job of recovering the Lorenz dynamics, though the stochastic CPIC with predictive information in the latent space had a larger R2 than the others. For intermediate SNR (SNR = 0.008, middle right), we see that the stochastic CPICs perform much better than the deterministic CPICs. Finally, as the SNR gets lower (SNR = 0.001, bottom right), all methods perform poorly, but we note that, numerically, considering predictive information in the latent space is much better than between the latent and observation spaces. To more thoroughly characterize the benefits of stochastic encoding and PI in the latent space, we examined the mean R2 scores for the four variants at each SNR level across N = 10 and N = 100 replicates in the top row of Figure 3. It shows that CPIC with stochastic representations and PI in the latent space robustly outperforms the other variants on average. We also report the best R2 scores for the four variants, in the sense that we report the R2 score of the model with the smallest training loss across the N runs. The bottom row of Figure 3 shows that CPIC with stochastic representations and PI in the latent space achieves better reconstruction and robustness to noise than the other variants, especially when the number of runs N is small. Even when N is large, stochastic CPIC with PI in the latent space greatly outperforms the others when the noise level is high. We note that in the case of high-dimensional noisy observations with large numbers of samples, common in many modern real-world time series datasets, CPIC's robustness to noise and capacity to achieve good results in a small number of runs is a clear advantage. Moreover, we display the quantile analysis of the R2 scores in Appendix I, with consistent results.
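The alignment-based evaluation used throughout Section 5.1 can be sketched as follows: fit an optimal linear map from the inferred latents to the ground-truth trajectory (since latents are only identified up to a linear reparameterization) and report the R2 score. The function name and scoring options are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def alignment_r2(latent_inferred, latent_true):
    """Align inferred latents to the ground truth with an optimal linear map, then report R^2."""
    reg = LinearRegression().fit(latent_inferred, latent_true)
    aligned = reg.predict(latent_inferred)
    return r2_score(latent_true, aligned, multioutput='variance_weighted'), aligned

# score, aligned = alignment_r2(inferred_latents, true_lorenz_trajectory)
```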
5.2 REAL EXPERIMENTS WITH DIVERSE FORECASTING TASKS In this section, we show that latent representations extracted by stochastic CPIC perform better in downstream forecasting tasks on four real datasets. We compared stochastic CPIC with contrastive predictive coding (CPC) (Oord et al., 2018), PCA, SFA (Wiskott & Sejnowski, 2002), DCA (Clark et al., 2019) and deterministic CPIC. As for CPC, we use a linear encoder for a fair comparison. In addition, we compared the results from CPCs and CPICs with nonlinear encoders, in which the linear mean encoder is replaced by a multi-layer perceptron. For each model, we extract the latent representations (conditional mean) and conduct prediction tasks on the relevant exogenous variable at a future time step for the neural datasets. For example, for the M1 dataset, we extract a consecutive 3-length window representation of multi-neuronal spiking activity to predict the monkey's arm position at a future time step that is lag time stamps away. The details of the experiments are available in Appendix J. Neuroscientists often want to interpret latent representations of data to gain insight into the processes that generate the observed data. Thus, we used linear regression1 to predict exogenous variables, with the intuition that a simple (i.e., linear) prediction model will only be sensitive to the structure in the data that is easiest to interpret, as in (Yu et al., 2008; Pandarinath et al., 2018; Clark et al., 2019). Furthermore, the neuroscience datasets (M1 and HC) present extremely challenging settings for prediction of the exogenous variables due to severe experimental undersampling of neurons caused by technical limitations, as well as sizeable noise magnitudes. For these tasks, the R2 regression score is used as the evaluation metric to measure the forecasting performance. The four datasets are split into training and test data with a 4:1 ratio, and the forecasting task considered three different lag values (5, 10, and 15). For DCA and deterministic/stochastic CPICs, we took three different window sizes T = 1, 2, 3 and report the best R2 scores. Table 1 reports all R2 scores and demonstrates that our stochastic CPIC outperforms all other models except for the Temp data with forecasting at lag 15. 6 CONCLUDING REMARKS We developed a novel information-theoretic framework, Compressed Predictive Information Coding, to extract representations from sequential data. CPIC balances the maximization of the predictive information in the latent space with the minimization of the compression complexity of the latent representation. We leveraged stochastic representations by employing a stochastic encoder and developed variational bounds of the CPIC objective function. We demonstrated that CPIC extracts more accurate low-dimensional latent dynamics and more useful representations with better forecasting performance in diverse downstream tasks on four real-world datasets. Together, these results indicate that CPIC will yield similar improvements in other real-world scenarios. Moreover, we note that on most real datasets, nonlinear CPIC leads to better representations in terms of prediction performance than linear CPIC. 1https://scikit-learn.org/stable/modules/linear_model.html A SELECTION OF WINDOW SIZE Selecting the optimal window size T is important for the downstream use of the dynamics. A poor selection of T may cause aliasing artifacts. In general, we need to select it by cross-validation. Furthermore, we can make plots of the predictive information as a function of both the window size T and the embedding dimension Q as diagnostic tools. B DERIVATION OF I_VUB Directly estimating the compression complexity is intractable, because I(X(T); Y(T)) := E_{X(T)}[KL(p(y(T)|x(T)), p(y(T)))], in which the population distribution p(y(T)) is unknown. Thus we introduce a variational approximation to the marginal distribution of encoded inputs p(y(T)), denoted as r(y(T)). Due to the non-negativity of the Kullback-Leibler (KL) divergence, the variational upper bound (VUB) is derived as I(X(T); Y(T)) = E_{X(T)}[KL(p(y(T)|x(T)), r(y(T)))] − KL(p(y(T)), r(y(T))) ≤ E_{X(T)}[KL(p(y(T)|x(T)), r(y(T)))] = I_VUB(X(T); Y(T)). (9)
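When r(y(T)) is fixed to a standard normal and the encoder is the conditional Gaussian of equation 3, the per-sample KL term inside I_VUB has a closed form, as in VAEs. The fixed choice of r is an illustrative assumption; the surrounding text also discusses learnable r.

```python
import torch

def vub_compression_term(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), averaged over the batch.
    Instantiates the I_VUB bound with r(y) fixed to a standard normal."""
    var = sigma ** 2
    kl_per_dim = 0.5 * (var + mu ** 2 - 1.0 - var.log())
    return kl_per_dim.sum(dim=-1).mean()
```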
C DERIVATION OF I_L1Out In general, learning r(y(T)) is recognized as a density estimation problem (Silverman, 2018), which is challenging. In this setting, the variational distribution r(y(T)) is assumed to be learnable, and thus estimating the variational upper bound is tractable. In particular, Alemi et al. (2016) fixed r(y(T)) as a standard normal distribution, leading to high bias in MI estimation. Recently, Poole et al. (2019) utilized a Monte Carlo approximation for the variational distribution. In our case, with S sample pairs (x(T)_i, y(T)_i), i = 1, . . . , S, we have r_i(y(T)) = (1/(S−1)) Σ_{j≠i} p(y(T)|x(T)_j) ≈ p(y(T)), and the L1Out bound is derived as below: I_L1Out(X(T); Y(T)) = E[ (1/S) Σ_{i=1}^S log ( p(y(T)_i | x(T)_i) / ( (1/(S−1)) Σ_{j≠i} p(y(T)_i | x(T)_j) ) ) ]. (10) D DERIVATION OF I_VLB Similar to Agakov (2004), we replace the intractable conditional distribution p(y(T)|y(−T)) with a tractable optimization problem over a variational conditional distribution q(y(T)|y(−T)). It yields a lower bound on the PI due to the non-negativity of the KL divergence: I(Y(−T); Y(T)) ≥ H(Y(T)) + E_{p(y(−T),y(T))}[log q(y(T)|y(−T))] (11) where H(Y) is the differential entropy of the variable Y. This bound is tight if and only if q(y(T)|y(−T)) = p(y(T)|y(−T)), in which case the second term in equation 11 equals the negative conditional entropy −H(Y(T)|Y(−T)). However, this variational lower bound requires a tractable decoder for the conditional distribution q(y(T)|y(−T)). Alternatively, one can consider an energy-based variational family for the conditional distribution, as done in Appendix E. The conditional expectation in equation 11 can be estimated using Monte Carlo sampling based on the encoded data distribution p(y(−T), y(T)), where encoded data are sampled by introducing the augmented data x(−T) and x(T) and marginalizing them out as p(y(−T), y(T)) = ∫ p(x(−T), x(T)) p(y(−T)|x(−T)) p(y(T)|x(T)) dx(−T) dx(T) (12) according to the Markov chain proposed in Figure 1. E DERIVATION OF I_TUBA Following Poole et al. (2019), by considering an energy-based variational family to express the conditional distribution q(y(T)|y(−T)): q(y(T)|y(−T)) = p(y(T)) e^{f(y(T),y(−T))} / Z(y(−T)) (13) where f(x, y) is a differentiable critic function and Z(y(−T)) = E_{p(y(T))}[e^{f(y(T),y(−T))}] is a partition function, and by introducing a baseline function a(y(T)), we derived a tractable TUBA lower bound (Barber & Agakov, 2003) of the predictive information as: I(Y(−T), Y(T)) ≥ E_{p(y(−T),y(T))}[f̃(y(−T), y(T))] − log ( E_{p(y(−T))p(y(T))}[e^{f̃(y(−T),y(T))}] ) = I_TUBA(Y(−T), Y(T)) (14) where f̃(y(−T), y(T)) = f(y(−T), y(T)) − log(a(y(T))) is treated as an updated critic function. Notice that different choices of the baseline function lead to different mutual information estimators. When a(y(T)) = 1, it leads to the mutual information neural estimator (MINE) (Belghazi et al., 2018); when a(y(T)) = Z(y(T)), it leads to the lower bound proposed in Donsker & Varadhan (1975) (DV); and when a(y(T)) = e, it recovers the lower bound in Nguyen et al. (2010) (NWJ), also known as f-GAN (Nowozin et al., 2016) and MINE-f (Belghazi et al., 2018). In general, the critic function f(x, y) and the baseline function a(y) are usually parameterized by neural networks (Oord et al., 2018; Belghazi et al., 2018): Oord et al. (2018) used a separable critic function f(x, y) = h_θ(x)^T g_θ(y), while Belghazi et al. (2018) used a joint critic function f(x, y) = f_θ(x, y), and Poole et al. (2019) claimed that joint critic functions generally perform better than separable critic functions but scale poorly with batch size.
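A separable critic of the kind just described can be sketched as follows; the module name, hidden sizes, and embedding dimension are illustrative assumptions. Its advantage is that one matrix product yields the full batch score matrix used by the bound estimators.

```python
import torch
import torch.nn as nn

class SeparableCritic(nn.Module):
    """Separable critic f(x, y) = h(x)^T g(y); the S x S score matrix for a batch
    needs only one matrix product, unlike a joint critic on concatenated pairs."""

    def __init__(self, dim, embed_dim=32, hidden=64):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, embed_dim))
        self.g = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, embed_dim))

    def forward(self, y_past, y_future):
        return self.h(y_past) @ self.g(y_future).t()   # scores[i, j] = f(y_past_i, y_future_j)

# Example: Q = 3 latents over a window of T = 4, flattened to 12-dimensional inputs
critic = SeparableCritic(dim=12)
scores = critic(torch.randn(64, 12), torch.randn(64, 12))
```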
F DERIVATION OF I_infoNCE The derivation of InfoNCE in our CPIC setting is straightforward: we treat Y(−T) and Y(T) as the input and output in the InfoNCE formula from the CPC setting (Oord et al., 2018). G DETAILS OF SIMULATION In this section, we first generated the 3D latent signals according to the Lorenz dynamical system in equation 8, denoted as X ∈ R^{3×T}. We calculated the largest eigenvalue of the covariance matrix of X as the dynamics variance, denoted as σ²_dynamics, and the noise variance is σ²_noise = σ²_dynamics/SNR, where SNR is the signal-to-noise ratio. We then randomly generated a semi-orthogonal matrix V ∈ R^{30×3} and generated the true 30D signal V X embedded with additive spatially structured white noise, where the noise subspace V_noise is generated with median principal angles with respect to the dynamics subspace V. The noise covariance Σ_noise is generated with largest eigenvalue σ²_noise, and we then generate the noisy signal at the nth dimension by [Y_noisy]_n ∼ N(v_n^T X, Σ_noise), n = 1, . . . , 30. H MODEL COMPARISON IN TERMS OF R2 REGRESSION SCORE IN THE NOISY LORENZ ATTRACTOR EXPERIMENT In this section, the R2 regression scores for CPC, DCA, and deterministic & stochastic CPICs (three uni-sample upper bounds, namely NWJ, MINE, and TUBA, and one multi-sample upper bound) for all ten SNRs are reported in Table 2. It shows that stochastic CPIC with the multi-sample upper bound outperforms the other approaches in the majority of SNRs. It also shows that CPIC is the most robust to noisy data and thus recovers the best latent trajectories from noisy observations compared with CPC and DCA. We also show the aligned latent trajectories inferred from the corrupted lifted observations for low, intermediate, and high SNR levels of noise (0.001, 0.01, 0.1) with the median R2 scores across 100 replicates for PCA and DCA (as the extension of Figure 2) in Figure 4. The point-wise distances between the recovered dynamics and the ground-truth dynamics are encoded in the colors from blue to red, corresponding to short to long distances. The figure shows that stochastic CPIC outperforms both PCA and DCA. I COMPARISON ON R2 SCORES OF LATENT DYNAMICS REGRESSION FOR NOISY LORENZ ATTRACTOR IN TERMS OF QUANTILE ANALYSIS We display the median performance (with the interquartile range as error bars) of the R2 scores of latent dynamics regression for the noisy Lorenz attractor in Figure 5. J DETAILS OF REAL-WORLD EXPERIMENTS The four real datasets are the monkey motor cortical dataset (M1), the rat hippocampal dataset (HC), the temperature dataset (Temp), and the accelerometer dataset (MS). J.1 MONKEY MOTOR CORTICAL DATASET O'Doherty et al. (2017) released multi-electrode spiking data for both M1 and S1 for two monkeys during a continuous grid-based reaching task. We used M1 data from the subject "Indy" (specifically, we used the file "indy_20160627_01.mat"). We discarded single units with fewer than 5,000 spikes, leaving 109 units. We binned the spikes into non-overlapping bins, square-root transformed the data, and mean-centered the data using a sliding window 30 s in width. J.2 RAT HIPPOCAMPAL DATA Glaser et al. (2020) released the original data. The data consist of 93 minutes of extracellular recordings from layer CA1 of the dorsal hippocampus while a rat chased rewards on a square platform. We discarded single units with fewer than 10 spikes, leaving 55 units.
We binned the spikes into non-overlapping 50 ms bins, then square-root transformed the data. J.3 TEMPERATURE DATASET The temperature dataset consists of hourly temperature data for 30 U.S. cities over a period of 7 years from OpenWeatherMap.org. We downsampled the data by a factor of 24 to obtain daily temperatures. J.4 ACCELEROMETER DATASET Malekzadeh et al. (2018) released accelerometer data which record roll, pitch, yaw, gravity x, y, z, rotation x, y, z, and acceleration x, y, z, for a total of 12 kinematic variables. The sampling rate is 50 Hz. We used the file "sub_19.csv" from "A_DeviceMotion_data.zip". J.5 FORECASTING TASK The forecasting task is the same as in Clark et al. (2019). We use the extracted consecutive 3-length window representation of the endogenous data to forecast the future values of the relevant exogenous variables at lag n. In M1 and HC, the endogenous variables are the processed spiking data, and the exogenous variables are the location data. In Temp and MS, we take the endogenous and exogenous variables to be the same: the 30 U.S. cities' temperatures for the Temp data and the 12 kinematic variables for the MS data.
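The forecasting protocol of Appendix J.5 can be sketched as follows: build a 3-step window of latent representations at each time, regress the exogenous target lag steps ahead with a linear model, and score with R2 on a held-out split. Variable names and the split fraction are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def lagged_forecast_r2(latents, targets, window=3, lag=5, train_frac=0.8):
    """Fit a linear model from a 3-step window of latent representations to the
    exogenous target `lag` steps ahead, and report the test R^2."""
    X, Y = [], []
    for t in range(window - 1, len(latents) - lag):
        X.append(latents[t - window + 1 : t + 1].ravel())   # concatenated window of latents
        Y.append(targets[t + lag])
    X, Y = np.asarray(X), np.asarray(Y)
    n_train = int(train_frac * len(X))
    reg = LinearRegression().fit(X[:n_train], Y[:n_train])
    return r2_score(Y[n_train:], reg.predict(X[n_train:]), multioutput='variance_weighted')
```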
1. What is the focus and contribution of the paper regarding low-dimensional structure extraction for dynamic data? 2. What are the strengths of the proposed approach, particularly in minimizing compression complexity and maximizing predictive information? 3. What are the weaknesses of the paper, especially regarding its similarity to other frameworks and assumptions made? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper aims at extracting low-dimensional structure from dynamic data, especially time series data. The proposed method, Compressed Predictive Information Coding (CPIC), both minimizes compression complexity and maximizes the predictive information in the latent space. This work extends the prior works dynamical components analysis and deep autoencoding predictive components. This work includes the analysis for both linear and non-linear encoding. The core idea is to estimate the latent predictive information without heavy matrix computation, but with cheap approximations. The resulting model achieves good performance on the synthetic Lorenz attractor, two neuroscience datasets and other real-world scenarios. Strengths And Weaknesses Pros: The variational bound alleviates the cost of predictive information. It can potentially be used in more scenarios as a plug-in module. Operating in the latent space makes more sense than the input space and employs the power of non-linear layers. Various experiments and real-world datasets are shown. Cons: It seems that DAPC has a very similar framework, also with deterministic and probabilistic encoders (also a VAE?). Could you further elaborate on the differences? Could you further justify the stationarity assumption? Is it commonly seen in time series analysis? When you demonstrate the qualitative results (e.g. synthetic data), maybe you can also show some results from other methods. You can also show some time cost numbers to demonstrate the advantage of your method speed-wise. Clarity, Quality, Novelty And Reproducibility In general, this method extends the prior works and proposes a variational bound to reduce the computation cost. The paper has both theoretical and empirical results. Most related methods are included for comparison. The code is not provided. The method is not straightforward to re-implement.
ICLR
Title Training Structured Neural Networks Through Manifold Identification and Variance Reduction Abstract This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term for promoting desired structures. RMDA does not incur computation additional to proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we prove that after a finite number of iterations, all iterates of RMDA possess a desired structure identical to that induced by the regularizer at the stationary point of asymptotic convergence, even in the presence of engineering tricks like data augmentation that complicate the training process. Experiments on training NNs with structured sparsity confirm that variance reduction is necessary for such an identification, and show that RMDA thus significantly outperforms existing methods for this task. For unstructured sparsity, RMDA also outperforms a state-of-the-art pruning method, validating the benefits of training structured NNs through regularization. Implementation of RMDA is available at https://www.github.com/zihsyuan1214/rmda. 1 Introduction Training neural networks (NNs) with regularization to obtain a certain desired structure such as structured sparsity or discrete-valued parameters is a problem of increasing interest. Existing approaches either use stochastic subgradients of the regularized objective (Wen et al., 2016; 2018) or combine popular stochastic gradient algorithms for NNs, like SGD with momentum (MSGD) or Adam (Kingma & Ba, 2015), with the proximal operator associated with the regularizer to conduct proximal stochastic gradient updates to obtain a model with preferred structures (Bai et al., 2019; Yang et al., 2019; Yun et al., 2021; Deleu & Bengio, 2021). Such methods come with proven convergence for certain measures of first-order optimality and have shown some empirical success in applications. However, we notice that an essential theoretical support lacking in existing methods is the guarantee for the output iterate to possess the same structure as that at the point of convergence. More specifically, often the imposed regularization is only known to induce a desired structure exactly at optimal or stationary points of the underlying optimization problem (see for example, Zhao & Yu, 2006), but training algorithms are only able to generate iterates asymptotically converging to a stationary point. Without further theoretical guarantees, it is unknown whether the output iterate, which is just an approximation of the stationary point, still has the same structure. For example, let us assume that sparsity is desired, the point of convergence is x∗ = (1, 0, 0), and two algorithms respectively produce iterates {yt = (1, t−1, t−1)} and {zt = (1 + t−1, 0, 0)}. Clearly, both iterate sequences converge to x∗, but only zt has the same desired structure as its limit point x∗, while yt is not useful for sparsity despite that the point of convergence is. This work aims at filling this gap to propose an algorithm for training structured NNs that can provably make all its iterates after a finite number of iterations possess the desired structure of the stationary point to which the iterates converge. 
We term the structure at a stationary point a stationary structure, and it should be understood that for multiple stationary points, each might correspond to a different stationary structure, and we aim at identifying the one at the limit point of the iterates of an algorithm, instead of selecting the optimal one among all stationary structures. Although finding the structure at an inferior stationary point might seem not very meaningful, another reason for studying this identification property is that for the same point of convergence, the structure at the limit point is the most preferable one. Consider the same example above, we note that for any sequence {xt} converging to x∗, xt1 6= 0 for all t large enough, for otherwise xt does not converge to x∗. Therefore, xt cannot be sparser than x∗ if xt → x∗.1 Identifying the structure of the point of convergence thus also amounts to finding the locally most ideal structure under the same convergence premise. It is well-known in the literature of nonlinear optimization that generating iterates consistently possessing the structure at the stationary point of convergence is possible if all points with the same structure near the stationary point can be presented locally as a manifold along which the regularizer is smooth. This manifold is often termed as the active manifold (relative to the given stationary point), and the task of generating iterates staying in the active manifold relative to the point of convergence after finite iterations is called manifold identification (Lewis, 2002; Hare & Lewis, 2004; Lewis & Zhang, 2013). To identify the active manifold of a stationary point, we need the regularizer to be partly smooth (Lewis, 2002; Hare & Lewis, 2004) at that point, roughly meaning that the regularizer is smooth along the active manifold around the point, while the change in its value is drastic along directions leaving the manifold. A more technical definition will be given in Section 3. Fortunately, most regularizers used in machine learning are partly smooth, so stationary structure identification is possible, and various deterministic algorithms are known to achieve so (Hare & Lewis, 2007; Hare, 2011; Wright, 2012; Liang et al., 2017a;b; Li et al., 2020; Lee, 2020; Bareilles et al., 2020). On the other hand, for stochastic gradient-related methods to identify a stationary structure, existing theory suggests that the variance of the gradient estimation needs to vanish as the iterates approach a stationary point (Poon et al., 2018), and indeed, it is observed empirically that proximal stochastic gradient descent (SGD) is incapable of manifold identification due to the presence of the variance in the gradient estimation (Lee & Wright, 2012; Sun et al., 2019).2 Poon et al. (2018) showed that variance-reduction methods such as SVRG (Johnson & Zhang, 2013; Xiao & Zhang, 2014) and SAGA (Defazio et al., 2014) that utilize the finite-sum structure of empirical risk minimization to drive the variance of their gradient estimators to zero are suitable for this task. 
Unfortunately, with the standard practice of data augmentation in deep learning, training of deep learning models with a regularizer should be treated as the following stochastic optimization problem that minimizes the expected loss over a distribution, instead of the commonly seen finite-sum form: min_{W∈E} F(W) := E_{ξ∼D}[f_ξ(W)] + ψ(W), (1) where E is a Euclidean space with inner product 〈·, ·〉 and the associated norm ‖·‖, D is a distribution over a space Ω, f_ξ is differentiable almost everywhere for all ξ ∈ Ω, and ψ(W) is a regularizer that might be nondifferentiable. We will also use the notation f(W) := E_{ξ∼D}[f_ξ(W)]. Without a finite-sum structure in (1), Defazio & Bottou (2019) pointed out that classical variance-reduction methods are ineffective for deep learning, and one major reason is that periodically evaluating ∇f(W) (or at least using a large batch from D to get a precise approximation of it), as required by variance-reduction methods, is intractable; hence manifold identification, and therefore finding the stationary structure, becomes an extremely tough task for deep learning. Although there have recently been efforts in developing variance-reduction methods for (1) inspired by online problems (Wang et al., 2019; Nguyen et al., 2021; Pham et al., 2020; Cutkosky & Orabona, 2019; Tran-Dinh et al., 2019), these methods all have multiple hyperparameters to tune and incur computational cost at least twice or thrice that of (proximal) SGD. 1See a more detailed discussion in Appendix B.1. 2An exception is the interpolation case, in which the variance of plain SGD vanishes asymptotically. But data augmentation often fails this interpolation condition. As the training of deep learning models is time- and resource-consuming, these drawbacks make such methods less ideal for deep learning. To tackle these difficulties, we extend the recently proposed modernized dual averaging framework (Jelassi & Defazio, 2020) to the regularized setting by incorporating proximal operations, and obtain a new algorithm RMDA (Regularized Modernized Dual Averaging) for (1). The proposed algorithm provably achieves variance reduction beyond finite-sum problems without any cost or hard-to-tune hyperparameters additional to those of proximal momentum SGD (proxMSGD), and we provide theoretical guarantees for its convergence and ability for manifold identification. The key difference between RMDA and the original regularized dual averaging (RDA) of Xiao (2010) is that RMDA incorporates momentum and can achieve better performance for deep learning in terms of the generalization ability, and the new algorithm requires nontrivial proofs for its guarantees. We further conduct experiments on training deep learning models with a regularizer for structured sparsity to demonstrate the ability of RMDA to identify the stationary structure without sacrificing the prediction accuracy. When the desired structure is (unstructured) sparsity, a popular approach is pruning, which trims a given dense model to a specified sparsity level, and works like (Gale et al., 2019; Blalock et al., 2020; Evci et al., 2020; Verma & Pesquet, 2021) have shown promising results. However, as a post-processing approach, pruning is essentially different from the structured training considered in this work, because pruning is mainly used when a model is already available, while structured training combines training and structure inducing in one procedure to potentially reduce the computational cost and memory footprint when resources are scarce.
We will also show in our experiments that RMDA can achieve better performance than a state-of-the-art pruning method, suggesting that structured training indeed has its merits for obtaining sparse NNs. The main contributions of this work are summarized as follows. • Principled analysis: We use the theory of manifold identification from nonlinear optimization to provide a unified way towards better understanding of algorithms for training structured neural networks. • Variance reduction beyond finite-sum with low cost: RMDA achieves variance reduction for problems that consist of an infinite-sum term plus a regularizer (see Lemma 2) while incorporating momentum to improve the generalization performance. Its spatial and computational cost is almost the same as that of proxMSGD, and there are no additional hyperparameters to tune, making RMDA suitable for large-scale deep learning. • Structure identification: With the help of variance reduction, our theory shows that under suitable conditions, after a finite number of iterations, iterates of RMDA stay in the active manifold of their limit point. • Superior empirical performance: Experiments on neural networks with structured sparsity exemplify that RMDA can identify a stationary structure without reducing the validation accuracy, thus outperforming existing methods by achieving higher group sparsity. Another experiment on unstructured sparsity also shows RMDA outperforms a state-of-the-art pruning method. After this work was finished, we found a very recent paper, Kungurtsev & Shikhman (2021), that proposed the same algorithm (with slight differences in the parameter setting in Line 5 of Algorithm 1) and analyzed the expected convergence for (1) under a specific scheduling of c_t = s_{t+1} α_{t+1}^{-1} when both terms are convex. In contrast, our work focuses on nonconvex deep learning problems, and especially on the manifold identification aspect. 2 Algorithm Details of the proposed RMDA are in Algorithm 1. At the t-th iteration with the iterate W^{t−1}, we draw an independent sample ξ_t ∼ D to compute the stochastic gradient ∇f_{ξ_t}(W^{t−1}), decide a learning rate η_t, and update the weighted sum V_t of previous stochastic gradients using η_t and the scaling factor β_t := √t: V_0 := 0, V_t := Σ_{k=1}^t η_k β_k ∇f_{ξ_k}(W^{k−1}) = V_{t−1} + η_t β_t ∇f_{ξ_t}(W^{t−1}), for all t > 0. Algorithm 1: RMDA(W^0, T, η(·), c(·)) input: Initial point W^0, learning rate schedule η(·), momentum schedule c(·), number of epochs T 1: V^0 ← 0, α_0 ← 0 2: for t = 1, . . . , T do 3: β_t ← √t, s_t ← η(t)β_t, α_t ← α_{t−1} + s_t 4: Sample ξ_t ∼ D and compute V^t ← V^{t−1} + s_t ∇f_{ξ_t}(W^{t−1}) 5: W̃^t ← arg min_W 〈V^t, W〉 + (β_t/2)‖W − W^0‖² + α_t ψ(W) // (2) 6: W^t ← (1 − c(t))W^{t−1} + c(t)W̃^t output: The final model W^T The tentative iterate W̃^t is then obtained by the proximal operation associated with ψ: W̃^t = prox_{α_t β_t^{-1} ψ}(W^0 − β_t^{-1} V^t), α_t := Σ_{k=1}^t β_k η_k, (2) where for any function g, prox_g(x) := arg min_y ‖y − x‖²/2 + g(y) is its proximal operator. The iterate is then updated along the direction W̃^t − W^{t−1} with a factor of c_t ∈ [0, 1]: W^t = (1 − c_t)W^{t−1} + c_t W̃^t = W^{t−1} + c_t(W̃^t − W^{t−1}). (3) When ψ ≡ 0, RMDA reduces to the modernized dual averaging algorithm of Jelassi & Defazio (2020), in which case it has been shown that mixing W^{t−1} and W̃^t in (3) is equivalent to introducing momentum (Jelassi & Defazio, 2020; Tao et al., 2018). We found that this introduction of momentum greatly improves the performance of RMDA and is therefore essential for applying it to deep learning problems.
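A minimal PyTorch sketch of Algorithm 1, specialized to ψ(W) = λ‖W‖₁ so that the proximal step in Line 5 has the closed form of soft-thresholding, is given below. The class name, schedule arguments, and the choice of ψ are illustrative; this is not the released implementation.

```python
import torch

class RMDA(torch.optim.Optimizer):
    """Sketch of Algorithm 1 with psi(W) = lmbda * ||W||_1, so Line 5 reduces to
    soft-thresholding. eta and momentum_c are schedules mapping t to a float."""

    def __init__(self, params, eta, momentum_c, lmbda=1e-4):
        defaults = dict(eta=eta, momentum_c=momentum_c, lmbda=lmbda, alpha=0.0, t=0)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            group['t'] += 1
            t = group['t']
            beta = t ** 0.5                                # beta_t = sqrt(t)
            s = group['eta'](t) * beta                     # s_t = eta(t) * beta_t
            group['alpha'] += s                            # alpha_t = alpha_{t-1} + s_t
            c = group['momentum_c'](t)
            thr = group['alpha'] * group['lmbda'] / beta   # threshold = lambda * alpha_t / beta_t
            for p in group['params']:
                if p.grad is None:
                    continue
                state = self.state[p]
                if len(state) == 0:
                    state['V'] = torch.zeros_like(p)
                    state['W0'] = p.detach().clone()
                state['V'].add_(p.grad, alpha=s)           # Line 4: V^t = V^{t-1} + s_t * grad
                z = state['W0'] - state['V'] / beta        # prox argument W^0 - V^t / beta_t
                w_tilde = torch.sign(z) * torch.clamp(z.abs() - thr, min=0.0)   # Line 5
                p.mul_(1 - c).add_(w_tilde, alpha=c)       # Line 6: momentum mixing

# Usage sketch: constant learning rate, momentum parameter c(t) ramping to 1
# opt = RMDA(model.parameters(), eta=lambda t: 0.1,
#            momentum_c=lambda t: min(1.0, t / 1000), lmbda=1e-4)
```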
3 Analysis We provide theoretical analysis of the proposed RMDA in this section. Our analysis shows variance reduction in RMDA and stationarity of the limit point of its iterates, but all of them revolves around our main purpose of identification of a stationary structure within a finite number of iterations. The key tools for this end are partial smoothness and manifold identification (Hare & Lewis, 2004; Lewis, 2002). Our result is the currently missing cornerstone for those proximal algorithms applied to deep learning problems for identifying desired structures. In fact, it is actually well-known in convex optimization that those algorithms based on plain proximal stochastic gradient without variance reduction are unable to identify the active manifold, and the structure of the iterates oscillates due to the variance in the gradient estimation; see, for example, experiments and discussions in Lee & Wright (2012); Sun et al. (2019). Our work is therefore the first one to provide justification for solving the regularized optimization problem in deep learning to really identify a desired structure induced by the regularizer. Throughout, ∇fξ denotes the gradient of fξ, ∂ψ is the (regular) subdifferential of ψ, and relint(C) means the relative interior of the set C. We start from introducing the notion of partial smoothness. Definition 1. A function ψ is partly smooth at a point W ∗ relative to a set MW∗ 3W ∗ if 1. Around W ∗, MW∗ is a C2-manifold and ψ|MW∗ is C2. 2. ψ is regular (finite with the Fréchet subdifferential coincides with the limiting Fréchet subdifferential) at all points W ∈MW∗ around W ∗ with ∂ψ(W ) 6= ∅. 3. The affine span of ∂ψ(W ∗) is a translate of the normal space to MW∗ at W ∗. 4. ∂ψ is continuous at W ∗ relative to MW∗ . We often call MW∗ the active manifold at W ∗. Another concept required for manifold identification is prox-regularity (Poliquin & Rockafellar, 1996). Definition 2. A function ψ is prox-regular at W ∗ for V ∗ ∈ ∂ψ(W ∗) if ψ is finite at W ∗, locally lower semi-continuous around W ∗, and there is ρ > 0 such that ψ(W1) ≥ ψ(W2) + 〈V, W1−W2〉− ρ2‖W1 −W2‖ 2 whenever W1,W2 are close to W ∗ with ψ(W2) near ψ(W ∗) and V ∈ ∂ψ(W2) near V ∗. ψ is prox-regular at W ∗ if it is so for all V ∈ ∂ψ(W ∗). To broaden the applicable range, a function ψ prox-regular at some W ∗ is often also assumed to be subdifferentially continuous (Poliquin & Rockafellar, 1996) there, meaning that if W t → W ∗, ψ(W t) → ψ(W ∗) holds when there are V ∗ ∈ ∂ψ(W ∗) and a sequence {V t} such that V t ∈ ∂ψ(W t) and V t → V ∗. Notably, all convex and weakly-convex (Nurminskii, 1973) functions are regular, prox-regular, and subdifferentially continuous in their domain. 3.1 Theoretical Results When the problem is convex, convergence guarantees for Algorithm 1 under two specific specific schemes are known. First, when ct ≡ 1, RMDA reduces to the classical RDA, and convergence to a global optimum (of W t = W̃ t in this case) on convex problems has been proven by Lee & Wright (2012); Duchi & Ruan (2021), with convergence rates of the expected objective or the regret given by Xiao (2010); Lee & Wright (2012). Second, when ct = st+1α−1t+1 and (βt, αt) in Line 5 of Algorithm 1 are replaced by (βt+1, αt+1), convergence is recently analyzed by Kungurtsev & Shikhman (2021). In our analysis below, we do not assume convexity of either term. We show that if {W̃ t} converges to a point W ∗ (which could be a non-stationary one), {W t} also converges to W ∗. Lemma 1. 
Consider Algorithm 1 with {c_t} satisfying Σ c_t = ∞. If {W̃^t} converges to a point W^*, then {W^t} also converges to W^*. We then show that if {W̃^t} converges to a point, almost surely this point of convergence is stationary. This requires the following lemma on the variance reduction of RMDA, meaning that the variance of using V_t to estimate ∇f(W^{t−1}) reduces to zero, in the sense that α_t^{−1} V_t converges to ∇f(W^{t−1}) almost surely; this result could be of its own interest. The first claim below uses a classical result in stochastic optimization that can be found in, for example, (Gupal, 1979, Theorem 4.1, Chapter 2.4), but the second one is, to our knowledge, new. Lemma 2. Consider Algorithm 1. Assume that for any ξ ∼ D, f_ξ is L-Lipschitz-continuously-differentiable almost surely for some L, so f is also L-Lipschitz-continuously-differentiable, and that there is C ≥ 0 such that E_{ξ_t∼D} ‖∇f_{ξ_t}(W^{t−1})‖² ≤ C for all t. If {η_t} satisfies Σ β_t η_t α_t^{−1} = ∞, Σ (β_t η_t α_t^{−1})² < ∞, ‖W^{t+1} − W^t‖ / (β_t η_t α_t^{−1}) → 0 almost surely, (4) then α_t^{−1} V^t → ∇f(W^{t−1}) with probability one. Moreover, if {W^t} lies in a bounded set, we get E‖α_t^{−1} V^t − ∇f(W^{t−1})‖² → 0 even if the second condition in (4) is replaced by the weaker condition β_t η_t α_t^{−1} → 0. In general, the last condition in (4) requires some regularity conditions on F to control the speed of change of W^t. One possibility is that when ψ is the indicator function of a convex set, β_t η_t ∝ t^p for p ∈ (1/2, 1) will satisfy this condition. However, in other settings for η_t, even when F and ψ are both convex, existing analyses for the classical RDA with c_t ≡ 1 in Algorithm 1 still need an additional local error bound assumption to control the change of W^{t+1} − W^t. Hence, to stay focused on our main message, we take this assumption for granted, and leave finding suitable sufficient conditions for it as future work. With the help of Lemmas 1 and 2, we can now show the stationarity result for the limit point of the iterates. The assumption of β_t α_t^{−1} approaching 0 below is classical in analyses of dual averaging, in order to gradually remove the influence of the term ‖W − W^0‖². Theorem 1. Consider Algorithm 1 with the conditions in Lemmas 1 and 2 satisfied, and assume that the set of stationary points Z := {W | 0 ∈ ∂F(W)} is nonempty and β_t α_t^{−1} → 0. For any given W^0, consider the event that {W̃^t} converges to a point W^* (each event corresponds to a different W^*). If ∂ψ is outer semicontinuous at W^* and this event has a nonzero probability, then W^* ∈ Z, or equivalently, W^* is a stationary point, with probability one conditional on this event. Finally, with Lemmas 1 and 2 and Theorem 1, we prove the main result that the active manifold of the limit point is identified in finite iterations of RMDA under nondegeneracy. Theorem 2. Consider Algorithm 1 with the conditions in Theorem 1 satisfied. Consider the event of {W̃^t} converging to a certain point W^* as in Theorem 1. If the probability of this event is nonzero; ψ is prox-regular and subdifferentially continuous at W^* and partly smooth at W^* relative to the active C² manifold M; ∂ψ is outer semicontinuous at W^*; and the nondegeneracy condition −∇f(W^*) ∈ relint ∂ψ(W^*) (5) holds at W^*, then conditional on this event, almost surely there is T_0 ≥ 0 such that W̃^t ∈ M, ∀t ≥ T_0. (6) In other words, the active manifold at W^* is identified by the iterates of Algorithm 1 after a finite number of iterations almost surely.
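One practical way to observe the identification guaranteed by Theorem 2 in the group-sparse case is to track the zero-group pattern of W̃^t (which determines the active manifold for group regularizers) over training and check that it eventually stops changing. A minimal sketch with hypothetical helper names follows; the grouping by output unit is an illustrative choice.

```python
import torch

def zero_group_pattern(model, tol=0.0):
    """Return the set of (parameter name, output index) pairs whose group is entirely zero."""
    pattern = set()
    for name, p in model.named_parameters():
        if p.dim() < 2:
            continue
        group_norms = p.detach().flatten(1).norm(dim=1)     # one group per output unit
        for idx in (group_norms <= tol).nonzero().flatten().tolist():
            pattern.add((name, idx))
    return pattern

# Recording the pattern once per epoch, identification shows up as the pattern
# staying constant from some finite epoch onward:
# patterns = [zero_group_pattern(model_at_epoch_i) for i in range(num_epochs)]
# stable_from = next(i for i in range(len(patterns))
#                    if all(p == patterns[i] for p in patterns[i:]))
```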
As mentioned in Section 1, an important reason for studying manifold identification is to get the lowest-dimensional manifold representing the structure of the limit point, which often corresponds to a preferred property for the application, like the highest sparsity, lowest rank, or lowest VC dimension locally. See an illustrated example in Appendix B.1. 4 Applications in Deep Learning We discuss two popular schemes of training structured deep learning models achieved through regularization to demonstrate the applications of RMDA. More technical details for applying our theory to the regularizers in these applications are in Appendix B. 4.1 Structured Sparsity As modern deep NN models are often gigantic, it is sometimes desirable to trim the model to a smaller one when only limited resources are available. In this case, zeroing out redundant parameters during training at the group level is shown to be useful (Zhou et al., 2016), and one can utilize regularizers promoting structured sparsity for this purpose. The most famous regularizer of this kind is the group-LASSO norm (Yuan & Lin, 2006; Friedman et al., 2010). Given λ ≥ 0 and a collection G of index sets {Ig} of the variable W , this convex regularizer is defined as ψ(W ) := λ ∑|G| g=1 wg ∥∥WIg∥∥, (7) with wg > 0 being the pre-specified weight for Ig. For any W ∗, let GW∗ ⊆ G be the index set such that W ∗Ij = 0 for all j ∈ GW∗ , the group-LASSO norm is partly smooth around W ∗ relative to the manifold MW∗ := {W |WIi = 0,∀i ∈ GW∗}, so our theory applies. In order to promote structured sparsity, we need to carefully design the grouping. Fortunately, in NNs, the parameters can be grouped naturally (Wen et al., 2016). For any fully-connected layer, let W ∈ Rout×in be the matrix representation of the associated parameters, where out is the number of output neurons and in is that of input neurons, we can consider the column-wise groups, defined as W:,j for all j, and the row-wise groups of the form Wi,:. For a convolutional layer with W ∈ Rfilter×channel×height×width being the tensor form of the corresponding parameters, we can consider channel-wise, filter-wise, and kernel-wise groups, defined respectively as W:,j,:,:, Wi,:,:,: and Wi,j,:,:. 4.2 Binary/Discrete Neural Networks Making the parameters of an NN binary integers is another way to obtain a more compact model during training and deployment (Hubara et al., 2016), but discrete optimization is hard to scale-up. Using a vector representation w ∈ Rm of the variables, Hou et al. (2017) thus proposed to use the indicator function of { w | wIi = αibIi , αi > 0, bIi ∈ {±1}|Ii| } to induce the entries of w to be binary without resorting to discrete optimization tools, where each Ii enumerates all parameters in the i-th layer. Yang et al. (2019) later proposed to use minα∈[0,1]m ∑m i=1 ( αi(wi + 1)2 + (1− αi)(wi − 1)2 ) as the regularizer and to include α as a variable to train. At any α∗ with I0 := {i | α∗i = 0} and I1 := {i | α∗i = 1}, the objective is partly smooth relative to the manifold {(W,α) | αI0 = 0, αI1 = 1}. Extension to discrete NNs beyond the binary ones is possible, and Bai et al. (2019) have proposed regularizers with closed-form proximal operators for it. 5 Experiments We use the structured sparsity application in Section 4.1 to empirically exemplify the ability of RMDA to find desired structures in the trained NNs. RMDA and the following methods for structured sparsity in deep learning are compared using PyTorch (Paszke et al., 2019). 
• ProxSGD (Yang et al., 2019): A simple proxMSGD algorithm. To obtain group sparsity, we skip the interpolating step in Yang et al. (2019). • ProxSSI (Deleu & Bengio, 2021): This is a special case of the adaptive proximal SGD framework of Yun et al. (2021) that uses the Newton-Raphson algorithm to approximately solve the subproblem. We directly use the package released by the authors. We exclude the algorithm of Wen et al. (2016) because their method is shown to be worse than ProxSSI by Deleu & Bengio (2021). To compare these algorithms, we examine both the validation accuracy and the group sparsity level of their trained models. We compute the group sparsity as the percentage of groups whose elements are all zero, so the reported group sparsity is zero when there is no group with a zero norm, and is one when the whole model is zero. For all methods above, we use (7) with column-wise and channel-wise groupings in the regularization for training, but adopt the kernel-wise grouping in their group sparsity evaluation. Throughout the experiments, we always use multi-step learning rate scheduling that decays the learning rate by a constant factor every time the epoch count reaches a pre-specified threshold. For all methods, we conduct grid searches to find the best hyperparameters. All results shown in tables in Sections 5.1 and 5.2 are the mean and standard deviation of three independent runs with the same hyperparameters, while figures use one representative run for better visualization. In convex optimization, a popular way to improve the practical convergence behavior for momentum-based methods is restarting that periodically reset the momentum to zero (O’donoghue & Candes, 2015). Following this idea, we introduce a restart heuristic to RMDA. At each round, we use the output of Algorithm 1 from the previous round as the new input to the same algorithm, and continue using the scheduling η and c without resetting them. For ψ ≡ 0, Jelassi & Defazio (2020) suggested to increase ct proportional to the decrease of ηt until reaching ct = 1. We adopt the same setting for ct and ηt and restart RMDA whenever ηt changes. As shown in Section 3 that W̃ t finds the active manifold, increasing ct to 1 also accords with our interest in identifying the stationary structure. 5.1 Correctness of Identified Structure Using Synthetic Data Our first step is to numerically verify that RMDA can indeed identify the stationary structure desired. To exactly find a stationary point and its structure a priori, we consider synthetic problems. We first decide a ground truth model W that is structured sparse, generate random data points that can be well separated by W , and then decide their labels using W . The generated data are then taken as our training data. We consider a linear logistic regression model and a small NN that has one fully-connected layer and one convolutional layer. To ensure convergence to the ground truth, for logistic regression we generate more data points than the problem dimension to ensure the problem is strongly convex so that there is only one stationary/optimal point, and for the small NN, we initialize all algorithms close enough to the ground truth. We report in Fig. 1 training error rates (as an indicator for the proximity to the ground truth) and percentages of the optimal group sparsity pattern of the ground truth identified. 
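All of the compared proximal methods rely on the proximal operator of the group-LASSO regularizer in equation 7, which has the closed form of block soft-thresholding. A minimal sketch for a weight tensor grouped along its first (filter-wise or column-wise) dimension is given below; the function name is illustrative and the per-group weights w_g are taken as 1 for simplicity.

```python
import torch

def group_lasso_prox(W, step_lambda):
    """Block soft-thresholding: prox of step_lambda * sum_g ||W_g|| with one group per
    slice W[g] along the first dimension (e.g., filter-wise or column-wise groups)."""
    flat = W.flatten(1)
    norms = flat.norm(dim=1, keepdim=True)
    scale = torch.clamp(1.0 - step_lambda / norms.clamp_min(1e-12), min=0.0)
    return (flat * scale).view_as(W)

# Example: prox on a conv layer's weights, one group per output channel (filter-wise),
# followed by the group sparsity metric (fraction of all-zero groups) used for evaluation
W = torch.randn(64, 3, 3, 3)
W_new = group_lasso_prox(W, step_lambda=0.05)
group_sparsity = (W_new.flatten(1).norm(dim=1) == 0).float().mean()
```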
Clearly, although all methods converge to the ground truth, only RMDA identifies the correct structure of it, and other methods without guarantees for manifold identification fail.

5.2 Neural Networks with Real Data

We turn to real-world data used in modern computer vision problems. We consider two rather simple models and six more complicated modern CNN cases. The two simpler models are linear logistic regression with the MNIST dataset (LeCun et al., 1998), and training a small NN with seven fully-connected layers on the FashionMNIST dataset (Xiao et al., 2017). The six more complicated cases are:
1. A version of LeNet5 with the MNIST dataset,
2. The same version of LeNet5 with the FashionMNIST dataset,
3. A modified VGG19 (Simonyan & Zisserman, 2015) with the CIFAR10 dataset (Krizhevsky, 2009),
4. The same modified VGG19 with the CIFAR100 dataset (Krizhevsky, 2009),
5. ResNet50 (He et al., 2016) with the CIFAR10 dataset, and
6. ResNet50 with the CIFAR100 dataset.
For these six more complicated tasks, we include a dense baseline of MSGD with no sparsity-inducing regularizer in our comparison. For all training algorithms on VGG19 and ResNet50, we follow the standard practice in modern vision tasks to apply data augmentation through random cropping and horizontal flipping, so that the training problem is no longer a finite-sum one.

From Fig. 2, we see that, similar to the previous experiment, the group sparsity level of RMDA is stable in the last epochs, while that of ProxSGD and ProxSSI oscillates below. This suggests that RMDA is the only method that, as proven in Section 3, identifies the structured sparsity at its limit point, and other methods with no variance reduction fail. Moreover, Table 1 shows that manifold identification of RMDA is achieved with no sacrifice of the validation accuracy, so RMDA beats ProxSGD and ProxSSI in both criteria, and its accuracy is close to that of the dense baseline of MSGD. Moreover, for VGG19 and ResNet50, RMDA succeeds in finding the optimal structured sparsity pattern despite the presence of data augmentation, showing that RMDA can indeed overcome the difficulty from the infinite-sum setting of modern deep learning tasks. We also report that in the ResNet50/CIFAR100 task, on our NVIDIA RTX 8000 GPU, MSGD, ProxSGD, and RMDA have similar per-epoch costs of 68, 77, and 91 seconds respectively, while ProxSSI needs 674 seconds per epoch. RMDA is thus also more suitable for large-scale structured deep learning in terms of practical efficiency.

5.3 Comparison with Pruning

We compare RMDA with a state-of-the-art pruning method, RigL (Evci et al., 2020). As pruning focuses on unstructured sparsity, we use RMDA with ψ(W) = λ‖W‖_1 to have a fair comparison, and tune λ to achieve a pre-specified sparsity level. We run RigL with 1000 epochs, as its performance at the default 500 epochs was unstable, and let RMDA use the same number of epochs. Results at 98% sparsity in Table 2 show that RMDA consistently outdoes RigL, indicating that regularized training could be a promising alternative to pruning.

6 Conclusions

In this work, we proposed and analyzed a new algorithm, RMDA, for efficiently training structured neural networks with state-of-the-art performance. Even in the presence of data augmentation, RMDA can still achieve variance reduction and provably identify the desired structure at a stationary point using the tools of manifold identification.
Experiments show that existing algorithms for the same purpose fail to find a stable stationary structure, while RMDA achieves so with no accuracy drop nor additional time cost. Acknowledgements This work was supported in part by MOST of R.O.C. grant 109-2222-E-001-003-MY3, and the AWS Cloud Credits for Research program of Amazon Inc. Appendices Table of Contents A Proofs 13 A.1 Proof of Lemma 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 A.2 Proof of Lemma 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 A.3 Proof of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 A.4 Proof of Theorem 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 B Additional Discussions on Applications 18 B.1 Structured Sparsity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 B.2 Binary Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 C Experiment Setting Details 20 D More Results from the Experiments 20 E Other Regularizers for Possibly Better Group Sparsity and Generalization 21 A Proofs A.1 Proof of Lemma 1 Proof. Using (3), the distance between W t and W ∗ can be upper bounded through the triangle inequality:∥∥W t −W ∗∥∥ = ∥∥(1− ct) (W t−1 −W ∗)+ ct (W̃ t −W ∗)∥∥ ≤ ct ∥∥W̃ t −W ∗∥∥+ (1− ct)∥∥W t−1 −W ∗∥∥. (8) For any event such that W̃ t →W ∗, for any > 0, we can find T ≥ 0 such that∥∥W̃ t −W ∗∥∥ ≤ , ∀t ≥ T . Let δt := ‖W t −W ∗‖, we see from the above inequality and (8) that δt ≤ (1− ct) δt−1 + ct , ∀t ≥ T . By deducting from both sides, we get that (δt − ) ≤ (1− ct) (δt−1 − ) , ∀t ≥ T . Since ∑ ct =∞, we further deduce that lim t→∞ (δt − ) ≤ ∞∏ t=T (1− ct) (δT −1 − ) ≤ ∞∏ t=T exp (−ct) (δT −1 − ) = exp ( − ∞∑ t=T ct ) (δT −1 − ) = 0, where in the first inequality we used the fact that exp(x) ≥ 1 +x for all real number x. The result above then implies that lim t→∞ δt ≤ . As is arbitrary and δt ≥ 0 from the definition, we conclude that limt→∞ δt = 0, which is equivalent to that W t →W ∗. A.2 Proof of Lemma 2 Proof. We observe that α−1t V t = t∑ k=1 ηkβk αt ∇fξk ( W k−1 ) = αt−1 αt α−1t−1V t−1 + αt − αt−1 αt ∇fξt ( W t−1 ) = ( 1− βtηt αt ) α−1t−1V t−1 + βtηt αt ∇fξt ( W t−1 ) . From that f is L-Lipschitz-continuously differentiable, we have that∥∥Eξt+1∼D [∇fξt+1 (W t)]− Eξt∼D [∇fξt (W t−1)]∥∥ = ∥∥∇f (W t)− f (W t−1)∥∥ ≤ L ∥∥W t −W t−1∥∥. (9) Therefore, (4) and (9) imply that 0 ≤ ∥∥Eξt+1∼D [∇fξt+1 (W t)]− Eξt∼D [∇fξt (W t−1)]∥∥ βtηtα −1 t ≤ L ∥∥W t −W t−1∥∥ βtηtα −1 t a.s.−−→ 0, which together with the sandwich lemma shows that∥∥Eξt+1∼D [∇fξt+1 (W t)]− Eξt∼D [∇fξt (W t−1)]∥∥ βtηtα −1 t a.s.−−→ 0. (10) Therefore, the first two conditions of (4) together with (10) and the bounded variance assumption satisfy the requirements of (Gupal, 1979, Chapter 2.4, Theorem 4.1), so the conclusion of almost sure convergence hold. For the convergence in L2 part, we first define mt := α−1t Vt and τt := βtηtα−1t for notational ease. Consider ∥∥mt+1 −∇F (W t)∥∥2, we have from the update rule in Algorithm 1 that∥∥mt+1 −∇F (W t)∥∥2 = ∥∥(1− τt)mt + τt∇fξt+1(W t)−∇F (W t)∥∥2 = ∥∥(1− τt) (mt −∇F (W t))+ τt (∇fξt+1(W t)−∇F (W t))∥∥2 = (1− τt)2 ∥∥mt −∇F (W t)∥∥2 + τ2t ∥∥∇fξt+1(W t)−∇F (W t)∥∥2 + 2τt(1− τt)〈mt −∇F (W t), ∇fξt+1(W t)−∇F (W t)〉 = (1− τt)2 ∥∥(mt −∇F (W t−1))+ (∇F (W t−1)−∇F (W t))∥∥2 (11) + τ2t ∥∥∇fξt+1(W t)−∇F (W t)∥∥2 + 2τt(1− τt)〈mt −∇F (W t), ∇fξt+1(W t)−∇F (W t)〉. Let {Ft}t≥0 denote the natural filtration of {(mt,W t)}t≥0. Namely, Ft records the information of W 0, {ci}t−1i=0, {ηi}t−1i=0, and {ξi}ti=1. 
By defining Ut := ∥∥mt −∇F (W t−1)∥∥2 and taking expectation over (11) conditional on Ft, we obtain from E [ ∇fξt+1(W t) | Ft ] = ∇F (W t) that E [Ut+1 | Ft] = (1− τt)2 ∥∥(mt −∇F (W t−1))+ (∇F (W t−1)−∇F (W t))∥∥2 + τ2t E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] . (12) From the last condition in (4) and the Lipschitz continuity of∇F , there are random variables { t} and {ut} such that ‖ut‖ = 1, t ≥ 0, and ∇F (W t−1)−∇F (W t) = τt tut for all t > 0, with t ↓ 0 almost surely. We thus obtain that∥∥mt −∇F (W t−1) +∇F (W t−1)−∇F (W t)∥∥2 = ∥∥mt −∇F (W t−1) + τt tut∥∥2 = (1 + τt)2 ∥∥∥∥ 11 + τt (mt −∇F (W t−1))+ τt1 + τt tut ∥∥∥∥2 ≤ (1 + τt)2 ( 1 1 + τt Ut + τt 1 + τt t 2 ) , (13) where we used Jensen’s inequality and the convexity of ‖·‖2 in the last inequality. By substituting (13) back into (12), we obtain E [Ut+1|Ft] ≤ (1− τt)2(1 + τt)Ut + (1− τt)2(1 + τt)τt t2 + τt2E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] ≤ (1− τt)(Ut + τt t2) + τt2E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] ≤ (1− τt)Ut + τt t2 + τt2E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] . (14) For the last term in (14), we notice that E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] ≤ 2(E [∥∥∇fξt(W t)∥∥2]+ ∥∥∇F (W t)∥∥2) ≤ 2 ( C + ∥∥∇F (W t)∥∥2) , (15) where the last inequality is from the bounded variance assumption. Since by assumption the {W t} lies in a bounded set K, we have that for any point W ∗ ∈ K, W t −W ∗ is upper bounded, and thus ‖∇F (W t)−∇F (W ∗)‖ is also bounded, implying that ‖∇F (W t)‖2 ≤ C2 for some C2 ≥ 0. Therefore, (15) further leads to E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] ≤ C3 (16) for some C3 ≥ 0. Now we further take expectation on (14) and apply (16) to obtain EUt+1 ≤ (1− τt)EUt + τt t2 + τt2C3 = (1− τt)EUt + τt ( 2t + τtC3 ) . (17) Note that the third implies t ↓ 0, so this together with the second condition that τt ↓ 0 means 2t+τtC3 ↓ 0 as well, and thus for any δ > 0, we can find Tδ ≥ 0 such that 2t+τtC3 ≤ δ for all t ≥ Tδ. Thus, (17) further leads to EUt+1 − δ ≤ (1− τt)EUt + τtδ − δ = (1− τt) (EUt − δ) ,∀t ≥ Tδ. (18) This implies that (EUt − δ) becomes a decreasing sequence starting from t ≥ Tδ, and since Ut ≥ 0, this sequence is lower bounded by −δ, and hence it converges to a certain value. By recursion of (18), we have that EUt − δ ≤ t∏ i=Tδ (1− τi) (EUTδ − δ) , and from the well-known inequality (1 + x) ≤ expx for all x ∈ R, the above result leads to EUt − δ ≤ exp ( − ∑ i = Tδtτi ) (EUTδ − δ) . By letting t approach infinity and noting that the first condition of (4) indicates ∞∑ t=k τt =∞ for any k ≥ 0, we see that −δ ≤ lim t→∞ EUt − δ ≤ exp ( − ∞∑ i=Tδ τi ) (EUTδ − δ) = 0. (19) As δ is arbitrary, by taking δ ↓ 0 in (19) and noting the nonnegativity of Ut, we conclude that limEUt = 0, as desired. This proves the last result in Lemma 2. A.3 Proof of Theorem 1 Proof. Using Lemma 2, we can view α−1t V t as ∇f(W t) plus some noise that asymptotically decreases to zero with probability one: α−1t Vt = ∇f(W t) + t, ‖ t‖ a.s.−−→ 0. (20) We use (20) to rewrite the optimality condition of (2) as (also see Line 5 of Algorithm 1) − ( ∇f ( W t ) + t + βtα−1t ( W̃ t −W 0 )) ∈ ∂ψ ( W̃ t ) . (21) Now we consider ∂F (W̃ t). Clearly from (21), we have that ∇f ( W̃ t ) −∇f ( W t ) − t − βtα−1t ( W̃ t −W 0 ) ∈ ∂∇f ( W̃ t ) + ψ ( W̃ t ) = ∂F ( W̃ t ) . (22) Now we consider the said event that W̃ t → W ∗ for a certain W ∗, and let us define this event as A ⊆ Ω. From Lemma 1, we know that W t → W ∗ as well under A. Let us define B ⊆ Ω as the event of t → 0, then we know that since P (A) > 0 and P (B) = 1, where P is the probability function for events in Ω, P (A ∩ B) = P (A). 
Therefore, conditional on the event of A, we have that t a.s.−−→ 0 still holds. Now we consider any realization of A∩B. For the right-hand side of (22), as W̃ t is convergent and βtα−1t decreases to zero, by letting t approach infinity, we have that lim t→∞ t + βtα−1t ( W̃ t −W 0 ) = 0 + 0 ( W ∗ −W 0 ) = 0. By the Lipschitz continuity of ∇f , we have from (3) and (4) that 0 ≤ ∥∥∇f (W̃ t)−∇f (W t)∥∥ ≤ L∥∥W t − W̃ t∥∥. As {W t} and {W̃ t} converge to the same point, we see that ∥∥W t − W̃ t∥∥→ 0, so ∇f (W̃ t)− ∇f(W t) also approaches zero. Hence, the limit of the right-hand side of (22) is lim t→∞ ∇f ( W̃ t ) − ( ∇f ( W t ) + t + βtα−1t ( W̃ t −W 0 )) = 0. (23) On the other hand, for the left-hand side of (22), the outer semicontinuity of ∂ψ at W ∗ and the continuity of ∇f show that lim t→∞ ∇f(W̃ t) + ∂ψ(W̃ t) ⊆ ∂∇f(W ∗) + ψ (W ∗) = ∂F (W ∗). (24) Substituting (23) and (24) back into (22) then proves that 0 ∈ ∂F (W ∗) and thus W ∗ ∈ Z. A.4 Proof of Theorem 2 Proof. Our discussion in this proof are all under the event that W̃ t →W ∗. From the argument in Appendix A.3, we can view α−1t V t as ∇f(W t) plus some noise that asymptotically decreases to zero with probability one as shown in (20). From Lemma 1, we know that W t →W ∗. From (21), there is U t ∈ ∂ψ ( W̃ t ) such that U t = −α−1t V t + α−1t βt ( W̃ t −W 0 ) . (25) Moreover, we define γt := W t − W̃ t. (26) By combining (25)–(26) with (20), we obtain min Y ∈∂F (W̃ t) ‖Y ‖ ≤ ∥∥∇f (W̃ t)+ U t∥∥ = ∥∥∇f (W̃ t)−∇f (W t)− t − α−1t βt (W̃ t −W 0)∥∥ ≤ ∥∥∇f (W̃ t)−∇f (W t)∥∥+ ‖ t‖+ α−1t βt∥∥W̃ t −W 0∥∥ ≤L‖γt‖+ ‖ t‖+ α−1t βt (∥∥W ∗ − W̃ t∥∥+ ∥∥W 0 −W ∗∥∥) , (27) where we used the Lipschitz continuity of ∇f and the triangle inequality in the last inequality. We now separately bound the terms in (27). From that W t → W ∗ and W̃ t → W ∗, it is straightforward that ‖γt‖ → 0. The second term decreases to zero almost surely according to (20) and the argument in Appendix A.3. For the last term, since α−1t βt → 0, and∥∥W̃ t −W ∗∥∥→ 0, we know that α−1t βt ∥∥W 0 −W ∗∥∥→ 0, α−1t βt∥∥W̃ t −W ∗∥∥→ 0. Therefore, we conclude from the above argument and (27) that min Y ∈∂F (W̃ t) ‖Y ‖ a.s.−−→ 0. As f is smooth with probability one, we know that if ψ is partly smooth at W ∗ relative toM, then so is F = f + ψ with probability one. Moreover, Lipschitz-continuously differentiable functions are always prox-regular, and the sum of two prox-regular functions is still proxregular, so F is also prox-regular at W ∗ with probability one. Following the argument identical to that in Appendix A.3, we know that these probability one events are still probability one conditional on the event of W̃ t →W ∗ as this event has a nonzero probability. As W̃ t →W ∗ and ∇f(W̃ t) +U t a.s.−−→ 0 ∈ ∂F (W ∗) (the inclusion is from (5)), we have from the subdifferential continuity of ψ and the smoothness of f that F (W̃ t) a.s.−−→ F (W ∗). Since we also have W̃ t →W ∗ and minY ∈∂F (W̃ t) ‖Y ‖ a.s.−−→ 0, clearly( W̃ t, F ( W t ) , min Y ∈∂F(W̃ t) ‖Y ‖ ) a.s.−−→ (W ∗, F (W ∗), 0) . (28) Therefore, (28) and (5) together with the assumptions on ψ at W ∗ imply that with probability one, all conditions of Lemma 1 of Lee (2020) are satisfied, so from it, (6) holds almost surely, conditional on the event of W̃ t →W ∗. B Additional Discussions on Applications We now discuss in more technical details the applications in Section 4.1, especially regarding how the regularizers satisfy the properties required by our theory. 
B.1 Structured Sparsity We start our discussion with the simple `1 norm as the warm-up for the group-LASSO norm. It is clear that ‖W‖1 is a convex function that is finite everywhere, so it is prox-regular, subdifferentially continuous, and regular everywhere, hence we just need to discuss about the remaining parts in Definition 1. Consider a problem with dimension n > 0. Note that ‖x‖1 = n∑ i=1 |xi|, and the absolute value is smooth everywhere except the point of origin. Therefore, it is clear that ‖x‖1 is locally smooth if xi 6= 0 for all i. For any point x∗, when there is an index set I such that x∗i = 0 for all i ∈ I and x∗i 6= 0 for i /∈ I, we see that the part of the norm corresponds to IC (the complement of I):∑ i∈IC |x∗i | is locally smooth around x∗. Without loss of generality, we assume that I = {1, 2, . . . , k} for some k ≥ 0, then the subdifferential of ‖x‖1 at x∗ is the set {sgn(x1)} × · · · × {sgn(xk)} × [−1, 1]n−k, (29) and clearly if we move from x∗ along any direction y := (y1, . . . , yk, 0, . . . , 0) with a small step, the function value changes smoothly as it is a linear function, satisfying the first condition of Definition 1. Along the same direction y with a small enough step, the set of subdifferential remains the same, so the continuity of subdifferential requirement holds. We can also observe from the above argument that the manifold should be Mx∗ = {x | xi = 0,∀i ∈ I}, and clearly it is a subspace of Rn with its normal space at x∗ being N := {y | 〈x∗, y〉 = 0} = {y | yi = 0,∀i ∈ IC}, which is clearly the affine span of (29) with the translation being (sgn(x1)× · · ·× sgn(xk), 0, . . . , 0). Moreover, indeed the manifolds are low dimensional ones, and for iterates approaching x∗, staying in this active manifold means that the (unstructured) sparsity of the iterates is the same as the limit point x∗. We also provide a graphical illustration of ‖x‖1 with n = 2 in Fig. 3. We can observe that for any x with x1 6= 0 and x2 6= 0, the function is smooth locally around any point, meaning that ‖x‖1 is partly smooth relative to the whole space at x (so actually smooth locally around x). For x with x1 = 0, the function value corresponds to the sharp valley in the graph, and we can see that the function is smooth along the valley, and this valley corresponds to the one-dimensional manifold {x | x1 = 0} for partial smoothness. Next, we use the same graph to illustrate the importance of manifold identification. Consider that the red point x∗ = (0, 1.5) is the limit point of the iterates of a certain algorithm, and the yellow points and black points are two sequences that both converge to x∗. If the iterates of the algorithm are the black points, then clearly except for the limit point itself, all iterates are nonsparse, and thus the final output of the algorithm is also nonsparse unless we can get to exactly the limit point within finite iterations (which is usually impossible for iterative methods). On the other hand, if the iterates are the yellow points, this is the case that the manifold is identified, because all points sit in the valley and enjoy the same sparsity pattern as the limit point x∗. This is why we concern about manifold identification when we solve regularized optimization problems. From this example, we can also see an explanation for why our algorithm with the property of manifold identification performs better than other methods without such a property. Consider a Euclidean space any point x∗ with an index set I such that x∗I = 0 and |I| > 0. 
This means that x∗ has at least one coordinate being zero, namely x∗ contains sparsity. Now let 0 := min i∈IC |x∗i |, then from the definition of I, 0 > 0. Fro any sequence {xt} converging to x∗, for any ∈ (0, 0), we can find T ≥ 0 such that∥∥xt − x∗∥∥2 ≤ , ∀t ≥ T . Therefore, for any i /∈ I, we must have that xti 6= 0 for all t ≥ T . Otherwise, ‖xt − x∗‖2 ≥ 0, but 0 > ≥ ‖xt − x∗‖2, leading to a contradiction. On the other hand, for any i ∈ I, we can have xti 6= 0 for all t without violating the convergence. That being said, for any sequence converging to x∗, eventually the iterates cannot be sparser than x∗, so the sparsity level of x∗, or of its active manifold, is the local upper bound for the sparsity level of points converging to x∗. Therefore, if iterates of two algorithms converge to the same limit point, the one with a proven manifold identification ability clearly will produce a higher sparsity level. Similar to our example here, in applications other than sparsity, iterates converging to a limit point dwell on super-manifolds of the active manifold, and the active manifold is the minimum one that locally describes points with the same structure as the limit point, and thus identifying this manifold is equivalent to finding the locally most ideal structure of the application. Now back to the sparsity case. One possible concern is the case that the limit point is (0, 0) in the two-dimension example. In this case, the manifold is the 0-dimensional subspace {0}. If this is the case and manifold identification can be ensured, it means that limit point itself can be found within finite iterations. This case is known as the weak sharp minima (Burke & Ferris, 1993) in nonlinear optimization, and its associated finite termination property is also well-studied. For this example, We also see that ‖x‖1 is partly smooth at any point x∗, but the manifold differs with x∗. This is a specific benign example, and in other cases, partial smoothness might happen only locally at some points of interest instead of everywhere. Next, we further extend our argument above to the case of (7). This can be viewed as the `1 norm for each group and we can easily obtain similar results. Again, since the group-LASSO norm is also convex and finite everywhere, prox-regularity, regularity, and subdifferential continuity are not issues at all. For the other properties, we consider one group first, then the group-LASSO norm reduces to the `2 norm. Clearly, ‖x‖2 is smooth locally if x 6= 0, with the gradient being x/‖x‖2, but it is nonsmooth at the point x = 0, where the subdifferential is the unit ball. This is very similar to the absolute value, whose subdifferential at 0 is the interval [−1, 1]. Thus, we can directly apply similar arguments above, and conclude that for any W ∗, (7) is partly smooth at W ∗ with respect to the manifold MW∗ = {W | WIg = 0,∀g : W ∗Ig = 0}, which is again a lower-dimensional subspace. Therefore, the manifold of defining the partial smoothness for the group-LASSO norm exactly corresponds to its structured sparsity pattern. B.2 Binary Neural Networks We continue to consider the binary neural network problem. For easier description, for the Euclidean space E we consider, we will use a vectorized representation for W,A ∈ E such that the elements are enumerated as W1, . . . ,Wn and α1, . . . , αn. 
The corresponding optimization problem can therefore be written as

min_{W,A∈E} E_{ξ∼D}[f_ξ(W)] + λ Σ_{i=1}^{n} (α_i (w_i + 1)^2 + (1 − α_i)(w_i − 1)^2 + δ_{[0,1]}(α_i)),   (30)

where, given any set C, δ_C is the indicator function of C, defined as δ_C(x) = 0 if x ∈ C and δ_C(x) = ∞ otherwise. We see that except for the indicator function part, the objective is smooth, so the real partly smooth term that we treat as the regularizer is

Φ(α) := Σ_{i=1}^{n} δ_{[0,1]}(α_i).

We note that for α_i ∈ (0, 1), the value of δ_{[0,1]}(α_i) remains a constant zero in a neighborhood of α_i, and for α_i ∉ [0, 1], the indicator function is also constantly infinite within a neighborhood. Thus, the point of nonsmoothness happens only at α_i ∈ {0, 1}, and similar to our discussion in the previous subsection, Φ is partly smooth along directions that keep those α_i at the boundary (namely, being either 0 or 1) unchanged. The identified manifold therefore corresponds to the entries of α_i that are fixed at 0 or 1, and this can serve as the indicator for the desired binary pattern in this task.

C Experiment Setting Details

For the weights w_g of each group in (7), for all experiments in Section 5, we follow Deleu & Bengio (2021) to set w_g = sqrt(|I_g|). All ProxSSI parameter settings, excluding the regularization weight and the learning rate schedule, follow the default values in their package. Tables 3 to 13 provide detailed settings of Section 5.2. For the modified VGG19 model, we follow Deleu & Bengio (2021) to eliminate all fully-connected layers except the output layer, and add one batch-norm layer (Ioffe & Szegedy, 2015) after each convolutional layer to simulate modern CNNs like those proposed in He et al. (2016); Huang et al. (2017). For ResNet50 in the structured sparsity experiment in Section 5.2, our version of ResNet50 is the one constructed by the publicly available script at https://github.com/weiaicunzai/pytorch-cifar100. In the unstructured sparsity experiment presented in Section 5.3, for better comparison with existing works in the literature of pruning, we adopt the version of ResNet50 used by Sundar & Dwaraknath (2021).3 Table 14 provides detailed settings of Section 5.3. For RigL, we use the PyTorch implementation of Sundar & Dwaraknath (2021).

D More Results from the Experiments

In this section, we provide more details of the results of the experiments we conducted in the main text. In particular, in Fig. 4, we present the change of validation accuracies and group sparsity levels with epochs for the group sparsity tasks in Section 5.2. We then present in Fig. 5 validation accuracies and unstructured sparsity levels versus epochs for the task in Section 5.3. We note that although it takes more epochs for RMDA to fully stabilize in terms of manifold identification, the sparsity level usually only changes in a very limited range once (sometimes even before) the validation accuracy becomes steady, meaning that we do not need to run the algorithm for an unreasonably long time to obtain satisfactory results.

3 https://github.com/varun19299/rigl-reproducibility.

E Other Regularizers for Possibly Better Group Sparsity and Generalization

A downside of (7) is that it pushes all groups toward zero and thus introduces bias in the final model. As a remedy, the minimax concave penalty (MCP, Zhang, 2010) has been proposed to penalize only the groups whose norm is smaller than a user-specified threshold.
More precisely, given hyperparameters λ ≥ 0 and ω ≥ 1, the one-dimensional MCP is defined by

MCP(w; λ, ω) := λ|w| − w^2/(2ω) if |w| < ωλ, and ωλ^2/2 if |w| ≥ ωλ.

One can then apply the above formulation to the norm of a vector to achieve the effect of inducing group sparsity. In our case, given an index set I_g that represents a group, the MCP for this group is then computed as (Breheny & Huang, 2009)

MCP(W_{I_g}; λ_g, ω_g) := λ_g ‖W_{I_g}‖ − ‖W_{I_g}‖^2/(2ω_g) if ‖W_{I_g}‖ < ω_g λ_g, and ω_g λ_g^2/2 if ‖W_{I_g}‖ ≥ ω_g λ_g.

We then consider

ψ(W) = Σ_{g=1}^{|G|} MCP(W_{I_g}; λ_g, ω_g).   (31)

It is shown in Deleu & Bengio (2021) that group MCP regularization may simultaneously provide higher group sparsity and better validation accuracy than the group-LASSO norm in vision and language tasks. Another possibility to enhance sparsity is to add another ℓ1-norm or entry-wise MCP regularization to the group-level regularizer. The major drawback of these approaches is the requirement of additional hyperparameters, and we prefer simpler approaches over those with more hyperparameters, as hyperparameter tuning in the latter can be troublesome for users with limited computational resources, and using a simpler setting can also help us to focus on the comparison of the algorithms themselves. The experiment in this subsection is therefore only for illustrating that these more complicated regularizers can be combined with RMDA if the user wishes, and such regularizers might lead to better results. Therefore, we train a version of LeNet5, which is slightly simpler than the one we used in previous experiments, on the MNIST dataset with such regularizers using RMDA and display the respective performance of various regularization schemes in Fig. 6.

For the weights w_g of each group in (7), in this experiment we consider the following setting. Let L_i be the collection of all index sets that belong to the i-th layer in the network, and denote by N_{L_i} := Σ_{I_j ∈ L_i} |I_j| the number of parameters in this layer; for all i, we set w_g = sqrt(N_{L_i}) for all g such that I_g ∈ L_i. Given two constants λ > 0 and ω > 1, the values of λ_g and ω_g in (31) are then assigned as λ_g = λ w_g and ω_g = ω w_g. In this figure, group LASSO is abbreviated as GLASSO; ℓ1-norm plus a group-LASSO norm, L1GLASSO; group MCP, GMCP; element-wise MCP plus group MCP, L1GMCP. Our results exemplify that different regularization schemes might have different benefits on one of the criteria with proper hyperparameter tuning. The detailed numbers are reported in Table 15 and the experiment settings can be found in Tables 16 and 17.

Table 15: Results of training LeNet5 on MNIST using RMDA with different regularizers. We report mean and standard deviation of three independent runs.
Regularizer | Validation accuracy | Group sparsity
GLASSO | 99.11 ± 0.06% | 45.33 ± 0.99%
L1GLASSO | 99.02 ± 0.01% | 58.92 ± 1.30%
GMCP | 99.25 ± 0.08% | 32.81 ± 0.96%
L1GMCP | 99.21 ± 0.03% | 32.91 ± 0.35%

Table 16: Details of the modified simpler LeNet5 for the experiment in Appendix E (see https://github.com/zihsyuan1214/rmda/blob/master/Experiments/Models/lenet5_small.py).
Parameter | Value
Number of layers | 5
Number of convolutional layers | 3
Number of fully-connected layers | 2
Size of convolutional kernels | 5 × 5
Number of output filters (layers 1, 2) | 6, 16
Number of output neurons (layers 3, 4, 5) | 120, 84, 10
Kernel size, stride, padding of max pooling | 2 × 2, none, invalid
Operations after convolutional layers | max pooling
Activation function for convolution/output layer | relu/softmax
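To make the group-MCP regularizer (31) of Appendix E concrete, the following is a small sketch of how its value could be evaluated for a given grouping. It only illustrates the formula above; the interface and names are our own assumptions and are not taken from the released code.

import torch

def group_mcp(groups):
    """Group MCP of (31). `groups` is an iterable of (w_g, lam_g, omega_g) triples,
    where w_g is the parameter tensor of one group and lam_g, omega_g already
    include the per-layer group weights described in Appendix E."""
    total = 0.0
    for w_g, lam_g, omega_g in groups:
        n = w_g.norm()
        quad = lam_g * n - n ** 2 / (2.0 * omega_g)        # active when ||w_g|| < omega_g * lam_g
        cap = n.new_tensor(omega_g * lam_g ** 2 / 2.0)     # constant value otherwise
        total = total + torch.where(n < omega_g * lam_g, quad, cap)
    return total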
1. What is the focus and contribution of the paper regarding regularization techniques in NN training? 2. What are the strengths and weaknesses of the proposed RMDA algorithm compared to other variance reduction techniques? 3. Do you have any concerns about the claims made in the paper, particularly regarding structure identification and superior empirical performance? 4. How does the reviewer assess the theoretical analysis and assumptions made in the paper? 5. What is the significance of the contradiction between RMDA's ability to identify the active manifold in finite iterations and the limitation of proximal stochastic gradient algorithms in convex optimization?
Summary Of The Paper Review
Summary Of The Paper Regularization is generally used to impose desired structure on NNs during training. The authors developed RMDA (Regularized Modernized Dual Averaging), which uses a weighted average of the previous stochastic gradients to compute a tentative update via the proximal operation associated with the regularizer. The parameters of the model are then updated in the direction of this tentative update with a pre-defined factor. The authors theoretically analyzed the performance of their algorithm and showed that, under some assumptions, RMDA can identify the structure of the model in finite iterations.

Review Using momentum and different regularizers to impose the desired structure during optimization, and esp. NN training, is common in machine learning. The proposed RMDA can be viewed as a non-trivial extension of the dual averaging algorithm to use momentum. I found the claims not 100% accurate, and sometimes misleading:

Variance reduction beyond ...: It was not clear how this is achieved. Can the authors explain what they meant by variance reduction and where in the paper they showed it? Also, despite the claim that RMDA's cost is the same as SGD, keeping track of momentum and dual averaging makes it slightly more complex than SGD (although still better than other variance reduction techniques).

Guaranteed structure identification: I find this a slight exaggeration of the results, as there is no theoretical result on the convergence of RMDA at all. Theorem 2 assumes that the algorithm converges and shows that, under some assumptions, the converged parameter belongs to the active manifold.

Superior empirical performance: "RMDA identifies the optimum structure" is hard to prove, as there is no way to show that the structure found by any algorithm is the best and optimum, even though it is better than the baseline.

The theoretical analysis is not complete, and is based on restrictive assumptions and on expecting that, with the given values of β_t and η_t, the algorithm converges.

In Theorem 2, does T_0 refer to the extra iterations of RMDA after W̃^t has converged, or does it include all iterations of RMDA, including the convergence of W̃^t?

Section 3: It is stated that in convex optimization, "algorithms based on proximal stochastic gradient are unable to identify the manifold within finite iterations". However, RMDA can identify the active manifold in finite iterations. What is the main reason for this contradictory result? Is it because of the set of assumptions made in the analysis, or the way that momentum is incorporated in developing the algorithm?

Minor suggestions: It would be much better to define all notations used in the paper, for example the notations for subdifferential, interior, and relative interior (although mostly standard).
ICLR
Title Training Structured Neural Networks Through Manifold Identification and Variance Reduction Abstract This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term for promoting desired structures. RMDA does not incur computation additional to proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we prove that after a finite number of iterations, all iterates of RMDA possess a desired structure identical to that induced by the regularizer at the stationary point of asymptotic convergence, even in the presence of engineering tricks like data augmentation that complicate the training process. Experiments on training NNs with structured sparsity confirm that variance reduction is necessary for such an identification, and show that RMDA thus significantly outperforms existing methods for this task. For unstructured sparsity, RMDA also outperforms a state-of-the-art pruning method, validating the benefits of training structured NNs through regularization. Implementation of RMDA is available at https://www.github.com/zihsyuan1214/rmda. 1 Introduction Training neural networks (NNs) with regularization to obtain a certain desired structure such as structured sparsity or discrete-valued parameters is a problem of increasing interest. Existing approaches either use stochastic subgradients of the regularized objective (Wen et al., 2016; 2018) or combine popular stochastic gradient algorithms for NNs, like SGD with momentum (MSGD) or Adam (Kingma & Ba, 2015), with the proximal operator associated with the regularizer to conduct proximal stochastic gradient updates to obtain a model with preferred structures (Bai et al., 2019; Yang et al., 2019; Yun et al., 2021; Deleu & Bengio, 2021). Such methods come with proven convergence for certain measures of first-order optimality and have shown some empirical success in applications. However, we notice that an essential theoretical support lacking in existing methods is the guarantee for the output iterate to possess the same structure as that at the point of convergence. More specifically, often the imposed regularization is only known to induce a desired structure exactly at optimal or stationary points of the underlying optimization problem (see for example, Zhao & Yu, 2006), but training algorithms are only able to generate iterates asymptotically converging to a stationary point. Without further theoretical guarantees, it is unknown whether the output iterate, which is just an approximation of the stationary point, still has the same structure. For example, let us assume that sparsity is desired, the point of convergence is x∗ = (1, 0, 0), and two algorithms respectively produce iterates {yt = (1, t−1, t−1)} and {zt = (1 + t−1, 0, 0)}. Clearly, both iterate sequences converge to x∗, but only zt has the same desired structure as its limit point x∗, while yt is not useful for sparsity despite that the point of convergence is. This work aims at filling this gap to propose an algorithm for training structured NNs that can provably make all its iterates after a finite number of iterations possess the desired structure of the stationary point to which the iterates converge. 
We term the structure at a stationary point a stationary structure, and it should be understood that for multiple stationary points, each might correspond to a different stationary structure, and we aim at identifying the one at the limit point of the iterates of an algorithm, instead of selecting the optimal one among all stationary structures. Although finding the structure at an inferior stationary point might seem not very meaningful, another reason for studying this identification property is that for the same point of convergence, the structure at the limit point is the most preferable one. Consider the same example above, we note that for any sequence {xt} converging to x∗, xt1 6= 0 for all t large enough, for otherwise xt does not converge to x∗. Therefore, xt cannot be sparser than x∗ if xt → x∗.1 Identifying the structure of the point of convergence thus also amounts to finding the locally most ideal structure under the same convergence premise. It is well-known in the literature of nonlinear optimization that generating iterates consistently possessing the structure at the stationary point of convergence is possible if all points with the same structure near the stationary point can be presented locally as a manifold along which the regularizer is smooth. This manifold is often termed as the active manifold (relative to the given stationary point), and the task of generating iterates staying in the active manifold relative to the point of convergence after finite iterations is called manifold identification (Lewis, 2002; Hare & Lewis, 2004; Lewis & Zhang, 2013). To identify the active manifold of a stationary point, we need the regularizer to be partly smooth (Lewis, 2002; Hare & Lewis, 2004) at that point, roughly meaning that the regularizer is smooth along the active manifold around the point, while the change in its value is drastic along directions leaving the manifold. A more technical definition will be given in Section 3. Fortunately, most regularizers used in machine learning are partly smooth, so stationary structure identification is possible, and various deterministic algorithms are known to achieve so (Hare & Lewis, 2007; Hare, 2011; Wright, 2012; Liang et al., 2017a;b; Li et al., 2020; Lee, 2020; Bareilles et al., 2020). On the other hand, for stochastic gradient-related methods to identify a stationary structure, existing theory suggests that the variance of the gradient estimation needs to vanish as the iterates approach a stationary point (Poon et al., 2018), and indeed, it is observed empirically that proximal stochastic gradient descent (SGD) is incapable of manifold identification due to the presence of the variance in the gradient estimation (Lee & Wright, 2012; Sun et al., 2019).2 Poon et al. (2018) showed that variance-reduction methods such as SVRG (Johnson & Zhang, 2013; Xiao & Zhang, 2014) and SAGA (Defazio et al., 2014) that utilize the finite-sum structure of empirical risk minimization to drive the variance of their gradient estimators to zero are suitable for this task. 
Unfortunately, with the standard practice of data augmentation in deep learning, training of deep learning models with a regularizer should be treated as the following stochastic optimization problem that minimizes the expected loss over a distribution, instead of the commonly seen finite-sum form:

min_{W∈E} F(W) := E_{ξ∼D}[f_ξ(W)] + ψ(W),   (1)

where E is a Euclidean space with inner product 〈·, ·〉 and the associated norm ‖·‖, D is a distribution over a space Ω, f_ξ is differentiable almost everywhere for all ξ ∈ Ω, and ψ(W) is a regularizer that might be nondifferentiable. We will also use the notation f(W) := E_{ξ∼D}[f_ξ(W)]. Without a finite-sum structure in (1), Defazio & Bottou (2019) pointed out that classical variance-reduction methods are ineffective for deep learning, and one major reason is that periodically evaluating ∇f(W) (or at least using a large batch from D to get a precise approximation of it), as required by variance-reduction methods, is intractable; hence manifold identification, and therefore finding the stationary structure, becomes an extremely tough task for deep learning. Although there have recently been efforts in developing variance-reduction methods for (1) inspired by online problems (Wang et al., 2019; Nguyen et al., 2021; Pham et al., 2020; Cutkosky & Orabona, 2019; Tran-Dinh et al., 2019), these methods all have multiple hyperparameters to tune and incur computational cost at least twice or thrice that of (proximal) SGD. As the training of deep learning models is time- and resource-consuming, these drawbacks make such methods less ideal for deep learning.

1 See a more detailed discussion in Appendix B.1.
2 An exception is the interpolation case, in which the variance of plain SGD vanishes asymptotically. But data augmentation often fails this interpolation condition.

To tackle these difficulties, we extend the recently proposed modernized dual averaging framework (Jelassi & Defazio, 2020) to the regularized setting by incorporating proximal operations, and obtain a new algorithm RMDA (Regularized Modernized Dual Averaging) for (1). The proposed algorithm provably achieves variance reduction beyond finite-sum problems without any cost or hard-to-tune hyperparameters additional to those of proximal momentum SGD (proxMSGD), and we provide theoretical guarantees for its convergence and ability for manifold identification. The key difference between RMDA and the original regularized dual averaging (RDA) of Xiao (2010) is that RMDA incorporates momentum and can achieve better performance for deep learning in terms of the generalization ability, and the new algorithm requires nontrivial proofs for its guarantees. We further conduct experiments on training deep learning models with a regularizer for structured sparsity to demonstrate the ability of RMDA to identify the stationary structure without sacrificing the prediction accuracy. When the desired structure is (unstructured) sparsity, a popular approach is pruning, which trims a given dense model to a specified level, and works like (Gale et al., 2019; Blalock et al., 2020; Evci et al., 2020; Verma & Pesquet, 2021) have shown promising results. However, as a post-processing approach, pruning is essentially different from structured training considered in this work, because pruning is mainly used when a model is available, while structured training combines training and structure inducing in one procedure to potentially reduce the computational cost and memory footprint when resources are scarce.
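As a concrete illustration of why data augmentation turns (1) into an expectation rather than a finite sum, the following short sketch pairs a standard augmentation pipeline with a regularized mini-batch objective estimate. It assumes torchvision is available; the model, loss, and regularizer are placeholders of our own choosing, not part of the paper's setup.

import torch
import torch.nn as nn
from torchvision import transforms

# Random cropping and flipping: each pass over the data draws different augmented
# samples xi ~ D, so F(W) = E_xi[f_xi(W)] + psi(W) has no fixed finite-sum form.
augment = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def regularized_objective_estimate(model, batch, labels, lam):
    """Mini-batch estimate of f_xi(W), plus the regularizer psi(W)
    (here an L1 penalty, used only as a placeholder)."""
    loss = nn.functional.cross_entropy(model(batch), labels)
    psi = lam * sum(p.abs().sum() for p in model.parameters())
    return loss + psi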
We will also show in our experiment that RMDA can achieve better performance than a state-of-the-art pruning method, suggesting that structured training indeed has its merits for obtaining sparse NNs. The main contributions of this work are summarized as follows. • Principled analysis: We use the theory of manifold identification from nonlinear opti- mization to provide a unified way towards better understanding of algorithms for training structured neural networks. • Variance reduction beyond finite-sum with low cost: RMDA achieves variance reduction for problems that consist of an infinite-sum term plus a regularizer (see Lemma 2) while incorporating momentum to improve the generalization performance. Its spatial and computational cost is almost the same as proxMSGD, and there is no additional hyperparameters to tune, making RMDA suitable for large-scale deep learning. • Structure identification: With the help of variance reduction, our theory shows that under suitable conditions, after a finite number of iterations, iterates of RMDA stay in the active manifold of its limit point. • Superior empirical performance: Experiments on neural networks with structured sparsity exemplify that RMDA can identify a stationary structure without reducing the validation accuracy, thus outperforming existing methods by achieving higher group sparsity. Another experiment on unstructured sparsity also shows RMDA outperforms a state-of-the-art pruning method. After this work is finished, we found a very recent paper Kungurtsev & Shikhman (2021) that proposed the same algorithm (with slightly differences in the parameters setting in Line 5 of Algorithm 1) and analyzed the expected convergence of (1) under a specific scheduling of ct = st+1α−1t+1 when both terms are convex. In contrast, our work focuses on nonconvex deep learning problems, and especially on the manifold identification aspect. 2 Algorithm Details of the proposed RMDA are in Algorithm 1. At the t-th iteration with the iterate W t−1, we draw an independent sample ξt ∼ D to compute the stochastic gradient ∇fξt(W t−1), decide a learning rate ηt, and update the weighted sum Vt of previous stochastic gradients using ηt and the scaling factor βt := √ t: V0 := 0, Vt := ∑t k=1 ηkβk∇fξk(W k−1) = Vt−1 + ηtβt∇fξt(W t−1), ∀t > 0. Algorithm 1: RMDA (W 0, T, η(·), c(·)) input : Initial point W 0, learning rate schedule η(·), momentum schedule c(·), number of epochs T 1 V0 ← 0, α0 ← 0 2 for t = 1, . . . , T do 3 βt ← √ t, st ← η(t)βt, αt ← αt−1 + st 4 Sample ξt ∼ D and compute V t ← V t−1 + st∇fξt(W t−1) 5 W̃ t ← arg minW 〈V t, W 〉+ βt2 ∥∥W −W 0∥∥2 + αtψ(W ) // (2) 6 W t ← (1− c(t))W t−1 + c(t)W̃ t output: The final model WT The tentative iterate W̃ t is then obtained by the proximal operation associated with ψ: W̃ t = proxαtβ−1t ψ ( W 0 − β−1t V t ) , αt := ∑t k=1 βkηk, (2) where for any function g, proxg(x) := arg miny ‖y − x‖ 2 /2 + g(y) is its proximal operator. The iterate is then updated along the direction W̃ t −W t−1 with a factor of ct ∈ [0, 1]: W t = (1− ct)W t−1 + ctW̃ t = W t−1 + ct ( W̃ t −W t−1 ) . (3) When ψ ≡ 0, RMDA reduces to the modernized dual averaging algorithm of Jelassi & Defazio (2020), in which case it has been shown that mixing W t−1 and W̃ t in (3) equals to introducing momentum (Jelassi & Defazio, 2020; Tao et al., 2018). We found that this introduction of momentum greatly improves the performance of RMDA and is therefore essential for applying it on deep learning problems. 
3 Analysis We provide theoretical analysis of the proposed RMDA in this section. Our analysis shows variance reduction in RMDA and stationarity of the limit point of its iterates, but all of them revolves around our main purpose of identification of a stationary structure within a finite number of iterations. The key tools for this end are partial smoothness and manifold identification (Hare & Lewis, 2004; Lewis, 2002). Our result is the currently missing cornerstone for those proximal algorithms applied to deep learning problems for identifying desired structures. In fact, it is actually well-known in convex optimization that those algorithms based on plain proximal stochastic gradient without variance reduction are unable to identify the active manifold, and the structure of the iterates oscillates due to the variance in the gradient estimation; see, for example, experiments and discussions in Lee & Wright (2012); Sun et al. (2019). Our work is therefore the first one to provide justification for solving the regularized optimization problem in deep learning to really identify a desired structure induced by the regularizer. Throughout, ∇fξ denotes the gradient of fξ, ∂ψ is the (regular) subdifferential of ψ, and relint(C) means the relative interior of the set C. We start from introducing the notion of partial smoothness. Definition 1. A function ψ is partly smooth at a point W ∗ relative to a set MW∗ 3W ∗ if 1. Around W ∗, MW∗ is a C2-manifold and ψ|MW∗ is C2. 2. ψ is regular (finite with the Fréchet subdifferential coincides with the limiting Fréchet subdifferential) at all points W ∈MW∗ around W ∗ with ∂ψ(W ) 6= ∅. 3. The affine span of ∂ψ(W ∗) is a translate of the normal space to MW∗ at W ∗. 4. ∂ψ is continuous at W ∗ relative to MW∗ . We often call MW∗ the active manifold at W ∗. Another concept required for manifold identification is prox-regularity (Poliquin & Rockafellar, 1996). Definition 2. A function ψ is prox-regular at W ∗ for V ∗ ∈ ∂ψ(W ∗) if ψ is finite at W ∗, locally lower semi-continuous around W ∗, and there is ρ > 0 such that ψ(W1) ≥ ψ(W2) + 〈V, W1−W2〉− ρ2‖W1 −W2‖ 2 whenever W1,W2 are close to W ∗ with ψ(W2) near ψ(W ∗) and V ∈ ∂ψ(W2) near V ∗. ψ is prox-regular at W ∗ if it is so for all V ∈ ∂ψ(W ∗). To broaden the applicable range, a function ψ prox-regular at some W ∗ is often also assumed to be subdifferentially continuous (Poliquin & Rockafellar, 1996) there, meaning that if W t → W ∗, ψ(W t) → ψ(W ∗) holds when there are V ∗ ∈ ∂ψ(W ∗) and a sequence {V t} such that V t ∈ ∂ψ(W t) and V t → V ∗. Notably, all convex and weakly-convex (Nurminskii, 1973) functions are regular, prox-regular, and subdifferentially continuous in their domain. 3.1 Theoretical Results When the problem is convex, convergence guarantees for Algorithm 1 under two specific specific schemes are known. First, when ct ≡ 1, RMDA reduces to the classical RDA, and convergence to a global optimum (of W t = W̃ t in this case) on convex problems has been proven by Lee & Wright (2012); Duchi & Ruan (2021), with convergence rates of the expected objective or the regret given by Xiao (2010); Lee & Wright (2012). Second, when ct = st+1α−1t+1 and (βt, αt) in Line 5 of Algorithm 1 are replaced by (βt+1, αt+1), convergence is recently analyzed by Kungurtsev & Shikhman (2021). In our analysis below, we do not assume convexity of either term. We show that if {W̃ t} converges to a point W ∗ (which could be a non-stationary one), {W t} also converges to W ∗. Lemma 1. 
Consider Algorithm 1 with {c_t} satisfying Σ c_t = ∞. If {W̃^t} converges to a point W*, then {W^t} also converges to W*.

We then show that if {W̃^t} converges to a point, almost surely this point of convergence is stationary. This requires the following lemma for variance reduction of RMDA, meaning that the variance of using V_t to estimate ∇f(W^{t−1}) reduces to zero, as α_t^{-1} V_t converges to ∇f(W^{t−1}) almost surely, and this result could be of its own interest. The first claim below uses a classical result in stochastic optimization that can be found at, for example, (Gupal, 1979, Theorem 4.1, Chapter 2.4), but the second one is, to our knowledge, new.

Lemma 2. Consider Algorithm 1. Assume for any ξ ∼ D, f_ξ is L-Lipschitz-continuously-differentiable almost surely for some L, so f is also L-Lipschitz-continuously-differentiable, and there is C ≥ 0 such that E_{ξ_t∼D} ‖∇f_{ξ_t}(W^{t−1})‖^2 ≤ C for all t. If {η_t} satisfies

Σ β_t η_t α_t^{-1} = ∞,   Σ (β_t η_t α_t^{-1})^2 < ∞,   ‖W^{t+1} − W^t‖ (β_t η_t α_t^{-1})^{-1} → 0 almost surely,   (4)

then α_t^{-1} V^t → ∇f(W^{t−1}) with probability one. Moreover, if {W^t} lies in a bounded set, we get E ‖α_t^{-1} V^t − ∇f(W^{t−1})‖^2 → 0 even if the second condition in (4) is replaced by the weaker condition β_t η_t α_t^{-1} → 0.

In general, the last condition in (4) requires some regularity conditions on F to control the speed of change of W^t. One possibility is when ψ is the indicator function of a convex set: β_t η_t ∝ t^p for p ∈ (1/2, 1) will satisfy this condition. However, in other settings for η_t, even when F and ψ are both convex, existing analyses for the classical RDA with c_t ≡ 1 in Algorithm 1 still need an additional local error bound assumption to control the change of W^{t+1} − W^t. Hence, to stay focused on our main message, we take this assumption for granted, and leave finding suitable sufficient conditions for it as future work.

With the help of Lemmas 1 and 2, we can now show the stationarity result for the limit point of the iterates. The assumption of β_t α_t^{-1} approaching 0 below is classical in analyses of dual averaging, in order to gradually remove the influence of the term ‖W − W^0‖^2.

Theorem 1. Consider Algorithm 1 with the conditions in Lemmas 1 and 2 satisfied, and assume the set of stationary points Z := {W | 0 ∈ ∂F(W)} is nonempty and β_t α_t^{-1} → 0. For any given W^0, consider the event that {W̃^t} converges to a point W* (each event corresponds to a different W*); then if ∂ψ is outer semicontinuous at W* and this event has a nonzero probability, W* ∈ Z, or equivalently, W* is a stationary point, with probability one conditional on this event.

Finally, with Lemmas 1 and 2 and Theorem 1, we prove the main result that the active manifold of the limit point is identified in a finite number of iterations of RMDA under nondegeneracy.

Theorem 2. Consider Algorithm 1 with the conditions in Theorem 1 satisfied. Consider the event of {W̃^t} converging to a certain point W* as in Theorem 1. If the probability of this event is nonzero; ψ is prox-regular and subdifferentially continuous at W* and partly smooth at W* relative to the active C^2 manifold M; ∂ψ is outer semicontinuous at W*; and the nondegeneracy condition

−∇f(W*) ∈ relint ∂ψ(W*)   (5)

holds at W*, then conditional on this event, almost surely there is T_0 ≥ 0 such that

W̃^t ∈ M, ∀t ≥ T_0.   (6)

In other words, the active manifold at W* is identified by the iterates of Algorithm 1 after a finite number of iterations almost surely.
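To see what Theorem 2 means operationally, the following toy sketch (our own construction, not from the paper) runs the update of Algorithm 1 on a small lasso-regularized least-squares problem with ψ = λ‖·‖_1 and prints the support of W̃^t over time. With a schedule in the spirit of (4) and c_t increased toward 1, one would expect the printed support to settle on a fixed index set, which is exactly the active manifold at the limit point; the particular constants below are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 20, 0.3
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]                 # sparse ground truth
X = rng.standard_normal((n, d))
y = X @ w_true + 0.01 * rng.standard_normal(n)

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

w0 = np.zeros(d)
w = w0.copy()
v, alpha = np.zeros(d), 0.0
for t in range(1, 5001):
    i = rng.integers(n)                        # sample xi_t ~ D
    grad = (X[i] @ w - y[i]) * X[i]            # stochastic gradient of 0.5*(x_i^T w - y_i)^2
    beta, eta = t ** 0.5, 0.1 / t ** 0.75      # beta_t = sqrt(t), eta_t decaying
    c = min(1.0, t / 2500.0)                   # momentum parameter c_t -> 1
    s = eta * beta
    alpha += s
    v += s * grad
    w_tilde = soft_threshold(w0 - v / beta, lam * alpha / beta)   # Line 5 with psi = lam*||.||_1
    w = (1 - c) * w + c * w_tilde                                  # Line 6
    if t % 1000 == 0:
        print(t, np.flatnonzero(np.abs(w_tilde) > 0))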
As mentioned in Section 1, an important reason for studying manifold identification is to get the lowest-dimensional manifold representing the structure of the limit point, which often corresponds to a preferred property for the application, like the highest sparsity, lowest rank, or lowest VC dimension locally. See an illustrated example in Appendix B.1. 4 Applications in Deep Learning We discuss two popular schemes of training structured deep learning models achieved through regularization to demonstrate the applications of RMDA. More technical details for applying our theory to the regularizers in these applications are in Appendix B. 4.1 Structured Sparsity As modern deep NN models are often gigantic, it is sometimes desirable to trim the model to a smaller one when only limited resources are available. In this case, zeroing out redundant parameters during training at the group level is shown to be useful (Zhou et al., 2016), and one can utilize regularizers promoting structured sparsity for this purpose. The most famous regularizer of this kind is the group-LASSO norm (Yuan & Lin, 2006; Friedman et al., 2010). Given λ ≥ 0 and a collection G of index sets {Ig} of the variable W , this convex regularizer is defined as ψ(W ) := λ ∑|G| g=1 wg ∥∥WIg∥∥, (7) with wg > 0 being the pre-specified weight for Ig. For any W ∗, let GW∗ ⊆ G be the index set such that W ∗Ij = 0 for all j ∈ GW∗ , the group-LASSO norm is partly smooth around W ∗ relative to the manifold MW∗ := {W |WIi = 0,∀i ∈ GW∗}, so our theory applies. In order to promote structured sparsity, we need to carefully design the grouping. Fortunately, in NNs, the parameters can be grouped naturally (Wen et al., 2016). For any fully-connected layer, let W ∈ Rout×in be the matrix representation of the associated parameters, where out is the number of output neurons and in is that of input neurons, we can consider the column-wise groups, defined as W:,j for all j, and the row-wise groups of the form Wi,:. For a convolutional layer with W ∈ Rfilter×channel×height×width being the tensor form of the corresponding parameters, we can consider channel-wise, filter-wise, and kernel-wise groups, defined respectively as W:,j,:,:, Wi,:,:,: and Wi,j,:,:. 4.2 Binary/Discrete Neural Networks Making the parameters of an NN binary integers is another way to obtain a more compact model during training and deployment (Hubara et al., 2016), but discrete optimization is hard to scale-up. Using a vector representation w ∈ Rm of the variables, Hou et al. (2017) thus proposed to use the indicator function of { w | wIi = αibIi , αi > 0, bIi ∈ {±1}|Ii| } to induce the entries of w to be binary without resorting to discrete optimization tools, where each Ii enumerates all parameters in the i-th layer. Yang et al. (2019) later proposed to use minα∈[0,1]m ∑m i=1 ( αi(wi + 1)2 + (1− αi)(wi − 1)2 ) as the regularizer and to include α as a variable to train. At any α∗ with I0 := {i | α∗i = 0} and I1 := {i | α∗i = 1}, the objective is partly smooth relative to the manifold {(W,α) | αI0 = 0, αI1 = 1}. Extension to discrete NNs beyond the binary ones is possible, and Bai et al. (2019) have proposed regularizers with closed-form proximal operators for it. 5 Experiments We use the structured sparsity application in Section 4.1 to empirically exemplify the ability of RMDA to find desired structures in the trained NNs. RMDA and the following methods for structured sparsity in deep learning are compared using PyTorch (Paszke et al., 2019). 
• ProxSGD (Yang et al., 2019): A simple proxMSGD algorithm. To obtain group sparsity, we skip the interpolating step in Yang et al. (2019). • ProxSSI (Deleu & Bengio, 2021): This is a special case of the adaptive proximal SGD framework of Yun et al. (2021) that uses the Newton-Raphson algorithm to approximately solve the subproblem. We directly use the package released by the authors. We exclude the algorithm of Wen et al. (2016) because their method is shown to be worse than ProxSSI by Deleu & Bengio (2021). To compare these algorithms, we examine both the validation accuracy and the group sparsity level of their trained models. We compute the group sparsity as the percentage of groups whose elements are all zero, so the reported group sparsity is zero when there is no group with a zero norm, and is one when the whole model is zero. For all methods above, we use (7) with column-wise and channel-wise groupings in the regularization for training, but adopt the kernel-wise grouping in their group sparsity evaluation. Throughout the experiments, we always use multi-step learning rate scheduling that decays the learning rate by a constant factor every time the epoch count reaches a pre-specified threshold. For all methods, we conduct grid searches to find the best hyperparameters. All results shown in tables in Sections 5.1 and 5.2 are the mean and standard deviation of three independent runs with the same hyperparameters, while figures use one representative run for better visualization. In convex optimization, a popular way to improve the practical convergence behavior for momentum-based methods is restarting that periodically reset the momentum to zero (O’donoghue & Candes, 2015). Following this idea, we introduce a restart heuristic to RMDA. At each round, we use the output of Algorithm 1 from the previous round as the new input to the same algorithm, and continue using the scheduling η and c without resetting them. For ψ ≡ 0, Jelassi & Defazio (2020) suggested to increase ct proportional to the decrease of ηt until reaching ct = 1. We adopt the same setting for ct and ηt and restart RMDA whenever ηt changes. As shown in Section 3 that W̃ t finds the active manifold, increasing ct to 1 also accords with our interest in identifying the stationary structure. 5.1 Correctness of Identified Structure Using Synthetic Data Our first step is to numerically verify that RMDA can indeed identify the stationary structure desired. To exactly find a stationary point and its structure a priori, we consider synthetic problems. We first decide a ground truth model W that is structured sparse, generate random data points that can be well separated by W , and then decide their labels using W . The generated data are then taken as our training data. We consider a linear logistic regression model and a small NN that has one fully-connected layer and one convolutional layer. To ensure convergence to the ground truth, for logistic regression we generate more data points than the problem dimension to ensure the problem is strongly convex so that there is only one stationary/optimal point, and for the small NN, we initialize all algorithms close enough to the ground truth. We report in Fig. 1 training error rates (as an indicator for the proximity to the ground truth) and percentages of the optimal group sparsity pattern of the ground truth identified. 
Clearly, although all methods converge to the ground truth, only RMDA identifies its correct structure, while the other methods, which have no guarantees for manifold identification, fail.

5.2 Neural Networks with Real Data
We turn to real-world data used in modern computer vision problems. We consider two rather simple models and six more complicated modern CNN cases. The two simpler models are linear logistic regression on the MNIST dataset (LeCun et al., 1998) and a small NN with seven fully-connected layers on the FashionMNIST dataset (Xiao et al., 2017). The six more complicated cases are:
1. A version of LeNet5 with the MNIST dataset,
2. The same version of LeNet5 with the FashionMNIST dataset,
3. A modified VGG19 (Simonyan & Zisserman, 2015) with the CIFAR10 dataset (Krizhevsky, 2009),
4. The same modified VGG19 with the CIFAR100 dataset (Krizhevsky, 2009),
5. ResNet50 (He et al., 2016) with the CIFAR10 dataset, and
6. ResNet50 with the CIFAR100 dataset.
For these six more complicated tasks, we include a dense baseline of MSGD with no sparsity-inducing regularizer in our comparison. For all training algorithms on VGG19 and ResNet50, we follow the standard practice in modern vision tasks and apply data augmentation through random cropping and horizontal flipping, so that the training problem is no longer a finite-sum one.
From Fig. 2, we see that, similar to the previous experiment, the group sparsity level of RMDA is stable in the last epochs, while that of ProxSGD and ProxSSI oscillates below it. This suggests that RMDA is the only method that, as proven in Section 3, identifies the structured sparsity at its limit point, while the other methods, which have no variance reduction, fail. Moreover, Table 1 shows that manifold identification of RMDA is achieved with no sacrifice of the validation accuracy, so RMDA beats ProxSGD and ProxSSI in both criteria, and its accuracy is close to that of the dense MSGD baseline. Furthermore, for VGG19 and ResNet50, RMDA succeeds in finding the optimal structured sparsity pattern despite the presence of data augmentation, showing that RMDA can indeed overcome the difficulty from the infinite-sum setting of modern deep learning tasks. We also report that in the ResNet50/CIFAR100 task, on our NVIDIA RTX 8000 GPU, MSGD, ProxSGD, and RMDA have similar per-epoch costs of 68, 77, and 91 seconds respectively, while ProxSSI needs 674 seconds per epoch. RMDA is thus also more suitable for large-scale structured deep learning in terms of practical efficiency.

5.3 Comparison with Pruning
We compare RMDA with a state-of-the-art pruning method, RigL (Evci et al., 2020). As pruning focuses on unstructured sparsity, we use RMDA with $\psi(W) = \lambda \|W\|_1$ for a fair comparison, and tune $\lambda$ to achieve a pre-specified sparsity level. We run RigL for 1000 epochs, as its performance at the default 500 epochs was unstable, and let RMDA use the same number of epochs. Results at 98% sparsity in Table 2 show that RMDA consistently outdoes RigL, indicating that regularized training could be a promising alternative to pruning.
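For this comparison, the only change needed in RMDA is the proximal operation in (2): for $\psi(W) = \lambda\|W\|_1$ it is element-wise soft-thresholding. A minimal sketch (the function names and the sparsity helper are ours, not the released implementation) is:

```python
import torch

def l1_prox(w, step):
    # prox of step * ||.||_1: element-wise soft-thresholding.
    # Entries with |w_i| <= step become exactly zero, which is what lets
    # iterates land on (and stay on) the sparse active manifold.
    return torch.sign(w) * torch.clamp(w.abs() - step, min=0.0)

def unstructured_sparsity(model):
    # Fraction of zero parameters, the quantity reported in this comparison.
    zeros = sum((p == 0).sum().item() for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    return zeros / total
```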
6 Conclusions
In this work, we proposed and analyzed a new algorithm, RMDA, for efficiently training structured neural networks with state-of-the-art performance. Even in the presence of data augmentation, RMDA can still achieve variance reduction and provably identify the desired structure at a stationary point using the tools of manifold identification. Experiments show that existing algorithms for the same purpose fail to find a stable stationary structure, while RMDA achieves so with no accuracy drop and no additional time cost.

Acknowledgements
This work was supported in part by MOST of R.O.C. grant 109-2222-E-001-003-MY3, and the AWS Cloud Credits for Research program of Amazon Inc.

Appendices
Table of Contents
A Proofs
  A.1 Proof of Lemma 1
  A.2 Proof of Lemma 2
  A.3 Proof of Theorem 1
  A.4 Proof of Theorem 2
B Additional Discussions on Applications
  B.1 Structured Sparsity
  B.2 Binary Neural Networks
C Experiment Setting Details
D More Results from the Experiments
E Other Regularizers for Possibly Better Group Sparsity and Generalization

A Proofs
A.1 Proof of Lemma 1
Proof. Using (3), the distance between $W^t$ and $W^*$ can be upper bounded through the triangle inequality:
$$\|W^t - W^*\| = \|(1-c_t)(W^{t-1} - W^*) + c_t(\tilde{W}^t - W^*)\| \le c_t \|\tilde{W}^t - W^*\| + (1-c_t)\|W^{t-1} - W^*\|. \qquad (8)$$
For any event such that $\tilde{W}^t \to W^*$ and any $\epsilon > 0$, we can find $T_\epsilon \ge 0$ such that $\|\tilde{W}^t - W^*\| \le \epsilon$ for all $t \ge T_\epsilon$. Let $\delta_t := \|W^t - W^*\|$; we see from the above and (8) that $\delta_t \le (1-c_t)\delta_{t-1} + c_t \epsilon$ for all $t \ge T_\epsilon$. By subtracting $\epsilon$ from both sides, we get $(\delta_t - \epsilon) \le (1-c_t)(\delta_{t-1} - \epsilon)$ for all $t \ge T_\epsilon$. Since $\sum c_t = \infty$, we further deduce that
$$\lim_{t\to\infty}(\delta_t - \epsilon) \le \prod_{t=T_\epsilon}^{\infty}(1-c_t)\,(\delta_{T_\epsilon - 1} - \epsilon) \le \prod_{t=T_\epsilon}^{\infty}\exp(-c_t)\,(\delta_{T_\epsilon - 1} - \epsilon) = \exp\Big(-\sum_{t=T_\epsilon}^{\infty} c_t\Big)(\delta_{T_\epsilon - 1} - \epsilon) = 0,$$
where in the first inequality we used the fact that $1 + x \le \exp(x)$ for all real numbers $x$. The result above then implies $\lim_{t\to\infty}\delta_t \le \epsilon$. As $\epsilon$ is arbitrary and $\delta_t \ge 0$ by definition, we conclude that $\lim_{t\to\infty}\delta_t = 0$, which is equivalent to $W^t \to W^*$.

A.2 Proof of Lemma 2
Proof. We observe that
$$\alpha_t^{-1} V^t = \sum_{k=1}^{t} \frac{\eta_k \beta_k}{\alpha_t}\nabla f_{\xi_k}(W^{k-1}) = \frac{\alpha_{t-1}}{\alpha_t}\,\alpha_{t-1}^{-1} V^{t-1} + \frac{\alpha_t - \alpha_{t-1}}{\alpha_t}\nabla f_{\xi_t}(W^{t-1}) = \Big(1 - \frac{\beta_t \eta_t}{\alpha_t}\Big)\alpha_{t-1}^{-1} V^{t-1} + \frac{\beta_t \eta_t}{\alpha_t}\nabla f_{\xi_t}(W^{t-1}).$$
Since $f$ is $L$-Lipschitz-continuously differentiable, we have
$$\big\|E_{\xi_{t+1}\sim\mathcal{D}}[\nabla f_{\xi_{t+1}}(W^t)] - E_{\xi_t\sim\mathcal{D}}[\nabla f_{\xi_t}(W^{t-1})]\big\| = \|\nabla f(W^t) - \nabla f(W^{t-1})\| \le L\|W^t - W^{t-1}\|. \qquad (9)$$
Therefore, (4) and (9) imply that
$$0 \le \frac{\|\nabla f(W^t) - \nabla f(W^{t-1})\|}{\beta_t \eta_t \alpha_t^{-1}} \le \frac{L\|W^t - W^{t-1}\|}{\beta_t \eta_t \alpha_t^{-1}} \overset{a.s.}{\longrightarrow} 0,$$
which together with the sandwich lemma shows that
$$\frac{\|\nabla f(W^t) - \nabla f(W^{t-1})\|}{\beta_t \eta_t \alpha_t^{-1}} \overset{a.s.}{\longrightarrow} 0. \qquad (10)$$
Therefore, the first two conditions of (4), together with (10) and the bounded variance assumption, satisfy the requirements of (Gupal, 1979, Chapter 2.4, Theorem 4.1), so the conclusion of almost sure convergence holds.
For the convergence in $L^2$, we first define $m_t := \alpha_t^{-1} V^t$ and $\tau_t := \beta_t \eta_t \alpha_t^{-1}$ for notational ease. Considering $\|m_{t+1} - \nabla f(W^t)\|^2$, we have from the update rule in Algorithm 1 that
$$\|m_{t+1} - \nabla f(W^t)\|^2 = \|(1-\tau_t)m_t + \tau_t\nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t)\|^2 = \|(1-\tau_t)(m_t - \nabla f(W^t)) + \tau_t(\nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t))\|^2 = (1-\tau_t)^2\big\|(m_t - \nabla f(W^{t-1})) + (\nabla f(W^{t-1}) - \nabla f(W^t))\big\|^2 + \tau_t^2\|\nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t)\|^2 + 2\tau_t(1-\tau_t)\big\langle m_t - \nabla f(W^t),\ \nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t)\big\rangle. \qquad (11)$$
Let $\{\mathcal{F}_t\}_{t\ge 0}$ denote the natural filtration of $\{(m_t, W^t)\}_{t\ge 0}$; namely, $\mathcal{F}_t$ records the information of $W^0$, $\{c_i\}_{i=0}^{t-1}$, $\{\eta_i\}_{i=0}^{t-1}$, and $\{\xi_i\}_{i=1}^{t}$.
By defining $U_t := \|m_t - \nabla f(W^{t-1})\|^2$ and taking expectation over (11) conditional on $\mathcal{F}_t$, we obtain from $E[\nabla f_{\xi_{t+1}}(W^t) \mid \mathcal{F}_t] = \nabla f(W^t)$ that
$$E[U_{t+1} \mid \mathcal{F}_t] = (1-\tau_t)^2\big\|(m_t - \nabla f(W^{t-1})) + (\nabla f(W^{t-1}) - \nabla f(W^t))\big\|^2 + \tau_t^2\, E\big[\|\nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t)\|^2 \mid \mathcal{F}_t\big]. \qquad (12)$$
From the last condition in (4) and the Lipschitz continuity of $\nabla f$, there are random variables $\{\epsilon_t\}$ and $\{u_t\}$ such that $\|u_t\| = 1$, $\epsilon_t \ge 0$, and $\nabla f(W^{t-1}) - \nabla f(W^t) = \tau_t \epsilon_t u_t$ for all $t > 0$, with $\epsilon_t \to 0$ almost surely. We thus obtain
$$\big\|m_t - \nabla f(W^{t-1}) + \nabla f(W^{t-1}) - \nabla f(W^t)\big\|^2 = \|m_t - \nabla f(W^{t-1}) + \tau_t \epsilon_t u_t\|^2 = (1+\tau_t)^2 \Big\| \tfrac{1}{1+\tau_t}(m_t - \nabla f(W^{t-1})) + \tfrac{\tau_t}{1+\tau_t}\epsilon_t u_t \Big\|^2 \le (1+\tau_t)^2\Big(\tfrac{1}{1+\tau_t} U_t + \tfrac{\tau_t}{1+\tau_t}\epsilon_t^2\Big), \qquad (13)$$
where we used Jensen's inequality and the convexity of $\|\cdot\|^2$ in the last inequality. By substituting (13) back into (12), we obtain
$$E[U_{t+1} \mid \mathcal{F}_t] \le (1-\tau_t)^2(1+\tau_t) U_t + (1-\tau_t)^2(1+\tau_t)\tau_t \epsilon_t^2 + \tau_t^2 E\big[\|\nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t)\|^2 \mid \mathcal{F}_t\big] \le (1-\tau_t)(U_t + \tau_t \epsilon_t^2) + \tau_t^2 E\big[\|\nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t)\|^2 \mid \mathcal{F}_t\big] \le (1-\tau_t) U_t + \tau_t \epsilon_t^2 + \tau_t^2 E\big[\|\nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t)\|^2 \mid \mathcal{F}_t\big]. \qquad (14)$$
For the last term in (14), we notice that
$$E\big[\|\nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t)\|^2 \mid \mathcal{F}_t\big] \le 2\Big(E\big[\|\nabla f_{\xi_{t+1}}(W^t)\|^2 \mid \mathcal{F}_t\big] + \|\nabla f(W^t)\|^2\Big) \le 2\big(C + \|\nabla f(W^t)\|^2\big), \qquad (15)$$
where the last inequality is from the bounded variance assumption. Since by assumption $\{W^t\}$ lies in a bounded set $K$, for any point $W^* \in K$, $W^t - W^*$ is bounded, and thus $\|\nabla f(W^t) - \nabla f(W^*)\|$ is also bounded, implying $\|\nabla f(W^t)\|^2 \le C_2$ for some $C_2 \ge 0$. Therefore, (15) further leads to
$$E\big[\|\nabla f_{\xi_{t+1}}(W^t) - \nabla f(W^t)\|^2 \mid \mathcal{F}_t\big] \le C_3 \qquad (16)$$
for some $C_3 \ge 0$. We now take expectation over (14) and apply (16) to obtain
$$E U_{t+1} \le (1-\tau_t) E U_t + \tau_t \epsilon_t^2 + \tau_t^2 C_3 = (1-\tau_t) E U_t + \tau_t\big(\epsilon_t^2 + \tau_t C_3\big). \qquad (17)$$
Note that the third condition of (4) implies $\epsilon_t \to 0$, and together with the second condition, which gives $\tau_t \to 0$, we have $\epsilon_t^2 + \tau_t C_3 \to 0$ as well. Thus for any $\delta > 0$, we can find $T_\delta \ge 0$ such that $\epsilon_t^2 + \tau_t C_3 \le \delta$ for all $t \ge T_\delta$, and (17) further leads to
$$E U_{t+1} - \delta \le (1-\tau_t) E U_t + \tau_t\delta - \delta = (1-\tau_t)(E U_t - \delta), \quad \forall t \ge T_\delta. \qquad (18)$$
This implies that $(E U_t - \delta)$ becomes a decreasing sequence starting from $t \ge T_\delta$, and since $U_t \ge 0$, this sequence is lower bounded by $-\delta$ and hence converges to a certain value. By recursion of (18),
$$E U_t - \delta \le \prod_{i=T_\delta}^{t}(1-\tau_i)\,(E U_{T_\delta} - \delta),$$
and from the well-known inequality $1 + x \le \exp(x)$ for all $x \in \mathbb{R}$, the above leads to
$$E U_t - \delta \le \exp\Big(-\sum_{i=T_\delta}^{t}\tau_i\Big)(E U_{T_\delta} - \delta).$$
By letting $t$ approach infinity and noting that the first condition of (4) gives $\sum_{t=k}^{\infty}\tau_t = \infty$ for any $k \ge 0$, we see that
$$-\delta \le \lim_{t\to\infty} E U_t - \delta \le \exp\Big(-\sum_{i=T_\delta}^{\infty}\tau_i\Big)(E U_{T_\delta} - \delta) = 0. \qquad (19)$$
As $\delta$ is arbitrary, taking $\delta \downarrow 0$ in (19) and noting the nonnegativity of $U_t$, we conclude that $\lim E U_t = 0$, as desired. This proves the last result in Lemma 2.

A.3 Proof of Theorem 1
Proof. Using Lemma 2, we can view $\alpha_t^{-1} V^t$ as $\nabla f(W^t)$ plus some noise that asymptotically decreases to zero with probability one:
$$\alpha_t^{-1} V^t = \nabla f(W^t) + \epsilon^t, \qquad \|\epsilon^t\| \overset{a.s.}{\longrightarrow} 0. \qquad (20)$$
We use (20) to rewrite the optimality condition of (2) as (see also Line 5 of Algorithm 1)
$$-\Big(\nabla f(W^t) + \epsilon^t + \beta_t\alpha_t^{-1}\big(\tilde{W}^t - W^0\big)\Big) \in \partial\psi(\tilde{W}^t). \qquad (21)$$
Now we consider $\partial F(\tilde{W}^t)$. Clearly from (21), we have
$$\nabla f(\tilde{W}^t) - \nabla f(W^t) - \epsilon^t - \beta_t\alpha_t^{-1}\big(\tilde{W}^t - W^0\big) \in \nabla f(\tilde{W}^t) + \partial\psi(\tilde{W}^t) = \partial F(\tilde{W}^t). \qquad (22)$$
Now we consider the said event that $\tilde{W}^t \to W^*$ for a certain $W^*$, and denote this event by $\mathcal{A} \subseteq \Omega$. From Lemma 1, we know that $W^t \to W^*$ as well under $\mathcal{A}$. Let $\mathcal{B} \subseteq \Omega$ be the event that $\epsilon^t \to 0$; since $P(\mathcal{A}) > 0$ and $P(\mathcal{B}) = 1$, where $P$ is the probability measure on $\Omega$, we have $P(\mathcal{A} \cap \mathcal{B}) = P(\mathcal{A})$.
Therefore, conditional on the event $\mathcal{A}$, $\epsilon^t \to 0$ still holds with probability one. Now we consider any realization of $\mathcal{A} \cap \mathcal{B}$. For the left-hand side of (22), as $\tilde{W}^t$ is convergent and $\beta_t\alpha_t^{-1}$ decreases to zero, letting $t$ approach infinity gives
$$\lim_{t\to\infty}\ \epsilon^t + \beta_t\alpha_t^{-1}\big(\tilde{W}^t - W^0\big) = 0 + 0\cdot(W^* - W^0) = 0.$$
By the Lipschitz continuity of $\nabla f$, we have from (3) and (4) that
$$0 \le \|\nabla f(\tilde{W}^t) - \nabla f(W^t)\| \le L\|W^t - \tilde{W}^t\|.$$
As $\{W^t\}$ and $\{\tilde{W}^t\}$ converge to the same point, $\|W^t - \tilde{W}^t\| \to 0$, so $\nabla f(\tilde{W}^t) - \nabla f(W^t)$ also approaches zero. Hence, the limit of the left-hand side of (22) is
$$\lim_{t\to\infty}\ \nabla f(\tilde{W}^t) - \Big(\nabla f(W^t) + \epsilon^t + \beta_t\alpha_t^{-1}\big(\tilde{W}^t - W^0\big)\Big) = 0. \qquad (23)$$
On the other hand, for the right-hand side of (22), the outer semicontinuity of $\partial\psi$ at $W^*$ and the continuity of $\nabla f$ show that
$$\lim_{t\to\infty}\ \nabla f(\tilde{W}^t) + \partial\psi(\tilde{W}^t) \subseteq \nabla f(W^*) + \partial\psi(W^*) = \partial F(W^*). \qquad (24)$$
Substituting (23) and (24) back into (22) then proves that $0 \in \partial F(W^*)$ and thus $W^* \in \mathcal{Z}$.

A.4 Proof of Theorem 2
Proof. Throughout this proof, we work under the event that $\tilde{W}^t \to W^*$. From the argument in Appendix A.3, we can view $\alpha_t^{-1} V^t$ as $\nabla f(W^t)$ plus some noise that asymptotically decreases to zero with probability one, as shown in (20). From Lemma 1, we know that $W^t \to W^*$. From (21), there is $U^t \in \partial\psi(\tilde{W}^t)$ such that
$$U^t = -\alpha_t^{-1} V^t - \alpha_t^{-1}\beta_t\big(\tilde{W}^t - W^0\big). \qquad (25)$$
Moreover, we define
$$\gamma^t := W^t - \tilde{W}^t. \qquad (26)$$
By combining (25)-(26) with (20), we obtain
$$\min_{Y\in\partial F(\tilde{W}^t)}\|Y\| \le \|\nabla f(\tilde{W}^t) + U^t\| = \big\|\nabla f(\tilde{W}^t) - \nabla f(W^t) - \epsilon^t - \alpha_t^{-1}\beta_t(\tilde{W}^t - W^0)\big\| \le \|\nabla f(\tilde{W}^t) - \nabla f(W^t)\| + \|\epsilon^t\| + \alpha_t^{-1}\beta_t\|\tilde{W}^t - W^0\| \le L\|\gamma^t\| + \|\epsilon^t\| + \alpha_t^{-1}\beta_t\big(\|W^* - \tilde{W}^t\| + \|W^0 - W^*\|\big), \qquad (27)$$
where we used the Lipschitz continuity of $\nabla f$ and the triangle inequality in the last inequality. We now bound the terms in (27) separately. From $W^t \to W^*$ and $\tilde{W}^t \to W^*$, it is straightforward that $\|\gamma^t\| \to 0$. The second term decreases to zero almost surely according to (20) and the argument in Appendix A.3. For the last term, since $\alpha_t^{-1}\beta_t \to 0$ and $\|\tilde{W}^t - W^*\| \to 0$, we know that $\alpha_t^{-1}\beta_t\|W^0 - W^*\| \to 0$ and $\alpha_t^{-1}\beta_t\|\tilde{W}^t - W^*\| \to 0$. Therefore, we conclude from the above argument and (27) that
$$\min_{Y\in\partial F(\tilde{W}^t)}\|Y\| \overset{a.s.}{\longrightarrow} 0.$$
As $f$ is smooth with probability one, if $\psi$ is partly smooth at $W^*$ relative to $\mathcal{M}$, then so is $F = f + \psi$ with probability one. Moreover, Lipschitz-continuously differentiable functions are always prox-regular, and the sum of two prox-regular functions is still prox-regular, so $F$ is also prox-regular at $W^*$ with probability one. Following an argument identical to that in Appendix A.3, these probability-one events remain probability one conditional on the event $\tilde{W}^t \to W^*$, as this event has a nonzero probability. As $\tilde{W}^t \to W^*$ and $\nabla f(\tilde{W}^t) + U^t \overset{a.s.}{\to} 0 \in \partial F(W^*)$ (the inclusion is from (5)), we have from the subdifferential continuity of $\psi$ and the smoothness of $f$ that $F(\tilde{W}^t) \overset{a.s.}{\to} F(W^*)$. Since we also have $\tilde{W}^t \to W^*$ and $\min_{Y\in\partial F(\tilde{W}^t)}\|Y\| \overset{a.s.}{\to} 0$, clearly
$$\Big(\tilde{W}^t,\ F(\tilde{W}^t),\ \min_{Y\in\partial F(\tilde{W}^t)}\|Y\|\Big) \overset{a.s.}{\longrightarrow} \big(W^*, F(W^*), 0\big). \qquad (28)$$
Therefore, (28) and (5), together with the assumptions on $\psi$ at $W^*$, imply that with probability one, all conditions of Lemma 1 of Lee (2020) are satisfied, so from it, (6) holds almost surely, conditional on the event $\tilde{W}^t \to W^*$.
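Before turning to the application-specific discussions in Appendix B, we note that the identification property proven above can also be monitored empirically: one simply records the zero-group pattern of $\tilde{W}^t$ and checks when it stops changing, as done implicitly in Figs. 1 and 2. The following is a minimal sketch of such a monitor; the function names and the way groups are extracted are our own illustrative choices.

```python
def group_support(model, group_fn):
    # Zero/nonzero pattern of the groups of the current iterate; group_fn
    # returns the list of parameter groups (e.g., kernel-wise) of the model.
    return tuple(bool(g.abs().sum() > 0) for g in group_fn(model))

def last_change_epoch(supports):
    # Given the support recorded at every epoch, return the last epoch at
    # which it changed; a small value relative to the total number of epochs
    # is an empirical proxy for the finite-iteration identification above.
    last_change = 0
    for t in range(1, len(supports)):
        if supports[t] != supports[t - 1]:
            last_change = t
    return last_change
```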
B Additional Discussions on Applications
We now discuss in more technical detail the applications in Section 4, especially regarding how the regularizers satisfy the properties required by our theory.

B.1 Structured Sparsity
We start our discussion with the simple $\ell_1$ norm as a warm-up for the group-LASSO norm. It is clear that $\|W\|_1$ is a convex function that is finite everywhere, so it is prox-regular, subdifferentially continuous, and regular everywhere; hence we only need to discuss the remaining parts of Definition 1. Consider a problem with dimension $n > 0$. Note that $\|x\|_1 = \sum_{i=1}^{n}|x_i|$, and the absolute value is smooth everywhere except at the origin. Therefore, $\|x\|_1$ is locally smooth if $x_i \ne 0$ for all $i$. For any point $x^*$, when there is an index set $I$ such that $x^*_i = 0$ for all $i \in I$ and $x^*_i \ne 0$ for $i \notin I$, the part of the norm corresponding to $I^C$ (the complement of $I$), $\sum_{i\in I^C}|x^*_i|$, is locally smooth around $x^*$. Without loss of generality, assume that the nonzero coordinates come first, namely $I^C = \{1, 2, \dots, k\}$ for some $k \ge 0$; then the subdifferential of $\|x\|_1$ at $x^*$ is the set
$$\{\operatorname{sgn}(x^*_1)\} \times \cdots \times \{\operatorname{sgn}(x^*_k)\} \times [-1, 1]^{n-k}, \qquad (29)$$
and clearly if we move from $x^*$ along any direction $y := (y_1, \dots, y_k, 0, \dots, 0)$ with a small step, the function value changes smoothly, as the function is locally linear, satisfying the first condition of Definition 1. Along the same direction $y$ with a small enough step, the subdifferential remains the same set, so the requirement of continuity of the subdifferential also holds. We can also observe from the above argument that the manifold should be $\mathcal{M}_{x^*} = \{x \mid x_i = 0,\ \forall i \in I\}$, which is a subspace of $\mathbb{R}^n$ whose normal space at $x^*$ is $N := \{y \mid y_i = 0,\ \forall i \in I^C\}$, and the affine span of (29) is exactly the translate of $N$ by $(\operatorname{sgn}(x^*_1), \dots, \operatorname{sgn}(x^*_k), 0, \dots, 0)$. Moreover, these manifolds are indeed low-dimensional, and for iterates approaching $x^*$, staying in this active manifold means that the (unstructured) sparsity of the iterates is the same as that of the limit point $x^*$.
We also provide a graphical illustration of $\|x\|_1$ with $n = 2$ in Fig. 3. We can observe that for any $x$ with $x_1 \ne 0$ and $x_2 \ne 0$, the function is smooth locally around $x$, meaning that $\|x\|_1$ is partly smooth relative to the whole space at $x$ (so actually smooth locally around $x$). For $x$ with $x_1 = 0$, the function value corresponds to the sharp valley in the graph, and we can see that the function is smooth along the valley; this valley corresponds to the one-dimensional manifold $\{x \mid x_1 = 0\}$ for partial smoothness.
Next, we use the same graph to illustrate the importance of manifold identification. Suppose the red point $x^* = (0, 1.5)$ is the limit point of the iterates of a certain algorithm, and the yellow points and black points are two sequences that both converge to $x^*$. If the iterates of the algorithm are the black points, then all iterates except the limit point itself are non-sparse, and thus the final output of the algorithm is also non-sparse unless we can reach exactly the limit point within finitely many iterations (which is usually impossible for iterative methods). On the other hand, if the iterates are the yellow points, this is the case in which the manifold is identified, because all points sit in the valley and enjoy the same sparsity pattern as the limit point $x^*$. This is why we care about manifold identification when solving regularized optimization problems.
From this example, we can also see an explanation for why our algorithm, with the property of manifold identification, performs better than other methods without such a property. Consider any point $x^*$ in a Euclidean space together with an index set $I$ such that $x^*_I = 0$ and $|I| > 0$.
This means that $x^*$ has at least one coordinate equal to zero, namely $x^*$ contains sparsity. Now let $\epsilon_0 := \min_{i\in I^C}|x^*_i|$; from the definition of $I$, $\epsilon_0 > 0$. For any sequence $\{x^t\}$ converging to $x^*$ and any $\epsilon \in (0, \epsilon_0)$, we can find $T_\epsilon \ge 0$ such that $\|x^t - x^*\| \le \epsilon$ for all $t \ge T_\epsilon$. Therefore, for any $i \notin I$, we must have $x^t_i \ne 0$ for all $t \ge T_\epsilon$; otherwise, $\|x^t - x^*\| \ge \epsilon_0$, but $\epsilon_0 > \epsilon \ge \|x^t - x^*\|$, leading to a contradiction. On the other hand, for any $i \in I$, we can have $x^t_i \ne 0$ for all $t$ without violating the convergence. That being said, for any sequence converging to $x^*$, eventually the iterates cannot be sparser than $x^*$, so the sparsity level of $x^*$, or of its active manifold, is the local upper bound for the sparsity level of points converging to $x^*$. Therefore, if the iterates of two algorithms converge to the same limit point, the one with a proven manifold-identification ability will clearly produce a higher sparsity level. Similar to our example here, in applications other than sparsity, iterates converging to a limit point dwell on super-manifolds of the active manifold, and the active manifold is the minimal one that locally describes points with the same structure as the limit point; thus identifying this manifold is equivalent to finding the locally most ideal structure for the application.
Now back to the sparsity case. One possible concern is the case in which the limit point is $(0, 0)$ in the two-dimensional example. In this case, the manifold is the 0-dimensional subspace $\{0\}$. If this is the case and manifold identification can be ensured, it means that the limit point itself can be found within finitely many iterations. This case is known as a weak sharp minimum (Burke & Ferris, 1993) in nonlinear optimization, and its associated finite-termination property is also well studied. From this example, we also see that $\|x\|_1$ is partly smooth at any point $x^*$, but the manifold differs with $x^*$. This is a specific benign example; in other cases, partial smoothness might hold only locally at some points of interest instead of everywhere.
Next, we further extend the argument above to the case of (7). This regularizer can be viewed as the $\ell_1$ norm applied group-wise, and we can easily obtain similar results. Again, since the group-LASSO norm is also convex and finite everywhere, prox-regularity, regularity, and subdifferential continuity are not issues at all. For the other properties, we consider one group first, in which case the group-LASSO norm reduces to the $\ell_2$ norm. Clearly, $\|x\|_2$ is locally smooth if $x \ne 0$, with gradient $x/\|x\|_2$, but it is nonsmooth at $x = 0$, where the subdifferential is the unit ball. This is very similar to the absolute value, whose subdifferential at 0 is the interval $[-1, 1]$. Thus, we can directly apply arguments similar to those above and conclude that for any $W^*$, (7) is partly smooth at $W^*$ with respect to the manifold $\mathcal{M}_{W^*} = \{W \mid W_{I_g} = 0,\ \forall g: W^*_{I_g} = 0\}$, which is again a lower-dimensional subspace. Therefore, the manifold defining the partial smoothness of the group-LASSO norm exactly corresponds to its structured sparsity pattern.
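Before moving on to binary networks, a tiny numerical rendering of the two sequences discussed above (our own illustration, not from the original text) makes the identification point explicit:

```python
import numpy as np

# Two sequences converging to x* = (0, 1.5), mirroring Fig. 3:
# z_t stays on the active manifold {x | x_1 = 0} and shares the sparsity of x*,
# while y_t converges but is never sparse before reaching the limit.
x_star = np.array([0.0, 1.5])
for t in range(1, 6):
    y_t = x_star + np.array([1.0 / t, 1.0 / t])   # off-manifold iterates
    z_t = x_star + np.array([0.0, 1.0 / t])       # on-manifold iterates
    print(t, int((y_t != 0).sum()), int((z_t != 0).sum()))  # always 2 vs. 1
```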
B.2 Binary Neural Networks
We continue to consider the binary neural network problem. For easier description, for the Euclidean space $E$ we consider, we use a vectorized representation for $W, \alpha \in E$ whose elements are enumerated as $w_1, \dots, w_n$ and $\alpha_1, \dots, \alpha_n$. The corresponding optimization problem can therefore be written as
$$\min_{W, \alpha}\ E_{\xi\sim\mathcal{D}}[f_\xi(W)] + \lambda \sum_{i=1}^{n}\Big(\alpha_i (w_i + 1)^2 + (1-\alpha_i)(w_i - 1)^2 + \delta_{[0,1]}(\alpha_i)\Big), \qquad (30)$$
where, given any set $C$, $\delta_C$ is the indicator function of $C$, defined as $\delta_C(x) = 0$ if $x \in C$ and $\delta_C(x) = \infty$ otherwise. We see that except for the indicator function part, the objective is smooth, so the real partly smooth term that we treat as the regularizer is
$$\Phi(\alpha) := \sum_{i=1}^{n}\delta_{[0,1]}(\alpha_i).$$
We note that for $\alpha_i \in (0, 1)$, the value of $\delta_{[0,1]}(\alpha_i)$ remains a constant zero in a neighborhood of $\alpha_i$, and for $\alpha_i \notin [0, 1]$, the indicator function is also constantly infinite within a neighborhood. Thus, nonsmoothness happens only at $\alpha_i \in \{0, 1\}$, and similar to the discussion in the previous subsection, $\Phi$ is partly smooth along directions that keep those $\alpha_i$ at the boundary (namely, being either 0 or 1) unchanged. The identified manifold therefore corresponds to the entries of $\alpha$ that are fixed at 0 or 1, and this can serve as the indicator for the desired binary pattern in this task.

C Experiment Setting Details
For the weights $w_g$ of each group in (7), for all experiments in Section 5, we follow Deleu & Bengio (2021) to set $w_g = \sqrt{|I_g|}$. All ProxSSI parameter settings, excluding the regularization weight and the learning rate schedule, follow the default values in their package. Tables 3 to 13 provide detailed settings for Section 5.2. For the modified VGG19 model, we follow Deleu & Bengio (2021) to eliminate all fully-connected layers except the output layer, and add one batch-norm layer (Ioffe & Szegedy, 2015) after each convolutional layer to simulate modern CNNs like those proposed in He et al. (2016); Huang et al. (2017). For ResNet50 in the structured sparsity experiment in Section 5.2, our version of ResNet50 is the one constructed by the publicly available script at https://github.com/weiaicunzai/pytorch-cifar100. In the unstructured sparsity experiment presented in Section 5.3, for better comparison with existing works in the pruning literature, we adopt the version of ResNet50 used by Sundar & Dwaraknath (2021) (https://github.com/varun19299/rigl-reproducibility). Table 14 provides detailed settings for Section 5.3. For RigL, we use the PyTorch implementation of Sundar & Dwaraknath (2021).

D More Results from the Experiments
In this section, we provide more details of the results of the experiments conducted in the main text. In particular, in Fig. 4, we present the change of validation accuracies and group sparsity levels with epochs for the group sparsity tasks in Section 5.2. We then present in Fig. 5 validation accuracies and unstructured sparsity levels versus epochs for the task in Section 5.3. We note that although it takes more epochs for RMDA to fully stabilize in terms of manifold identification, the sparsity level usually changes only in a very limited range once (sometimes even before) the validation accuracy becomes steady, meaning that we do not need to run the algorithm for an unreasonably long time to obtain satisfactory results.

E Other Regularizers for Possibly Better Group Sparsity and Generalization
A downside of (7) is that it pushes all groups toward zero and thus introduces bias in the final model. As a remedy, the minimax concave penalty (MCP, Zhang, 2010) was proposed to penalize only the groups whose norm is smaller than a user-specified threshold.
More precisely, given hyperparameters $\lambda \ge 0$ and $\omega \ge 1$, the one-dimensional MCP is defined by
$$\operatorname{MCP}(w; \lambda, \omega) := \begin{cases} \lambda|w| - \dfrac{w^2}{2\omega} & \text{if } |w| < \omega\lambda,\\[4pt] \dfrac{\omega\lambda^2}{2} & \text{if } |w| \ge \omega\lambda.\end{cases}$$
One can then apply the above formulation to the norm of a vector to achieve the effect of inducing group sparsity. In our case, given an index set $I_g$ that represents a group, the MCP for this group is computed as (Breheny & Huang, 2009)
$$\operatorname{MCP}\big(W_{I_g}; \lambda_g, \omega_g\big) := \begin{cases} \lambda_g\big\|W_{I_g}\big\| - \dfrac{\|W_{I_g}\|^2}{2\omega_g} & \text{if } \big\|W_{I_g}\big\| < \omega_g\lambda_g,\\[4pt] \dfrac{\omega_g\lambda_g^2}{2} & \text{if } \big\|W_{I_g}\big\| \ge \omega_g\lambda_g.\end{cases}$$
We then consider
$$\psi(W) = \sum_{g=1}^{|\mathcal{G}|}\operatorname{MCP}\big(W_{I_g}; \lambda_g, \omega_g\big). \qquad (31)$$
It is shown in Deleu & Bengio (2021) that group-MCP regularization may simultaneously provide higher group sparsity and better validation accuracy than the group-LASSO norm in vision and language tasks. Another possibility to enhance sparsity is to add an $\ell_1$-norm or entry-wise MCP regularization on top of the group-level regularizer. The major drawback of these approaches is the requirement of additional hyperparameters, and we prefer simpler approaches over those with more hyperparameters, as hyperparameter tuning in the latter can be troublesome for users with limited computational resources, and using a simpler setting also helps us to focus on the comparison of the algorithms themselves. The experiment in this subsection is therefore only for illustrating that these more complicated regularizers can be combined with RMDA if the user wishes, and that such regularizers might lead to better results. Therefore, we train a version of LeNet5, which is slightly simpler than the one used in the previous experiments, on the MNIST dataset with such regularizers using RMDA, and display the respective performance of the various regularization schemes in Fig. 6. For the weights $w_g$ of each group in (7), in this experiment we consider the following setting. Let $L_i$ be the collection of all index sets that belong to the $i$-th layer of the network, and denote by $N_{L_i} := \sum_{I_j\in L_i}|I_j|$ the number of parameters in this layer; for all $i$, we set $w_g = \sqrt{N_{L_i}}$ for all $g$ such that $I_g \in L_i$. Given two constants $\lambda > 0$ and $\omega > 1$, the values of $\lambda_g$ and $\omega_g$ in (31) are then assigned as $\lambda_g = \lambda w_g$ and $\omega_g = \omega w_g$. In Fig. 6, group LASSO is abbreviated as GLASSO; $\ell_1$-norm plus a group-LASSO norm, L1GLASSO; group MCP, GMCP; element-wise MCP plus group MCP, L1GMCP. Our results exemplify that different regularization schemes might have different benefits on one of the criteria with proper hyperparameter tuning.
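To make the regularizer used in this appendix concrete, the following is a minimal sketch of evaluating the group-MCP penalty (31). The function names are ours, and the default weights $w_g$ are a placeholder (the experiment above uses the layer-wise weights $\sqrt{N_{L_i}}$ instead).

```python
import numpy as np

def mcp(r, lam, omega):
    # One-dimensional MCP evaluated at r >= 0 (below, r is a group norm).
    if r < omega * lam:
        return lam * r - r ** 2 / (2 * omega)
    return omega * lam ** 2 / 2

def group_mcp_penalty(groups, lam, omega, weights=None):
    # psi(W) in (31): MCP applied to each group norm, with per-group
    # parameters lambda_g = lam * w_g and omega_g = omega * w_g as in the text.
    if weights is None:
        weights = [np.sqrt(np.size(g)) for g in groups]  # placeholder w_g
    return sum(mcp(np.linalg.norm(g), lam * w, omega * w)
               for g, w in zip(groups, weights))
```

Note how groups whose norm exceeds $\omega_g\lambda_g$ receive a constant penalty, which is the mechanism that removes the bias of (7) on large groups.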
The detailed numbers are reported in Table 15, and the experiment settings can be found in Tables 16 and 17.

Table 15: Results of training LeNet5 on MNIST using RMDA with different regularizers. We report the mean and standard deviation of three independent runs.

Regularizer    Validation accuracy    Group sparsity
GLASSO         99.11 ± 0.06%          45.33 ± 0.99%
L1GLASSO       99.02 ± 0.01%          58.92 ± 1.30%
GMCP           99.25 ± 0.08%          32.81 ± 0.96%
L1GMCP         99.21 ± 0.03%          32.91 ± 0.35%

Table 16: Details of the modified simpler LeNet5 for the experiment in Appendix E (see https://github.com/zihsyuan1214/rmda/blob/master/Experiments/Models/lenet5_small.py).

Parameter                                        Value
Number of layers                                 5
Number of convolutional layers                   3
Number of fully-connected layers                 2
Size of convolutional kernels                    5 x 5
Number of output filters of layers 1, 2          6, 16
Number of output neurons of layers 3, 4, 5       120, 84, 10
Kernel size, stride, padding of max pooling      2 x 2, none, valid
Operations after convolutional layers            max pooling
Activation function for convolutional / output layers    ReLU / softmax
As mentioned in Section 1, an important reason for studying manifold identification is to get the lowest-dimensional manifold representing the structure of the limit point, which often corresponds to a preferred property for the application, like the highest sparsity, lowest rank, or lowest VC dimension locally. See an illustrated example in Appendix B.1. 4 Applications in Deep Learning We discuss two popular schemes of training structured deep learning models achieved through regularization to demonstrate the applications of RMDA. More technical details for applying our theory to the regularizers in these applications are in Appendix B. 4.1 Structured Sparsity As modern deep NN models are often gigantic, it is sometimes desirable to trim the model to a smaller one when only limited resources are available. In this case, zeroing out redundant parameters during training at the group level is shown to be useful (Zhou et al., 2016), and one can utilize regularizers promoting structured sparsity for this purpose. The most famous regularizer of this kind is the group-LASSO norm (Yuan & Lin, 2006; Friedman et al., 2010). Given λ ≥ 0 and a collection G of index sets {Ig} of the variable W , this convex regularizer is defined as ψ(W ) := λ ∑|G| g=1 wg ∥∥WIg∥∥, (7) with wg > 0 being the pre-specified weight for Ig. For any W ∗, let GW∗ ⊆ G be the index set such that W ∗Ij = 0 for all j ∈ GW∗ , the group-LASSO norm is partly smooth around W ∗ relative to the manifold MW∗ := {W |WIi = 0,∀i ∈ GW∗}, so our theory applies. In order to promote structured sparsity, we need to carefully design the grouping. Fortunately, in NNs, the parameters can be grouped naturally (Wen et al., 2016). For any fully-connected layer, let W ∈ Rout×in be the matrix representation of the associated parameters, where out is the number of output neurons and in is that of input neurons, we can consider the column-wise groups, defined as W:,j for all j, and the row-wise groups of the form Wi,:. For a convolutional layer with W ∈ Rfilter×channel×height×width being the tensor form of the corresponding parameters, we can consider channel-wise, filter-wise, and kernel-wise groups, defined respectively as W:,j,:,:, Wi,:,:,: and Wi,j,:,:. 4.2 Binary/Discrete Neural Networks Making the parameters of an NN binary integers is another way to obtain a more compact model during training and deployment (Hubara et al., 2016), but discrete optimization is hard to scale-up. Using a vector representation w ∈ Rm of the variables, Hou et al. (2017) thus proposed to use the indicator function of { w | wIi = αibIi , αi > 0, bIi ∈ {±1}|Ii| } to induce the entries of w to be binary without resorting to discrete optimization tools, where each Ii enumerates all parameters in the i-th layer. Yang et al. (2019) later proposed to use minα∈[0,1]m ∑m i=1 ( αi(wi + 1)2 + (1− αi)(wi − 1)2 ) as the regularizer and to include α as a variable to train. At any α∗ with I0 := {i | α∗i = 0} and I1 := {i | α∗i = 1}, the objective is partly smooth relative to the manifold {(W,α) | αI0 = 0, αI1 = 1}. Extension to discrete NNs beyond the binary ones is possible, and Bai et al. (2019) have proposed regularizers with closed-form proximal operators for it. 5 Experiments We use the structured sparsity application in Section 4.1 to empirically exemplify the ability of RMDA to find desired structures in the trained NNs. RMDA and the following methods for structured sparsity in deep learning are compared using PyTorch (Paszke et al., 2019). 
• ProxSGD (Yang et al., 2019): A simple proxMSGD algorithm. To obtain group sparsity, we skip the interpolating step in Yang et al. (2019). • ProxSSI (Deleu & Bengio, 2021): This is a special case of the adaptive proximal SGD framework of Yun et al. (2021) that uses the Newton-Raphson algorithm to approximately solve the subproblem. We directly use the package released by the authors. We exclude the algorithm of Wen et al. (2016) because their method is shown to be worse than ProxSSI by Deleu & Bengio (2021). To compare these algorithms, we examine both the validation accuracy and the group sparsity level of their trained models. We compute the group sparsity as the percentage of groups whose elements are all zero, so the reported group sparsity is zero when there is no group with a zero norm, and is one when the whole model is zero. For all methods above, we use (7) with column-wise and channel-wise groupings in the regularization for training, but adopt the kernel-wise grouping in their group sparsity evaluation. Throughout the experiments, we always use multi-step learning rate scheduling that decays the learning rate by a constant factor every time the epoch count reaches a pre-specified threshold. For all methods, we conduct grid searches to find the best hyperparameters. All results shown in tables in Sections 5.1 and 5.2 are the mean and standard deviation of three independent runs with the same hyperparameters, while figures use one representative run for better visualization. In convex optimization, a popular way to improve the practical convergence behavior for momentum-based methods is restarting that periodically reset the momentum to zero (O’donoghue & Candes, 2015). Following this idea, we introduce a restart heuristic to RMDA. At each round, we use the output of Algorithm 1 from the previous round as the new input to the same algorithm, and continue using the scheduling η and c without resetting them. For ψ ≡ 0, Jelassi & Defazio (2020) suggested to increase ct proportional to the decrease of ηt until reaching ct = 1. We adopt the same setting for ct and ηt and restart RMDA whenever ηt changes. As shown in Section 3 that W̃ t finds the active manifold, increasing ct to 1 also accords with our interest in identifying the stationary structure. 5.1 Correctness of Identified Structure Using Synthetic Data Our first step is to numerically verify that RMDA can indeed identify the stationary structure desired. To exactly find a stationary point and its structure a priori, we consider synthetic problems. We first decide a ground truth model W that is structured sparse, generate random data points that can be well separated by W , and then decide their labels using W . The generated data are then taken as our training data. We consider a linear logistic regression model and a small NN that has one fully-connected layer and one convolutional layer. To ensure convergence to the ground truth, for logistic regression we generate more data points than the problem dimension to ensure the problem is strongly convex so that there is only one stationary/optimal point, and for the small NN, we initialize all algorithms close enough to the ground truth. We report in Fig. 1 training error rates (as an indicator for the proximity to the ground truth) and percentages of the optimal group sparsity pattern of the ground truth identified. 
Clearly, although all methods converge to the ground truth, only RMDA identifies the correct structure of it, and other methods without guarantees for manifold identification fail.

5.2 Neural Networks with Real Data

We turn to real-world data used in modern computer vision problems. We consider two rather simple models and six more complicated modern CNN cases. The two simpler models are linear logistic regression with the MNIST dataset (LeCun et al., 1998), and training a small NN with seven fully-connected layers on the FashionMNIST dataset (Xiao et al., 2017). The six more complicated cases are:
1. A version of LeNet5 with the MNIST dataset,
2. The same version of LeNet5 with the FashionMNIST dataset,
3. A modified VGG19 (Simonyan & Zisserman, 2015) with the CIFAR10 dataset (Krizhevsky, 2009),
4. The same modified VGG19 with the CIFAR100 dataset (Krizhevsky, 2009),
5. ResNet50 (He et al., 2016) with the CIFAR10 dataset, and
6. ResNet50 with the CIFAR100 dataset.
For these six more complicated tasks, we include a dense baseline of MSGD with no sparsity-inducing regularizer in our comparison. For all training algorithms on VGG19 and ResNet50, we follow the standard practice in modern vision tasks of applying data augmentation through random cropping and horizontal flipping, so that the training problem is no longer a finite-sum one. From Fig. 2, we see that, similar to the previous experiment, the group sparsity level of RMDA is stable in the last epochs, while that of ProxSGD and ProxSSI oscillates below. This suggests that RMDA is the only method that, as proven in Section 3, identifies the structured sparsity at its limit point, while other methods with no variance reduction fail. Moreover, Table 1 shows that manifold identification of RMDA is achieved with no sacrifice of validation accuracy, so RMDA beats ProxSGD and ProxSSI in both criteria, and its accuracy is close to that of the dense baseline of MSGD. Moreover, for VGG19 and ResNet50, RMDA succeeds in finding the optimal structured sparsity pattern despite the presence of data augmentation, showing that RMDA can indeed overcome the difficulty of the infinite-sum setting of modern deep learning tasks. We also report that in the ResNet50/CIFAR100 task, on our NVIDIA RTX 8000 GPU, MSGD, ProxSGD, and RMDA have similar per-epoch costs of 68, 77, and 91 seconds respectively, while ProxSSI needs 674 seconds per epoch. RMDA is thus also more suitable for large-scale structured deep learning in terms of practical efficiency.

5.3 Comparison with Pruning

We compare RMDA with a state-of-the-art pruning method, RigL (Evci et al., 2020). As pruning focuses on unstructured sparsity, we use RMDA with ψ(W) = λ‖W‖_1 for a fair comparison, and tune λ to achieve a pre-specified sparsity level. We run RigL for 1000 epochs, as its performance at the default 500 epochs was unstable, and let RMDA use the same number of epochs. Results at 98% sparsity in Table 2 show that RMDA consistently outdoes RigL, indicating that regularized training could be a promising alternative to pruning.

6 Conclusions

In this work, we proposed and analyzed a new algorithm, RMDA, for efficiently training structured neural networks with state-of-the-art performance. Even in the presence of data augmentation, RMDA can still achieve variance reduction and provably identify the desired structure at a stationary point using the tools of manifold identification.
Experiments show that existing algorithms for the same purpose fail to find a stable stationary structure, while RMDA achieves so with no accuracy drop nor additional time cost. Acknowledgements This work was supported in part by MOST of R.O.C. grant 109-2222-E-001-003-MY3, and the AWS Cloud Credits for Research program of Amazon Inc. Appendices Table of Contents A Proofs 13 A.1 Proof of Lemma 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 A.2 Proof of Lemma 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 A.3 Proof of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 A.4 Proof of Theorem 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 B Additional Discussions on Applications 18 B.1 Structured Sparsity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 B.2 Binary Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 C Experiment Setting Details 20 D More Results from the Experiments 20 E Other Regularizers for Possibly Better Group Sparsity and Generalization 21 A Proofs A.1 Proof of Lemma 1 Proof. Using (3), the distance between W t and W ∗ can be upper bounded through the triangle inequality:∥∥W t −W ∗∥∥ = ∥∥(1− ct) (W t−1 −W ∗)+ ct (W̃ t −W ∗)∥∥ ≤ ct ∥∥W̃ t −W ∗∥∥+ (1− ct)∥∥W t−1 −W ∗∥∥. (8) For any event such that W̃ t →W ∗, for any > 0, we can find T ≥ 0 such that∥∥W̃ t −W ∗∥∥ ≤ , ∀t ≥ T . Let δt := ‖W t −W ∗‖, we see from the above inequality and (8) that δt ≤ (1− ct) δt−1 + ct , ∀t ≥ T . By deducting from both sides, we get that (δt − ) ≤ (1− ct) (δt−1 − ) , ∀t ≥ T . Since ∑ ct =∞, we further deduce that lim t→∞ (δt − ) ≤ ∞∏ t=T (1− ct) (δT −1 − ) ≤ ∞∏ t=T exp (−ct) (δT −1 − ) = exp ( − ∞∑ t=T ct ) (δT −1 − ) = 0, where in the first inequality we used the fact that exp(x) ≥ 1 +x for all real number x. The result above then implies that lim t→∞ δt ≤ . As is arbitrary and δt ≥ 0 from the definition, we conclude that limt→∞ δt = 0, which is equivalent to that W t →W ∗. A.2 Proof of Lemma 2 Proof. We observe that α−1t V t = t∑ k=1 ηkβk αt ∇fξk ( W k−1 ) = αt−1 αt α−1t−1V t−1 + αt − αt−1 αt ∇fξt ( W t−1 ) = ( 1− βtηt αt ) α−1t−1V t−1 + βtηt αt ∇fξt ( W t−1 ) . From that f is L-Lipschitz-continuously differentiable, we have that∥∥Eξt+1∼D [∇fξt+1 (W t)]− Eξt∼D [∇fξt (W t−1)]∥∥ = ∥∥∇f (W t)− f (W t−1)∥∥ ≤ L ∥∥W t −W t−1∥∥. (9) Therefore, (4) and (9) imply that 0 ≤ ∥∥Eξt+1∼D [∇fξt+1 (W t)]− Eξt∼D [∇fξt (W t−1)]∥∥ βtηtα −1 t ≤ L ∥∥W t −W t−1∥∥ βtηtα −1 t a.s.−−→ 0, which together with the sandwich lemma shows that∥∥Eξt+1∼D [∇fξt+1 (W t)]− Eξt∼D [∇fξt (W t−1)]∥∥ βtηtα −1 t a.s.−−→ 0. (10) Therefore, the first two conditions of (4) together with (10) and the bounded variance assumption satisfy the requirements of (Gupal, 1979, Chapter 2.4, Theorem 4.1), so the conclusion of almost sure convergence hold. For the convergence in L2 part, we first define mt := α−1t Vt and τt := βtηtα−1t for notational ease. Consider ∥∥mt+1 −∇F (W t)∥∥2, we have from the update rule in Algorithm 1 that∥∥mt+1 −∇F (W t)∥∥2 = ∥∥(1− τt)mt + τt∇fξt+1(W t)−∇F (W t)∥∥2 = ∥∥(1− τt) (mt −∇F (W t))+ τt (∇fξt+1(W t)−∇F (W t))∥∥2 = (1− τt)2 ∥∥mt −∇F (W t)∥∥2 + τ2t ∥∥∇fξt+1(W t)−∇F (W t)∥∥2 + 2τt(1− τt)〈mt −∇F (W t), ∇fξt+1(W t)−∇F (W t)〉 = (1− τt)2 ∥∥(mt −∇F (W t−1))+ (∇F (W t−1)−∇F (W t))∥∥2 (11) + τ2t ∥∥∇fξt+1(W t)−∇F (W t)∥∥2 + 2τt(1− τt)〈mt −∇F (W t), ∇fξt+1(W t)−∇F (W t)〉. Let {Ft}t≥0 denote the natural filtration of {(mt,W t)}t≥0. Namely, Ft records the information of W 0, {ci}t−1i=0, {ηi}t−1i=0, and {ξi}ti=1. 
By defining Ut := ∥∥mt −∇F (W t−1)∥∥2 and taking expectation over (11) conditional on Ft, we obtain from E [ ∇fξt+1(W t) | Ft ] = ∇F (W t) that E [Ut+1 | Ft] = (1− τt)2 ∥∥(mt −∇F (W t−1))+ (∇F (W t−1)−∇F (W t))∥∥2 + τ2t E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] . (12) From the last condition in (4) and the Lipschitz continuity of∇F , there are random variables { t} and {ut} such that ‖ut‖ = 1, t ≥ 0, and ∇F (W t−1)−∇F (W t) = τt tut for all t > 0, with t ↓ 0 almost surely. We thus obtain that∥∥mt −∇F (W t−1) +∇F (W t−1)−∇F (W t)∥∥2 = ∥∥mt −∇F (W t−1) + τt tut∥∥2 = (1 + τt)2 ∥∥∥∥ 11 + τt (mt −∇F (W t−1))+ τt1 + τt tut ∥∥∥∥2 ≤ (1 + τt)2 ( 1 1 + τt Ut + τt 1 + τt t 2 ) , (13) where we used Jensen’s inequality and the convexity of ‖·‖2 in the last inequality. By substituting (13) back into (12), we obtain E [Ut+1|Ft] ≤ (1− τt)2(1 + τt)Ut + (1− τt)2(1 + τt)τt t2 + τt2E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] ≤ (1− τt)(Ut + τt t2) + τt2E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] ≤ (1− τt)Ut + τt t2 + τt2E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] . (14) For the last term in (14), we notice that E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] ≤ 2(E [∥∥∇fξt(W t)∥∥2]+ ∥∥∇F (W t)∥∥2) ≤ 2 ( C + ∥∥∇F (W t)∥∥2) , (15) where the last inequality is from the bounded variance assumption. Since by assumption the {W t} lies in a bounded set K, we have that for any point W ∗ ∈ K, W t −W ∗ is upper bounded, and thus ‖∇F (W t)−∇F (W ∗)‖ is also bounded, implying that ‖∇F (W t)‖2 ≤ C2 for some C2 ≥ 0. Therefore, (15) further leads to E [∥∥∇fξt(W t)−∇F (W t)∥∥2 | Ft] ≤ C3 (16) for some C3 ≥ 0. Now we further take expectation on (14) and apply (16) to obtain EUt+1 ≤ (1− τt)EUt + τt t2 + τt2C3 = (1− τt)EUt + τt ( 2t + τtC3 ) . (17) Note that the third implies t ↓ 0, so this together with the second condition that τt ↓ 0 means 2t+τtC3 ↓ 0 as well, and thus for any δ > 0, we can find Tδ ≥ 0 such that 2t+τtC3 ≤ δ for all t ≥ Tδ. Thus, (17) further leads to EUt+1 − δ ≤ (1− τt)EUt + τtδ − δ = (1− τt) (EUt − δ) ,∀t ≥ Tδ. (18) This implies that (EUt − δ) becomes a decreasing sequence starting from t ≥ Tδ, and since Ut ≥ 0, this sequence is lower bounded by −δ, and hence it converges to a certain value. By recursion of (18), we have that EUt − δ ≤ t∏ i=Tδ (1− τi) (EUTδ − δ) , and from the well-known inequality (1 + x) ≤ expx for all x ∈ R, the above result leads to EUt − δ ≤ exp ( − ∑ i = Tδtτi ) (EUTδ − δ) . By letting t approach infinity and noting that the first condition of (4) indicates ∞∑ t=k τt =∞ for any k ≥ 0, we see that −δ ≤ lim t→∞ EUt − δ ≤ exp ( − ∞∑ i=Tδ τi ) (EUTδ − δ) = 0. (19) As δ is arbitrary, by taking δ ↓ 0 in (19) and noting the nonnegativity of Ut, we conclude that limEUt = 0, as desired. This proves the last result in Lemma 2. A.3 Proof of Theorem 1 Proof. Using Lemma 2, we can view α−1t V t as ∇f(W t) plus some noise that asymptotically decreases to zero with probability one: α−1t Vt = ∇f(W t) + t, ‖ t‖ a.s.−−→ 0. (20) We use (20) to rewrite the optimality condition of (2) as (also see Line 5 of Algorithm 1) − ( ∇f ( W t ) + t + βtα−1t ( W̃ t −W 0 )) ∈ ∂ψ ( W̃ t ) . (21) Now we consider ∂F (W̃ t). Clearly from (21), we have that ∇f ( W̃ t ) −∇f ( W t ) − t − βtα−1t ( W̃ t −W 0 ) ∈ ∂∇f ( W̃ t ) + ψ ( W̃ t ) = ∂F ( W̃ t ) . (22) Now we consider the said event that W̃ t → W ∗ for a certain W ∗, and let us define this event as A ⊆ Ω. From Lemma 1, we know that W t → W ∗ as well under A. Let us define B ⊆ Ω as the event of t → 0, then we know that since P (A) > 0 and P (B) = 1, where P is the probability function for events in Ω, P (A ∩ B) = P (A). 
Therefore, conditional on the event of A, we have that t a.s.−−→ 0 still holds. Now we consider any realization of A∩B. For the right-hand side of (22), as W̃ t is convergent and βtα−1t decreases to zero, by letting t approach infinity, we have that lim t→∞ t + βtα−1t ( W̃ t −W 0 ) = 0 + 0 ( W ∗ −W 0 ) = 0. By the Lipschitz continuity of ∇f , we have from (3) and (4) that 0 ≤ ∥∥∇f (W̃ t)−∇f (W t)∥∥ ≤ L∥∥W t − W̃ t∥∥. As {W t} and {W̃ t} converge to the same point, we see that ∥∥W t − W̃ t∥∥→ 0, so ∇f (W̃ t)− ∇f(W t) also approaches zero. Hence, the limit of the right-hand side of (22) is lim t→∞ ∇f ( W̃ t ) − ( ∇f ( W t ) + t + βtα−1t ( W̃ t −W 0 )) = 0. (23) On the other hand, for the left-hand side of (22), the outer semicontinuity of ∂ψ at W ∗ and the continuity of ∇f show that lim t→∞ ∇f(W̃ t) + ∂ψ(W̃ t) ⊆ ∂∇f(W ∗) + ψ (W ∗) = ∂F (W ∗). (24) Substituting (23) and (24) back into (22) then proves that 0 ∈ ∂F (W ∗) and thus W ∗ ∈ Z. A.4 Proof of Theorem 2 Proof. Our discussion in this proof are all under the event that W̃ t →W ∗. From the argument in Appendix A.3, we can view α−1t V t as ∇f(W t) plus some noise that asymptotically decreases to zero with probability one as shown in (20). From Lemma 1, we know that W t →W ∗. From (21), there is U t ∈ ∂ψ ( W̃ t ) such that U t = −α−1t V t + α−1t βt ( W̃ t −W 0 ) . (25) Moreover, we define γt := W t − W̃ t. (26) By combining (25)–(26) with (20), we obtain min Y ∈∂F (W̃ t) ‖Y ‖ ≤ ∥∥∇f (W̃ t)+ U t∥∥ = ∥∥∇f (W̃ t)−∇f (W t)− t − α−1t βt (W̃ t −W 0)∥∥ ≤ ∥∥∇f (W̃ t)−∇f (W t)∥∥+ ‖ t‖+ α−1t βt∥∥W̃ t −W 0∥∥ ≤L‖γt‖+ ‖ t‖+ α−1t βt (∥∥W ∗ − W̃ t∥∥+ ∥∥W 0 −W ∗∥∥) , (27) where we used the Lipschitz continuity of ∇f and the triangle inequality in the last inequality. We now separately bound the terms in (27). From that W t → W ∗ and W̃ t → W ∗, it is straightforward that ‖γt‖ → 0. The second term decreases to zero almost surely according to (20) and the argument in Appendix A.3. For the last term, since α−1t βt → 0, and∥∥W̃ t −W ∗∥∥→ 0, we know that α−1t βt ∥∥W 0 −W ∗∥∥→ 0, α−1t βt∥∥W̃ t −W ∗∥∥→ 0. Therefore, we conclude from the above argument and (27) that min Y ∈∂F (W̃ t) ‖Y ‖ a.s.−−→ 0. As f is smooth with probability one, we know that if ψ is partly smooth at W ∗ relative toM, then so is F = f + ψ with probability one. Moreover, Lipschitz-continuously differentiable functions are always prox-regular, and the sum of two prox-regular functions is still proxregular, so F is also prox-regular at W ∗ with probability one. Following the argument identical to that in Appendix A.3, we know that these probability one events are still probability one conditional on the event of W̃ t →W ∗ as this event has a nonzero probability. As W̃ t →W ∗ and ∇f(W̃ t) +U t a.s.−−→ 0 ∈ ∂F (W ∗) (the inclusion is from (5)), we have from the subdifferential continuity of ψ and the smoothness of f that F (W̃ t) a.s.−−→ F (W ∗). Since we also have W̃ t →W ∗ and minY ∈∂F (W̃ t) ‖Y ‖ a.s.−−→ 0, clearly( W̃ t, F ( W t ) , min Y ∈∂F(W̃ t) ‖Y ‖ ) a.s.−−→ (W ∗, F (W ∗), 0) . (28) Therefore, (28) and (5) together with the assumptions on ψ at W ∗ imply that with probability one, all conditions of Lemma 1 of Lee (2020) are satisfied, so from it, (6) holds almost surely, conditional on the event of W̃ t →W ∗. B Additional Discussions on Applications We now discuss in more technical details the applications in Section 4.1, especially regarding how the regularizers satisfy the properties required by our theory. 
B.1 Structured Sparsity We start our discussion with the simple `1 norm as the warm-up for the group-LASSO norm. It is clear that ‖W‖1 is a convex function that is finite everywhere, so it is prox-regular, subdifferentially continuous, and regular everywhere, hence we just need to discuss about the remaining parts in Definition 1. Consider a problem with dimension n > 0. Note that ‖x‖1 = n∑ i=1 |xi|, and the absolute value is smooth everywhere except the point of origin. Therefore, it is clear that ‖x‖1 is locally smooth if xi 6= 0 for all i. For any point x∗, when there is an index set I such that x∗i = 0 for all i ∈ I and x∗i 6= 0 for i /∈ I, we see that the part of the norm corresponds to IC (the complement of I):∑ i∈IC |x∗i | is locally smooth around x∗. Without loss of generality, we assume that I = {1, 2, . . . , k} for some k ≥ 0, then the subdifferential of ‖x‖1 at x∗ is the set {sgn(x1)} × · · · × {sgn(xk)} × [−1, 1]n−k, (29) and clearly if we move from x∗ along any direction y := (y1, . . . , yk, 0, . . . , 0) with a small step, the function value changes smoothly as it is a linear function, satisfying the first condition of Definition 1. Along the same direction y with a small enough step, the set of subdifferential remains the same, so the continuity of subdifferential requirement holds. We can also observe from the above argument that the manifold should be Mx∗ = {x | xi = 0,∀i ∈ I}, and clearly it is a subspace of Rn with its normal space at x∗ being N := {y | 〈x∗, y〉 = 0} = {y | yi = 0,∀i ∈ IC}, which is clearly the affine span of (29) with the translation being (sgn(x1)× · · ·× sgn(xk), 0, . . . , 0). Moreover, indeed the manifolds are low dimensional ones, and for iterates approaching x∗, staying in this active manifold means that the (unstructured) sparsity of the iterates is the same as the limit point x∗. We also provide a graphical illustration of ‖x‖1 with n = 2 in Fig. 3. We can observe that for any x with x1 6= 0 and x2 6= 0, the function is smooth locally around any point, meaning that ‖x‖1 is partly smooth relative to the whole space at x (so actually smooth locally around x). For x with x1 = 0, the function value corresponds to the sharp valley in the graph, and we can see that the function is smooth along the valley, and this valley corresponds to the one-dimensional manifold {x | x1 = 0} for partial smoothness. Next, we use the same graph to illustrate the importance of manifold identification. Consider that the red point x∗ = (0, 1.5) is the limit point of the iterates of a certain algorithm, and the yellow points and black points are two sequences that both converge to x∗. If the iterates of the algorithm are the black points, then clearly except for the limit point itself, all iterates are nonsparse, and thus the final output of the algorithm is also nonsparse unless we can get to exactly the limit point within finite iterations (which is usually impossible for iterative methods). On the other hand, if the iterates are the yellow points, this is the case that the manifold is identified, because all points sit in the valley and enjoy the same sparsity pattern as the limit point x∗. This is why we concern about manifold identification when we solve regularized optimization problems. From this example, we can also see an explanation for why our algorithm with the property of manifold identification performs better than other methods without such a property. Consider a Euclidean space any point x∗ with an index set I such that x∗I = 0 and |I| > 0. 
This means that x∗ has at least one coordinate being zero, namely x∗ contains sparsity. Now let 0 := min i∈IC |x∗i |, then from the definition of I, 0 > 0. Fro any sequence {xt} converging to x∗, for any ∈ (0, 0), we can find T ≥ 0 such that∥∥xt − x∗∥∥2 ≤ , ∀t ≥ T . Therefore, for any i /∈ I, we must have that xti 6= 0 for all t ≥ T . Otherwise, ‖xt − x∗‖2 ≥ 0, but 0 > ≥ ‖xt − x∗‖2, leading to a contradiction. On the other hand, for any i ∈ I, we can have xti 6= 0 for all t without violating the convergence. That being said, for any sequence converging to x∗, eventually the iterates cannot be sparser than x∗, so the sparsity level of x∗, or of its active manifold, is the local upper bound for the sparsity level of points converging to x∗. Therefore, if iterates of two algorithms converge to the same limit point, the one with a proven manifold identification ability clearly will produce a higher sparsity level. Similar to our example here, in applications other than sparsity, iterates converging to a limit point dwell on super-manifolds of the active manifold, and the active manifold is the minimum one that locally describes points with the same structure as the limit point, and thus identifying this manifold is equivalent to finding the locally most ideal structure of the application. Now back to the sparsity case. One possible concern is the case that the limit point is (0, 0) in the two-dimension example. In this case, the manifold is the 0-dimensional subspace {0}. If this is the case and manifold identification can be ensured, it means that limit point itself can be found within finite iterations. This case is known as the weak sharp minima (Burke & Ferris, 1993) in nonlinear optimization, and its associated finite termination property is also well-studied. For this example, We also see that ‖x‖1 is partly smooth at any point x∗, but the manifold differs with x∗. This is a specific benign example, and in other cases, partial smoothness might happen only locally at some points of interest instead of everywhere. Next, we further extend our argument above to the case of (7). This can be viewed as the `1 norm for each group and we can easily obtain similar results. Again, since the group-LASSO norm is also convex and finite everywhere, prox-regularity, regularity, and subdifferential continuity are not issues at all. For the other properties, we consider one group first, then the group-LASSO norm reduces to the `2 norm. Clearly, ‖x‖2 is smooth locally if x 6= 0, with the gradient being x/‖x‖2, but it is nonsmooth at the point x = 0, where the subdifferential is the unit ball. This is very similar to the absolute value, whose subdifferential at 0 is the interval [−1, 1]. Thus, we can directly apply similar arguments above, and conclude that for any W ∗, (7) is partly smooth at W ∗ with respect to the manifold MW∗ = {W | WIg = 0,∀g : W ∗Ig = 0}, which is again a lower-dimensional subspace. Therefore, the manifold of defining the partial smoothness for the group-LASSO norm exactly corresponds to its structured sparsity pattern. B.2 Binary Neural Networks We continue to consider the binary neural network problem. For easier description, for the Euclidean space E we consider, we will use a vectorized representation for W,A ∈ E such that the elements are enumerated as W1, . . . ,Wn and α1, . . . , αn. 
The corresponding optimization problem can therefore be written as

min_{W, A ∈ E}  E_{ξ∼D}[f_ξ(W)] + λ ∑_{i=1}^{n} ( α_i (w_i + 1)^2 + (1 − α_i)(w_i − 1)^2 + δ_{[0,1]}(α_i) ),  (30)

where, given any set C, δ_C is the indicator function of C, defined as δ_C(x) = 0 if x ∈ C, and ∞ otherwise. We see that except for the indicator function part, the objective is smooth, so the real partly smooth term that we treat as the regularizer is

Φ(α) := ∑_{i=1}^{n} δ_{[0,1]}(α_i).

We note that for α_i ∈ (0, 1), the value of δ_{[0,1]}(α_i) remains a constant zero in a neighborhood of α_i, and for α_i ∉ [0, 1], the indicator function is also constantly infinite within a neighborhood. Thus, nonsmoothness happens only at α_i ∈ {0, 1}, and similar to our discussion in the previous subsection, Φ is partly smooth along directions that keep those α_i at the boundary (namely, either 0 or 1) unchanged. The identified manifold therefore corresponds to the entries of α_i that are fixed at 0 or 1, and this can serve as the indicator for the desired binary pattern in this task.

C Experiment Setting Details

For the weights w_g of each group in (7), for all experiments in Section 5, we follow Deleu & Bengio (2021) to set w_g = √|I_g|. All ProxSSI parameter settings, excluding the regularization weight and the learning rate schedule, follow the default values in their package. Tables 3 to 13 provide detailed settings of Section 5.2. For the modified VGG19 model, we follow Deleu & Bengio (2021) to eliminate all fully-connected layers except the output layer, and add one batch-norm layer (Ioffe & Szegedy, 2015) after each convolutional layer to simulate modern CNNs like those proposed in He et al. (2016); Huang et al. (2017). For ResNet50 in the structured sparsity experiment in Section 5.2, our version of ResNet50 is the one constructed by the publicly available script at https://github.com/weiaicunzai/pytorch-cifar100. In the unstructured sparsity experiment presented in Section 5.3, for better comparison with existing works in the pruning literature, we adopt the version of ResNet50 used by Sundar & Dwaraknath (2021).3 Table 14 provides detailed settings of Section 5.3. For RigL, we use the PyTorch implementation of Sundar & Dwaraknath (2021).

D More Results from the Experiments

In this section, we provide more details of the results of the experiments we conducted in the main text. In particular, in Fig. 4, we present the change of validation accuracies and group sparsity levels with epochs for the group sparsity tasks in Section 5.2. We then present in Fig. 5 validation accuracies and unstructured sparsity levels versus epochs for the task in Section 5.3. We note that although it takes more epochs for RMDA to fully stabilize in terms of manifold identification, the sparsity level usually only changes in a very limited range once (sometimes even before) the validation accuracy becomes steady, meaning that we do not need to run the algorithm for an unreasonably long time to obtain satisfactory results.

3 https://github.com/varun19299/rigl-reproducibility

E Other Regularizers for Possibly Better Group Sparsity and Generalization

A downside of (7) is that it pushes all groups toward zero and thus introduces bias in the final model. As a remedy, the minimax concave penalty (MCP, Zhang, 2010) has been proposed to penalize only the groups whose norm is smaller than a user-specified threshold.
More precisely, given hyperparameters λ ≥ 0 and ω ≥ 1, the one-dimensional MCP is defined by

MCP(w; λ, ω) := λ|w| − w^2 / (2ω)  if |w| < ωλ,   and   ωλ^2 / 2  if |w| ≥ ωλ.

One can then apply the above formulation to the norm of a vector to achieve the effect of inducing group sparsity. In our case, given an index set I_g that represents a group, the MCP for this group is then computed as (Breheny & Huang, 2009)

MCP(W_{I_g}; λ_g, ω_g) := λ_g ‖W_{I_g}‖ − ‖W_{I_g}‖^2 / (2ω_g)  if ‖W_{I_g}‖ < ω_g λ_g,   and   ω_g λ_g^2 / 2  if ‖W_{I_g}‖ ≥ ω_g λ_g.

We then consider

ψ(W) = ∑_{g=1}^{|G|} MCP(W_{I_g}; λ_g, ω_g).  (31)

It is shown in Deleu & Bengio (2021) that group MCP regularization may simultaneously provide higher group sparsity and better validation accuracy than the group-LASSO norm in vision and language tasks. Another possibility to enhance sparsity is to add an ℓ1-norm or entry-wise MCP regularization to the group-level regularizer. The major drawback of these approaches is the requirement of additional hyperparameters, and we prefer simpler approaches over those with more hyperparameters, as hyperparameter tuning in the latter can be troublesome for users with limited computational resources, and using a simpler setting can also help us focus on the comparison of the algorithms themselves. The experiment in this subsection is therefore only for illustrating that these more complicated regularizers can be combined with RMDA if the user wishes, and such regularizers might lead to better results. Therefore, we train a version of LeNet5, which is slightly simpler than the one we used in previous experiments, on the MNIST dataset with such regularizers using RMDA and display the respective performance of the various regularization schemes in Fig. 6. For the weights w_g of each group in (7), in this experiment we consider the following setting. Let L_i be the collection of all index sets that belong to the i-th layer in the network, and denote by N_{L_i} := ∑_{I_j ∈ L_i} |I_j| the number of parameters in this layer. For all i, we set w_g = √N_{L_i} for all g such that I_g ∈ L_i. Given two constants λ > 0 and ω > 1, the values of λ_g and ω_g in (31) are then assigned as λ_g = λ w_g and ω_g = ω w_g. In this figure, group LASSO is abbreviated as GLASSO; ℓ1-norm plus a group-LASSO norm, L1GLASSO; group MCP, GMCP; element-wise MCP plus group MCP, L1GMCP. Our results exemplify that, with proper hyperparameter tuning, different regularization schemes might have different benefits on one of the criteria. The detailed numbers are reported in Table 15, and the experiment settings can be found in Tables 16 and 17 (an illustrative sketch of the group-MCP penalty follows Table 16 below).

Table 15: Results of training LeNet5 on MNIST using RMDA with different regularizers. We report the mean and standard deviation of three independent runs.

Regularizer | Validation accuracy | Group sparsity
GLASSO   | 99.11 ± 0.06% | 45.33 ± 0.99%
L1GLASSO | 99.02 ± 0.01% | 58.92 ± 1.30%
GMCP     | 99.25 ± 0.08% | 32.81 ± 0.96%
L1GMCP   | 99.21 ± 0.03% | 32.91 ± 0.35%

Table 16: Details of the modified simpler LeNet5 for the experiment in Appendix E. https://github.com/zihsyuan1214/rmda/blob/master/Experiments/Models/lenet5_small.py.
Parameter | Value
Number of layers | 5
Number of convolutional layers | 3
Number of fully-connected layers | 2
Size of convolutional kernels | 5 × 5
Number of output filters (layers 1, 2) | 6, 16
Number of output neurons (layers 3, 4, 5) | 120, 84, 10
Kernel size, stride, padding of max pooling | 2 × 2, none, invalid
Operations after convolutional layers | max pooling
Activation function for convolutional / output layer | relu / softmax
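To complement the description of the group-MCP regularizer (31) in Appendix E, the following is a small Python sketch (ours, not taken from any released package; `groups` is an assumed list of per-group weight tensors and `weights` the corresponding w_g) of the penalty with the per-group hyperparameters λ_g = λ w_g and ω_g = ω w_g used above:

import torch

def group_mcp(W_group, lam_g, omega_g):
    # Group MCP (31): lam_g * ||W|| - ||W||^2 / (2 * omega_g) if ||W|| < omega_g * lam_g,
    # and the constant omega_g * lam_g^2 / 2 otherwise.
    norm = W_group.norm()
    if norm < omega_g * lam_g:
        return lam_g * norm - norm ** 2 / (2.0 * omega_g)
    return norm.new_tensor(omega_g * lam_g ** 2 / 2.0)

def group_mcp_regularizer(groups, weights, lam, omega):
    # Sum of group-MCP terms over all groups, with lam_g = lam * w_g and omega_g = omega * w_g.
    return sum(group_mcp(W, lam * w, omega * w) for W, w in zip(groups, weights))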
1. What is the focus and contribution of the paper regarding training neural networks with desired structures? 2. What are the strengths of the proposed algorithm, particularly in terms of its practicality and theoretical foundation? 3. Do you have any concerns or questions regarding the presentation and claims made in the paper, especially regarding its significance and implications? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any specific details or explanations missing in the paper that the reviewer would like to see added or clarified?
Summary Of The Paper Review
Summary Of The Paper This paper proposes an algorithm for training neural networks with a regularization term for promoting desired structures in the model. The paper (claims to) prove that the proposed method can correctly identify within finite steps the underlying structure. The simulations then show that the method outperforms existing packages in identifying structured sparsity without compromising prediction accuracy. Review Strengths The paper introduces an algorithm for training neural networks with a regularization term for promoting desired structures in the model. The new method does not incur computation additional to SGD with momentum and could be of practical use. The ideas of the paper are based on well-founded rationales. In general, the paper is quite well-written, with the main ideas outlined clearly. The approach is well-motivated and seems to be built upon established literature. The review of related works is informative. The main theoretical results of the paper (Theorem 1 and Theorem 2) are non-trivial and meaningful, and the proofs are rigorous. Weakness I have a few concerns with the manuscript in its current state. In my opinion, the paper overstates the significance and implications of its results in its presentation. More precisely: In the abstract, introduction, and throughout the discussion, the paper claims that "we rigorously prove that the proposed method can correctly identify within finite steps the underlying structure". In the experiments, they conclude that "RMDA is the only algorithm that steadily identifies the correct structured sparsity". I think those are not correct characterizations of the results obtained in this work: Roughly speaking, Theorem 2 for structured sparsity of neural networks just proves that: If the estimator converges to some stationary point W, then its sparsity converges to the sparsity of W within finite step Similarly, the experiments just show that RMDA has a stable structured sparsity While these results are non-trivial, they concern the stability of the algorithm, and have little to do with correctness: a bad algorithm that got stuck at a non-optimal stationary point also satisfies those properties. To obtain what the paper claims, some more work is needed. In the theory part, results that are equivalent to model-selection consistency for linear models are required. For the simulations part, some experiments with simulated data, when a measure of deviation from the "ground truth" or "correct/optimal sparsity" could be recognized, are necessary. While the results of Section 3 of the manuscript are rigorous, the applications to neural networks (Section 4) lack several important details. When moving from a general framework to a specific case (Section 4.1 and 4.2), two central concepts: the corresponding active manifold, the precise description of the limit point W^* are not defined explicitly. This causes significant unnecessary confusion and makes it harder to judge the accuracy and the importance of the results. Specifically, In Definition 1: M is required to be a C^2 manifold, and in general, should be independent of the limit point W^* In Section 4.1 (Structured sparsity), the active manifold seems to be defined as a hyperplane around (and depends on) W^*. To make this fit into the framework, W^* needs to be unique for all random realizations (but this is unlikely). 
If this is not the case, then M is a union of several hyperplanes of different dimensions (correspond to different limit points for each realization) and is no longer a manifold. I believe a clear description of the active manifolds for each case and some verification of their geometric properties would help improve the manuscript. My last concern about the theoretical analysis is its (somewhat hidden and informally defined) assumption that the tentative iterate {W ̃ t} converges almost surely to a certain point W∗. This statement can be interpreted in different ways and needs to be spelled out mathematically. My main question is: Is W* the same for all realizations of randomness? In principle, this is not true, but the analysis was written as if that was the case (see point (2) below). The manuscript also stated that verifying this assumption is possible, but these are not the purpose of this work, so they avoid distracting the readers from such technical results. However, it is worth pointing out that W ̃ and W are intertwined in their constructions, and making the convergence of the W~ an assumption is a bit too strong. Even if rigorous results don't need to be provided, some examples and quick discussions about this should be included, perhaps in the appendix. This part will also help resolve the lack of understanding about the structure of W* described above. Other comments and questions Beginning of Section 3: "In stark contrast to our results, it is actually well-known in convex optimization that those algorithms based on proximal stochastic gradient are unable to identify the manifold within finite iterations". —> This statement needs more references.
ICLR
Title Gradient descent temporal difference-difference learning Abstract Off-policy algorithms, in which a behavior policy differs from the target policy and is used to gain experience for learning, have proven to be of great practical value in reinforcement learning. However, even for simple convex problems such as linear value function approximation, these algorithms are not guaranteed to be stable. To address this, alternative algorithms that are provably convergent in such cases have been introduced, the most well known being gradient descent temporal difference (GTD) learning. This algorithm and others like it, however, tend to converge much more slowly than conventional temporal difference learning. In this paper we propose gradient descent temporal difference-difference (GradientDD) learning in order to improve GTD learning by introducing second-order differences in successive parameter updates. We investigate this algorithm in the framework of linear value function approximation, analytically showing its improvement over GTD learning. Studying the model empirically on the random walk and Boyan-chain prediction tasks, we find substantial improvement over GTD learning and, in several cases, better performance even than conventional TD learning. 1 INTRODUCTION Off-policy algorithms for value function learning enable an agent to use a behavior policy that differs from the target policy in order to gain experience for learning. However, because off-policy methods learn a value function for a target policy given data due to a different behavior policy, they often exhibit greater variance in parameter updates. When applied to problems involving function approximation, off-policy methods are slower to converge than on-policy methods and may even diverge (Baird, 1995; Sutton & Barto, 2018). Two general approaches have been investigated to address the challenge of developing stable and effective off-policy temporal-difference algorithms. One approach is to use importance sampling methods to warp the update distribution back to the on-policy distribution (Precup et al., 2000; Mahmood et al., 2014). This approach is useful for decreasing the variance of parameter updates, but it does not address stability issues. The second main approach to addressing the challenge of off-policy learning is to develop true gradient descent-based methods that are guaranteed to be stable regardless of the update distribution. Sutton et al. (2009a;b) proposed the first off-policy gradientdescent-based temporal difference (GTD and GTD2, respectively) algorithms. These algorithms are guaranteed to be stable, with computational complexity scaling linearly with the size of the function approximator. Empirically, however, their convergence is much slower than conventional temporal difference (TD) learning, limiting their practical utility (Ghiassian et al., 2020; White & White, 2016). Building on this work, extensions to the GTD family of algorithms (see Ghiassian et al. (2018) for a review) have allowed for incorporating eligibility traces (Maei & Sutton, 2010; Geist & Scherrer, 2014), non-linear function approximation such as with a neural network (Maei, 2011), and reformulation of the optimization as a saddle point problem (Liu et al., 2015; Du et al., 2017). However, due to their slow convergence, none of these stable off-policy methods are commonly used in practice. In this work, we introduce a new gradient descent algorithm for temporal difference learning with linear value function approximation. 
This algorithm, which we call gradient descent temporal difference-difference (Gradient-DD) learning, is an acceleration technique that employs second- order differences in successive parameter updates. The basic idea of Gradient-DD is to modify the error objective function by additionally considering the prediction error obtained in last time step, then to derive a gradient-descent algorithm based on this modified objective function. In addition to exploiting the Bellman equation to get the solution, this modified error objective function avoids drastic changes in the value function estimate by encouraging local search around the current estimate. Algorithmically, the Gradient-DD approach only adds an additional term to the update rule of the GTD2 method, and the extra computational cost is negligible. We show mathematically that applying this method significantly improves the convergence rate relative to the GTD2 method for linear function approximation. This result is supported by numerical experiments, which also show that Gradient-DD obtains better convergence in many cases than conventional TD learning. 1.1 RELATED WORK In related approaches to ours, some previous studies have attempted to improve Gradient-TD algorithms by adding regularization terms to the objective function. Liu et al. (2012) have used l1 regularization on weights to learn sparse representations of value functions, and Ghiassian et al. (2020) has used l2 regularization on weights. Unlike these references, our approach modifies the error objective function by regularizing the evaluation error obtained in the most recent time step. With this modification, our method provides a learning rule that contains second-order differences in successive parameter updates. Our approach is similar to trust region policy optimization (Peters & Schaal, 2008; Schulman et al., 2015) or relative entropy policy search (Peters et al., 2010), which penalize large changes being learned in policy learning. In these methods, constrained optimization is used to update the policy by considering the constraint on some measure between the new policy and the old policy. Here, however, our aim here is to look for the optimal value function, and the regularization term uses the previous value function estimate to avoid drastic changes in the updating process. 2 GRADIENT DESCENT METHOD FOR OFF-POLICY TEMPORAL DIFFERENCE LEARNING 2.1 PROBLEM DEFINITION AND BACKGROUND In this section, we formalize the problem of learning the value function for a given policy under the Markov Decision Process (MDP) framework. In this framework, the agent interacts with the environment over a sequence of discrete time steps, t = 1, 2, . . .. At each time step the agent observes a partial summary of the state st ∈ S and selects an action at ∈ A. In response, the environment emits a reward rt ∈ R and transitions the agent to its next state st+1 ∈ S. The state and action sets are finite. State transitions are stochastic and dependent on the immediately preceding state and action. Rewards are stochastic and dependent on the preceding state and action, as well as on the next state. The process generating the agent’s actions is termed the behavior policy. In off-policy learning, this behavior policy is in general different from the target policy π : S → A. The objective is to learn an approximation to the state-value function under the target policy in a particular environment: V (s) = Eπ [ ∞∑ t=1 γt−1rt|s1 = s ] , (1) where γ ∈ [0, 1) is the discount rate. 
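As a concrete reading of Eqn. (1), the following small NumPy sketch (our own illustration; rollout_fn is an assumed helper that returns the reward sequence of one episode generated under the target policy π) estimates V(s) by averaging discounted returns:

import numpy as np

def discounted_return(rewards, gamma):
    # sum_t gamma^(t-1) * r_t for one episode, with rewards[0] = r_1 (cf. Eqn. (1)).
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def mc_value_estimate(rollout_fn, s, gamma, n_episodes=1000):
    # Monte Carlo estimate of V(s): average discounted return over episodes started at s.
    returns = [discounted_return(rollout_fn(s), gamma) for _ in range(n_episodes)]
    return float(np.mean(returns))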
In problems for which the state space is large, it is practical to approximate the value function. In this paper we consider linear function approximation, where states are mapped to feature vectors with fewer components than the number of states. Specifically, for each state s ∈ S there is a corresponding feature vector x(s) ∈ R^p, with p ≤ |S|, such that the approximate value function is given by

V_w(s) := w^⊤ x(s).  (2)

The goal is then to learn the parameters w such that V_w(s) ≈ V(s).

2.2 GRADIENT TEMPORAL DIFFERENCE LEARNING

A major breakthrough for the study of the convergence properties of MDP systems came with the introduction of the GTD and GTD2 learning algorithms (Sutton et al., 2009a;b). We begin by briefly recapitulating the GTD algorithms, which we will then extend in the following sections. To begin, we introduce the Bellman operator B such that the true value function V ∈ R^{|S|} satisfies the Bellman equation:

V = R + γPV =: BV,

where R is the reward vector with components E(r_{n+1} | s_n = s), and P is a matrix of state transition probabilities. In temporal difference methods, an appropriate objective function should minimize the difference between the approximate value function and the solution to the Bellman equation.

Having defined the Bellman operator, we next introduce the projection operator Π, which takes any value function V and projects it to the nearest value function within the space of approximate value functions of the form (2). Letting X be the matrix whose rows are x(s), the approximate value function can be expressed as V_w = Xw. We will also assume that there exists a limiting probability distribution such that d_s = lim_{n→∞} p(s_n = s) (or, in the episodic case, d_s is the proportion of time steps spent in state s). The projection operator is then given by Π = X(X^⊤ D X)^{-1} X^⊤ D, where the matrix D is diagonal, with diagonal elements d_s.

The natural measure of how closely the approximation V_w satisfies the Bellman equation is the mean-squared Bellman error:

MSBE(w) = ‖V_w − BV_w‖²_D,  (3)

where the norm is weighted by D, such that ‖V‖²_D = V^⊤ D V. However, because the Bellman operator follows the underlying state dynamics of the Markov chain, irrespective of the structure of the linear function approximator, BV_w will typically not be representable as V_w for any w. An alternative objective function, therefore, is the mean squared projected Bellman error (MSPBE), which we define as

J(w) = ‖V_w − ΠBV_w‖²_D.  (4)

Following Sutton et al. (2009b), our objective is to minimize this error measure. As usual in stochastic gradient descent, the weights at each time step are then updated by Δw = −α∇_w J(w), where α > 0, and

−(1/2)∇_w J(w) = −E[(γx_{n+1} − x_n) x_n^⊤] [E(x_n x_n^⊤)]^{-1} E(δ_n x_n) ≈ −E[(γx_{n+1} − x_n) x_n^⊤] η.  (5)

For notational simplicity, we have denoted the feature vector associated with s_n as x_n = x(s_n). We have also introduced the temporal difference error δ_n = r_n + (γx_{n+1} − x_n)^⊤ w_n, as well as η, a linear predictor to approximate [E(x_n x_n^⊤)]^{-1} E(δ_n x_n). Because the factors in Eqn. (5) can be directly sampled, the resulting updates in each step are

δ_n = r_n + (γx_{n+1} − x_n)^⊤ w_n,
η_{n+1} = η_n + β_n (δ_n − x_n^⊤ η_n) x_n,
w_{n+1} = w_n − α_n (γx_{n+1} − x_n)(x_n^⊤ η_n).  (6)

These updates define the GTD2 learning algorithm, which we will build upon in the following section.

3 GRADIENT DESCENT TEMPORAL DIFFERENCE-DIFFERENCE LEARNING

In order to improve the GTD2 algorithm described above, in this section we modify the objective function via additionally considering the approximation error V_w − V_{w_{n−1}} given the previous time step n − 1.
Specifically, we modify Eqn. (4) as follows: JGDD(w|wn−1) = J(w) + κ‖Vw −Vwn−1‖2D, (7) Figure 1: Schematic diagram of Gradient-DD learning with w ∈ R2. Rather than updating w directly along the gradient of the MSPBE (arrow), the update rule selects wn that minimizes the MSPBE while satisfying the constraint ‖Vw −Vwn−1‖2D ≤ µ (shaded ellipse). where κ ≥ 0 is a parameter of the regularization. Minimizing Eqn. (7) is equivalent to the following optimization arg min w J(w) s.t. ‖Vw −Vwn−1‖2D ≤ µ (8) where µ > 0 is a parameter which becomes large when κ is small, so that the MSPBE objective is recovered as µ→∞, equivalent to κ→ 0 in Eqn. (7). We show in the Appendix that for any µ > 0, there exist κ ≥ 0 such that the solution of Eqn. (7) and that of Eqn. (8) are the same. Eqns. (7) and (8) represent a tradeoff between minimizing the MSPBE error and preventing the estimated value function from changing too drastically. Rather than simply minimizing the optimal prediction from the projected Bellman equation, the agent makes use of the most recent update to look for the solution. Figure 1 gives a schematic view of the effect of the regularization. Rather than directly following the direction of the MSPBE gradient, the update chooses a w that minimizes the MSPBE while following the constraint that the estimated value function should not change too greatly. In effect, the regularization term encourages searching around the estimate at previous time step, especially when the state space is large. With these considerations in mind, the negative gradient of JGDD(w|wn−1) is − 1 2 ∇wJGDD(w|wn−1) =− E[(γxn+1 − xn)x>n ][E(xnx>n )]−1E(δnxn)− κE[(x>nwn − x>nwn−1)xn] ≈− E[(γxn+1 − xn)x>n ]ηn − κE[(x>nwn − x>nwn−1)xn]. (9) Because the terms in Eqn. (9) can be directly sampled, the stochastic gradient descent updates are given by δn =rn + (γxn+1 − xn)>wn ηn+1 =ηn + βn(δn − x>n ηn)xn wn+1 =wn − κn(x>nwn − x>nwn−1)xn − αn(γxn+1 − xn)(x>n ηn). (10) These update equations define the Gradient-DD method, in which the GTD2 update equations (6) are generalized by including a second-order update term in the third update equation, where this term originates from the squared bias term in the objective (7). In the following sections, we shall analytically and numerically investigate the convergence and performance of Gradient-DD learning. 4 IMPROVED CONVERGENCE RATE In this section we analyze the convergence rate of Gradient-DD learning. Note that the second-order update in the last line in Eqn. (10) can be rewritten as a system of first-order difference equations: (I + κnxnx > n )(wn+1 −wn) =κnxnx>n (un+1 − un)− αn(γxn+1 − xn)(x>n ηn); un+1 =wn+1 −wn. (11) Let βn = ζαn, ζ > 0. We consider constant step sizes in the updates, i.e., κn = κ and αn = α. Denote Hn = [ 0 0 0 xnx > n ] and Gn = [ √ ζxnx > n xn(xn − γxn+1)> −(xn − γxn+1)x>n 0 ] . We rewrite the update rules of two iterations in Eqn. (11) as a single iteration in a combined parameter vector with 2n components, ρn = (η > n / √ ζ,w>n ) >, and a new reward-related vector with 2n components, gn+1 = (rnx > n ,0 >)>, as follows: ρn+1 =ρn − κHn(ρn − ρn−1) + √ ζα(Gnρn + gn+1), (12) Denoting ψn+1 = α −1(ρn+1 − ρn), Eqn. (12) is rewritten as[ ρn+1 − ρn ψn+1 −ψn ] =α [ I + κHn −καHn I −αI ]−1 [ −√ζ(Gnρn − gn+1) ψn ] =α [ − √ ζGn −κHn − √ ζα−1Gn −α−1(I + κHn) ] [ ρn ψn ] + α [ √ ζgn+1√ ζα−1gn+1 ] , (13) where the second step is from [ I + κHn −καHn I −αI ]−1 = [ I −κHn α−1I −α−1(I + κHn) ] . De- note Jn = [ − √ ζGn −κHn − √ ζα−1Gn −α−1(I + κHn) ] . Eqn. 
(13) tells us that Jn is the update matrix of the Gradient-DD algorithm. (Note that Gn is the update matrix of the GTD2 algorithm.) Therefore, assuming the stochastic approximation in Eqn. (13) goes to the solution of an associated ordinary differential equation (ODE) under some regularity conditions (a convergence property is provided in the appendix by following Borkar & Meyn (2000)), we can analyze the improved convergence rate of Gradient-DD learning by comparing the eigenvalues of the matrices E(Gn) denoted by G, and E(Jn) denoted by J (Atkinson et al., 2008). Obviously, J = [ − √ ζG −κH − √ ζα−1G −α−1(I + κH) ] , where H = E(Hn). To simplify, we consider the case that the matrix E(xnx>n ) = I. Let λG be a real eigenvalue of the matrix √ ζG. (Note that G is defined here with opposite sign relative to G in Maei (2011).) From Maei (2011), the eigenvalues of the matrix −G are strictly negative. In other words, λG > 0. Let λ be an eigenvalue of the matrix J, i.e. a solution to the equation |λI− J| =(λ+ λG)(λ+ α−1) + κα−1λ = λ2 + [α−1(1 + κ) + λG]λ+ α−1λG = 0. (14) The smaller eigenvalues λm of the pair solutions to Eqn. (14) are λm < −λG, where details of the above derivations are given in the appendix. This explains the enhanced speed of convergence in Gradient-DD learning. We shall illustrate this enhanced speed of convergence in numerical experiments in Section 5. Additionally, we also show a convergence property of Gradient-DD under constant step sizes by applying the ordinary differential equation method of stochastic approximation (Borkar & Meyn, 2000). Let the TD fixed point be w∗, such that Vw∗ = ΠBVw∗ . Under some conditions, we prove that, for any > 0, there exists b1 < ∞ such that lim sup n→∞ P (‖wn − w∗‖ > ) ≤ b1α. Details are provided in the appendix. For tapered step sizes, which would be necessary to obtain an even stronger convergence proof, the analysis framework in Borkar & Meyn (2000) does not apply into the Gradient-DD algorithm. Although theoretical investigation of the convergence under tapered step sizes is a question to be studied, we find empirically in numerical experiments that the algorithm does in fact converge with tapered step sizes and even obtains much better performance in this case than with fixed step sizes. 5 EMPIRICAL STUDY In this section, we assess the practical utility of the Gradient-DD method in numerical experiments. To validate performance of Gradient-DD learning, we compare Gradient-DD learning with GTD2 learning, TDC learning (TD with gradient correction (Sutton et al., 2009b)), TD learning, and Emphatic TD learning (Sutton & Mahmood, 2016) in tabular representation using a random-walk task and in linear representation using the Boyan-chain task. For each method and each task, we performed a scan over the step sizes αn and the parameter κ so that the comprehensive performance of the different algorithms can be compared. We considered two choices of step size sequence {αn}: • (Case 1) αn is constant, i.e., αn = α0. • (Case 2) The learning rate αn is tapered according to the schedule αn = α0(103 + 1)/(103 + n). We set the κ = cα0 where c = 1, 2, 4. Additionally, we also allow κ dependent on n and consider Case 3: αn is tapered as in Case 2, but κn = cαn. In order to simplify presentation, the results of Case 3 are reported in the Appendix. To begin, we set βn = αn, then later allow for βn = ζαn under ζ ∈ {1/4, 1/2, 1, 2} in order to investigate the effect of the two-timescale approach of the Gradient-based TD algorithms on Gradient-DD. 
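Before turning to the individual tasks, the following NumPy sketch (our own illustration rather than reference code) spells out the Gradient-DD update of Eqn. (10) that is evaluated in these experiments; it differs from the GTD2 step of Eqn. (6) only by the κ-term involving the previous weight vector w_{n−1} (here w_prev), so setting kappa = 0 recovers GTD2:

import numpy as np

def gradient_dd_step(w, w_prev, eta, x_n, x_next, r_n, alpha, beta, kappa, gamma):
    # One Gradient-DD update, Eqn. (10).
    delta = r_n + (gamma * x_next - x_n) @ w            # TD error delta_n
    eta_new = eta + beta * (delta - x_n @ eta) * x_n
    w_new = (w
             - kappa * (x_n @ w - x_n @ w_prev) * x_n   # second-order difference term
             - alpha * (gamma * x_next - x_n) * (x_n @ eta))
    return w_new, w, eta_new                            # current w becomes w_prev at the next step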
In all cases, we set γ = 1. 5.1 RANDOM WALK TASK As a first test of Gradient-DD learning, we conducted a simple random walk task (Sutton & Barto, 2018) with tabular representation of the value function. The random walk task has a linear arrangement ofm states plus an absorbing terminal state at each end. Thus there arem+2 sequential states, S0, S1, · · · , Sm, Sm+1, where m = 20, 50, or 100. Every walk begins in the center state. At each step, the walk moves to a neighboring state, either to the right or to the left with equal probability. If either edge state (S0 or Sm+1) is entered, the walk terminates. A walk’s outcome is defined to be r = 0 at S0 and r = 1 at Sm+1. Our aim is to learn the value of each state V (s), where the true values are (1, · · · ,m)/(m+ 1). In all cases the approximate value function is initialized to the intermediate value V0(s) = 0.5. In order to investigate the effect of the initialization V0(s), we also initialize V0(s) = 0, and report the results in Figure 7 of the Appendix, where its performance is very similar as the initialization V0(s) = 0.5. We first compare the methods by plotting the empirical RMS error from the final episode during training as a function of step size α in Figure 2, where 5000 episodes are used. From the figure, we can make several observations. (1) Emphatic TD works well but is sensitive to α. It prefers very small α even in the tapering case, and this preference becomes strong as the state space becomes large in size. (2) Gradient-DD works well and is robust to α, as is conventional TD learning. (3) TDC performs similarly to the GTD2 method, but requires slightly larger α than GTD2. (4) Gradient-DD performs similarly to conventional TD learning and better than the GTD2 method. This advantage is consistent in different settings. (5) The range of α leading to effective learning for Gradient-DD is roughly similar to that for GTD2. Next we look closely at the performance during training, which we show in Figure 3, where each method and parameter setting was run for 5000 episodes. From the observations in Figure 2, in order to facilitate comparison of these methods, we set α0 = 0.1 for 10 spaces, α0 = 0.2 for 20 spaces, and α0 = 0.5 for 50 spaces. Because Emphatic TD requires the step size α to be especially small as shown in Figure 2, the plotted values of α0 for Emphatic TD are tuned relative to the values used in the algorithm defined in Sutton & Mahmood (2016), where the step sizes of Emphatic TD α(ETD)0 are chosen from {0.5%, 0.1%, 0.05%, 0.01%} by the smallest area under the performance curve. Additionally we also tune α0 for TDC because TDC requires αn larger a little than GTD2 as shown in Figure 2. The step sizes for TDC are set as α(TDC)n = aαn, where a is chosen from {1, 1.5, 2, 3} by the smallest area under the performance curve. From the results shown in Figure 3a, we draw several observations. (1) For all conditions tested, Gradient-DD converges much more rapidly than GTD2 and TDC. The results indicate that GradientDD even converges faster than TD learning in some cases, though it is not as fast in the beginning episodes. (2) The advantage of Gradient-DD learning over other methods grows as the state space increases in size. (3) Gradient-DD learning is robust to the choice of c, which controls the size κ of the second-order update, as long as c is not too large. (Empirically c = 2 is a good choice.) 
(4) Gradient-DD has consistent and good performance under both the constant step size setting and under the tapered step size setting. In summary, compared with GTD2 learning and other methods, Gradient-DD learning in this task leads to improved learning with good convergence. In addition to investigating the effects of the learning rate, size of the state space, and magnitude of the regularization parameter, we also investigated the effect of using distinct values for the two learning rates, αn and βn. To do this, we set βn = ζαn with ζ ∈ {1/4, 1/2, 1, 2} and report the results in Figure 8 of the appendix. The results show that comparably good performance of Gradient-DD is obtained under these various βn settings. 5.2 BOYAN-CHAIN TASK We next investigate Gradient-DD learning on the Boyan-chain problem, which is a standard task for testing linear value-function approximation (Boyan, 2002). In this task we allow for 4p − 3 states, with p = 20, each of which is represented by a p-dimensional feature vector. The p-dimensional representation for every fourth state from the start is [1, 0, · · · , 0] for state s1, [0, 1, 0, · · · , 0] for s5, · · · , and [0, 0, · · · , 0, 1] for the terminal state s4p−3. The representations for the remaining states are obtained by linearly interpolating between these. The optimal coefficients of the feature vector are (−4(p − 1),−4(p − 2), · · · , 0)/5. Simulations with p = 50 and 100 give similar results to those from the random walk task, and hence are not shown here. In each state, except for the last one before the end, there are two possible actions: move forward one step or move forward two steps with equal probability 0.5. Both actions lead to reward -0.3. The last state before the end just has one action of moving forward to the terminal with reward -0.2. As in the random-walk task, α0 used in Emphatic TD is tuned from {0.5%, 0.2%, 0.1%, 0.05%}. We report the results in Figure 4, which leads to conclusions similar to those already drawn from Figure 3. (1) Gradient-DD has much faster convergence than GTD2 and TDC, and generally converges to better values despite being somewhat slower than TD learning at the beginning episodes. (2) Gradient-DD is competitive with Emphatic TD. The improvement over other methods grows as the state space becomes larger. (3) As κ increases, the performance of Gradient-DD improves. Additionally, the performance of Gradient-DD is robust to changes in κ as long as κ is not very large. Empirically a good choice is to set κ = α or 2α. (4) Comparing the performance with constant step size versus that with tapered step size, the Gradient-DD method performs better with tapered step size than it does with constant step size. 5.3 BAIRD’S COUNTEREXAMPLE We also verify the performance of Gradient-DD on Baird’s off-policy counterexample (Baird, 1995), for which TD learning famously diverges. We consider three cases: 7-state, 100-state and 500-state. We set α = 0.02 (but α = 10−5 for ETD), β = α and γ = 0.99. We set κ = 0.2 for GDD1, κ = 0.4 for GDD2 and κ = 0.8 for GDD3. For the initial parameter values (1, · · · , 1, 10, 1)>. We measure the performance by the empirical RMS errors as function of sweep, and report the results in Figure 5. The figure demonstrates that Gradient-DD works as well on this well-known counterexample as GTD2 does, and even works better than GTD2 for the 100-state case. We also observe that the performance improvement of Gradient-DD increases as the state spaces increases. 
We also note that, because the linear approximation leaves a residual error in the value estimation due to the projection error, the RMS errors in this task do not go to zero. Interestingly, Gradient-DD reduces this residual error as the size of the state space increases.

6 CONCLUSION AND DISCUSSION

In this work, we have proposed Gradient-DD learning, a new gradient descent-based TD learning algorithm. The algorithm is based on a modification of the projected Bellman error objective function for value function approximation that introduces a second-order difference term. The algorithm significantly improves upon existing methods for gradient-based TD learning, obtaining better convergence performance than conventional linear TD learning. Since GTD learning was originally proposed, the Gradient-TD family of algorithms has been extended to incorporate eligibility traces and the learning of optimal policies (Maei & Sutton, 2010; Geist & Scherrer, 2014), as well as for application to neural networks (Maei, 2011). Additionally, many variants of the vanilla Gradient-TD methods have been proposed, including HTD (Hackman, 2012) and Proximal Gradient-TD (Liu et al., 2016). Because Gradient-DD modifies the GTD2 objective only by adding a squared-bias term, it may be extended and combined with these other methods, potentially broadening its utility for more complicated tasks.

In this work we have focused on value function prediction in the two simple cases of tabular representations and linear approximation. An especially interesting direction for future study will be the application of Gradient-DD learning to tasks requiring more complex representations, including neural network implementations. Such approaches are especially useful in cases where state spaces are large, and indeed we have found in our results that Gradient-DD seems to confer the greatest advantage over other methods in such cases. Intuitively, we expect that this is because the difference between the optimal update direction and that chosen by gradient descent becomes greater in higher-dimensional spaces (cf. Fig. 1). This performance benefit in large state spaces suggests that Gradient-DD may be of practical use for these more challenging cases.

6.1 ON THE EQUIVALENCE OF EQNS. (7) & (8)

The Karush-Kuhn-Tucker conditions of Eqn. (8) are the following system of equations:

(d/dw) J(w) + κ (d/dw)(‖Vw − Vwn−1‖²_D − µ) = 0;
κ(‖Vw − Vwn−1‖²_D − µ) = 0;
‖Vw − Vwn−1‖²_D ≤ µ;
κ ≥ 0.

These equations are equivalent to

(d/dw) J(w) + κ (d/dw)‖Vw − Vwn−1‖²_D = 0 and κ > 0, if ‖Vw − Vwn−1‖²_D = µ;
(d/dw) J(w) = 0 and κ = 0, if ‖Vw − Vwn−1‖²_D < µ.

Thus, for any µ > 0, there exists a κ ≥ 0 such that (d/dw) J(w) + κ (d/dw)‖Vw − Vwn−1‖²_D = 0.

6.2 EIGENVALUES OF J

Let λ be an eigenvalue of the matrix J. We have that

|λI − J| = det[ λI + √ζ G   κH ; √ζ α⁻¹ G   λI + α⁻¹(I + κH) ]
         = det[ λI + √ζ G   κH ; −λα⁻¹ I   λI + α⁻¹ I ]
         = det[ λI + √ζ G   κH ; 0   λI + α⁻¹ I + κα⁻¹λ(λI + √ζ G)⁻¹H ]
         = |(λI + √ζ G)(λI + α⁻¹ I) + κα⁻¹λH|.

From the assumption E(xn xn^T) = I and the definition of H, some eigenvalues λ of the matrix J are solutions to

|λI − J| = (λ + λG)(λ + α⁻¹) = 0,

and the other eigenvalues λ of J are solutions to

|λI − J| = (λ + λG)(λ + α⁻¹) + κα⁻¹λ = λ² + [α⁻¹(1 + κ) + λG]λ + α⁻¹λG = 0.

Note that λG > 0. The pair of solutions to the equation above is

λ = −(1/2)[α⁻¹(1 + κ) + λG] ± (1/2)√{[α⁻¹(1 + κ) + λG]² − 4α⁻¹λG}
  = −(1/2)[α⁻¹(1 + κ) + λG] ± (1/2)√{[α⁻¹(1 + κ) − λG]² + 4α⁻¹λGκ}.
Thus, the smaller eigenvalue of each pair is

λm = −(1/2)[α⁻¹(1 + κ) + λG] − (1/2)√{[α⁻¹(1 + κ) − λG]² + 4α⁻¹λGκ}
   < −(1/2)[α⁻¹(1 + κ) + λG] − (1/2)√{[α⁻¹(1 + κ) − λG]²},

where the inequality follows from λG > 0. When α⁻¹(1 + κ) − λG > 0, then

λm < −(1/2)[α⁻¹(1 + κ) + λG] − (1/2)(α⁻¹(1 + κ) − λG) = −α⁻¹(1 + κ) < −λG.

When α⁻¹(1 + κ) − λG ≤ 0, then

λm < −(1/2)[α⁻¹(1 + κ) + λG] + (1/2)(α⁻¹(1 + κ) − λG) = −λG.

CONVERGENCE WITH CONSTANT STEP SIZES

Finally, we apply the ODE method of stochastic approximation to obtain a convergence result.

Theorem 1 Consider the update rules (10) with constant step sizes κ, α and β satisfying κ ≥ 0, β = ζα, ζ > 0, α ∈ (0, 1) and β > 0. Let the TD fixed point be w∗, such that Vw∗ = ΠBVw∗. Suppose that (A1) (xn, rn, xn+1) is an i.i.d. sequence with uniformly bounded second moments, and (A2) E[(xn − γxn+1)xn^T] and E(xn xn^T) are non-singular. Then for any ε > 0, there exists b1 < ∞ such that

lim sup_{n→∞} P(‖wn − w∗‖ > ε) ≤ b1 α.

Proof. Because the step sizes are constant, we write κn = κ and αn = α. Eqn. (12) can then be written as

(I + κHn)(ρn+1 − ρn) − κHn(ρn+1 − 2ρn + ρn−1) = −√ζ α(Gn ρn − gn+1).   (A.1)

Denoting ψn+1 = α⁻¹(ρn+1 − ρn), Eqn. (A.1) is rewritten as

[ ρn+1 − ρn ; ψn+1 − ψn ] = α [ I + κHn   −καHn ; I   −αI ]⁻¹ [ −√ζ(Gn ρn − gn+1) ; ψn ]
                          = α [ −√ζ Gn   −κHn ; −√ζ α⁻¹ Gn   −α⁻¹(I + κHn) ] [ ρn ; ψn ] + α [ √ζ gn+1 ; √ζ α⁻¹ gn+1 ],   (A.2)

where the second step uses

[ I + κHn   −καHn ; I   −αI ]⁻¹ = [ I   −κHn ; α⁻¹I   −α⁻¹(I + κHn) ].

Denoting G = E(Gn), g = E(gn) and H = E(Hn), the TD fixed point of Eqn. (A.1) is given by

−Gρ + g = 0.   (A.3)

We apply the ordinary differential equation approach to stochastic approximation (Theorem 2.3 of Borkar & Meyn (2000), restated as Lemma 1 below) to Eqn. (A.2). Note that Sutton et al. (2009a) and Sutton et al. (2009b) also applied Theorem 2.3 of Borkar & Meyn (2000) to gradient-descent temporal-difference learning to obtain their convergence results. To simplify notation, denote

Jn = [ −√ζ Gn   −κHn ; −√ζ α⁻¹ Gn   −α⁻¹(I + κHn) ],   J = [ −√ζ G   −κH ; −√ζ α⁻¹ G   −α⁻¹(I + κH) ],
yn = [ ρn ; ψn ],   hn = [ √ζ gn+1 ; √ζ α⁻¹ gn+1 ],   h = [ √ζ g ; √ζ α⁻¹ g ].

Eqn. (A.2) is then rewritten as

yn+1 = yn + α(f(yn) + h + Mn+1),   (A.4)

where f(yn) = J yn and Mn+1 = (Jn − J)yn + hn − h. We now verify conditions (c1)-(c4) of Lemma 1. First, Condition (c1) is satisfied under the assumption of constant step sizes. Second, f(y) is Lipschitz and f∞(y) = Gy. Following Sutton et al. (2009a), Assumption A2 implies that the real parts of all the eigenvalues of G are positive. Therefore, Condition (c2) is satisfied. Because E(Mn+1|Fn) = 0, where Fn = σ(yi, Mi, i ≤ n), the sequence {Mn, Fn} is a martingale difference sequence. Moreover,

‖Mn+1‖² ≤ 2(‖Jn − J‖²‖yn‖² + ‖hn − h‖²).   (A.5)

From Assumption A1 and Eqn. (A.5), it follows that there are constants cj and ch such that E(‖Jn − J‖²|Fn) ≤ cj and E(‖hn+1 − h‖²) ≤ ch, so that E(‖Mn+1‖²|Fn) ≤ c0(1 + ‖yn‖²) for some c0 < ∞. Thus, Condition (c3) is satisfied. Finally, Condition (c4) is satisfied by noting that y∗ = G⁻¹g is the unique globally asymptotically stable equilibrium.

Theorem 1 bounds the estimation error of w in probability. Note that the convergence of Gradient-DD learning provided in Theorem 1 is a somewhat weaker result than the statement that wn → w∗ with probability 1 as n → ∞. The technical reason for this is the condition on the step sizes. In Theorem 1, we consider the case of constant step sizes, with αn = α and κn = κ. This restriction is imposed so that Eqn.
(12) can be written as a system of first-order difference equations, which cannot be done rigorously when step sizes are tapered as in Sutton et al. (2009b). As shown below, however, we find empirically in numerical experiments that the algorithm does in fact converge with tapered step sizes, and even obtains much better performance in this case than with fixed step sizes.

AN ODE RESULT ON STOCHASTIC APPROXIMATION

We introduce an ODE result on stochastic approximation in the following lemma, and then prove Theorem 1 by applying this result.

Lemma 1 (Theorem 2.3 of Borkar & Meyn (2000)) Consider the stochastic approximation algorithm described by the d-dimensional recursion

yn+1 = yn + an[f(yn) + Mn+1].

Suppose the following conditions hold:
(c1) The step-size sequence {an} satisfies, for some constants 0 < α < ᾱ < 1, α < an < ᾱ;
(c2) The function f is Lipschitz, and there exists a function f∞ such that lim_{r→∞} fr(y) = f∞(y), where the scaled function fr : Rd → Rd is given by fr(y) = f(ry)/r. Furthermore, the ODE ẏ = f∞(y) has the origin as a globally asymptotically stable equilibrium;
(c3) The sequence {Mn, Fn}, with Fn = σ(yi, Mi, i ≤ n), is a martingale difference sequence. Moreover, for some c0 < ∞ and any initial condition y0, E(‖Mn+1‖²|Fn) ≤ c0(1 + ‖yn‖²);
(c4) The ODE ẏ(t) = f(y(t)) has a unique globally asymptotically stable equilibrium y∗.

Then for any ε > 0, there exists b1 < ∞ such that

lim sup_{n→∞} P(‖yn − y∗‖ > ε) ≤ b1 ᾱ.

6.3 ADDITIONAL EMPIRICAL RESULTS
1. What is the main contribution of the paper, and how does it propose to improve the convergence rate of GTD2?
2. What are the concerns regarding the soundness of the proposed method, and how might they be addressed?
3. How does the paper position itself within the existing literature on GTD2 and related methods, and what are some potential improvements that could be made in this regard?
4. What are some specific areas where the clarity and quality of the paper's writing and analysis could be improved?
5. Are there any suggestions or ideas for alternative approaches or experiments that could help better understand and support the paper's claims?
Review
Review EDIT: After reading the other reviews, the author's responses, and thinking more about the concerns raised, I have increased my score. However, I still recommend rejection because of questions around the hyperparameters used in the experiments. Summary: The paper introduces a regularized mean squared projected Bellman error objective function where the regularizer penalizes large changes to the estimated value function. This regularized objective function is used to derive a GTD2-like algorithm where updates to the value function weights are penalized. The paper claims an improved rate of convergence, and empirically investigates the proposed algorithm on tabular random walks, the Boyan Chain environment, and Baird’s counterexample. Pros: paper proposes interesting new method paper includes theoretical argument for proposed method paper empirically investigates proposed method Cons: concerns about soundness of method concerns about originality, clarity, and quality Decision: At the present time I recommend rejecting the paper until the following concerns can be addressed. Soundness: Does the proposed modification to the MSPBE change the underlying problem being solved? Is the solution to the regularized MSPBE the same as the solution to the original MSPBE, even with function approximation? The fact that Gradient-DD(4) did not converge on the Boyan chain is very concerning. The motivation for GTD2 is to converge when used off-policy with function approximation. If the proposed modifications lose the convergence guarantee then why not just use conventional TD off-policy? Originality: There are no references to prior work on convergence rates of GTD2 in section 4. The analysis seems like it was based on an existing analysis, but nothing is cited. There is no explicit related work section, which would help clarify the novelty of contributions and would help position the paper within the existing literature. Clarity: Section 4 (improved convergence rate) is poorly explained, and very difficult to follow. Section 5 doesn't mention beta—the step size for the auxiliary weights. Earlier in the paper kappa is referred to as a regularization parameter, but in section 5 it's called a step size parameter and annealed? There are several statements that don’t make sense to me: “the regularization term uses the previous value function estimate to avoid large biases in the updating process.” The use of the word “biases” here is confusing and conflicts with the statistical notion of bias. Updates to weights would generally not be considered “biases” in the statistical sense. However, the regularization term can be thought of as biasing the optimization towards solutions with certain qualities. - "[importance sampling] is useful for decreasing the variance of parameter updates" Using importance sampling to correct the difference between the target and behaviour policies usually increases the variance of parameter updates. IS shrinks updates that occur more often than they would when following the target policy, and enlarges updates that occur less often than they would when following the target policy. The average distance from the mean update can be larger than without importance sampling. - "In effect, the regularization term encourages re-experience around the estimate at previous time step, especially when the state space is large." What does “re-experience” mean? 
- “accelerate the GTD2 algorithm” The word “accelerate” is used several times in the paper to describe the Gradient-DD update, but the idea of penalizing large updates to the value function weights conflicts with the conventional meaning of acceleration in optimization (using past information to make larger changes to weights as is done with Nesterov acceleration, momentum, ADAM, etc.), which is confusing. Penalizing updates to the value function weights would actually slow the changing of the value function weights, not accelerate it. This might allow the second set of weights to learn better estimates of the expected TD error (because the expected TD error is changing as the value function weights change), which could account for the performance increase over GTD2. Quality: Best performance in the final episode is not an appropriate way to determine the "best-performing" parameter settings when the paper makes claims about the speed of learning of various methods. The parameter settings that result in the lowest error at the end of training will not in general be the parameter settings that result in the fastest learning (i.e., smallest area under the curve). If the paper is going to make claims about learning speed, then the parameter settings should be selected based on the smallest area under the curve. This might be why TDC performs so poorly in these experiments when it out-performs GTD2 in other papers (see Ghiassian et al. 2018; TDC is called GTD in that paper) and intuitively should perform similarly to conventional TD in early learning when the correction weights are near 0. This seems like a serious issue to me; the experiments may need to be re-run with different parameter settings that better match the claims the paper is making about learning speed. Suggestions for improvement: In addition to addressing the concerns mentioned above, consider adding a related work section that explicitly compares and contrasts the most relevant related methods. Consider motivating Gradient-DD more along the lines of TRPO, REPS, and other algorithms that penalize large changes to the weights being learned instead of motivating it as accelerating GTD2. Actually, it would be better to do some simple experiments to test why the regularization improves performance over GTD2. Does it result in the second set of weights learning the expected TD error with greater accuracy? Can the same effect be achieved by a two timescale approach where the value function weights are updated with a smaller step size than the second set of weights? If not, it would provide more support for the proposed method. Despite the concerns listed in this review, I actually think this paper has a very interesting premise and deserves further study and investigation. Misc. details: A sentence trails off in the first paragraph of the introduction. ”where this term originates from the squared bias term in the objective (6)” Equation 6 seems to be the GTD2 update rules, not the objective function. References: Ghiassian, S., Patterson, A., White, M., Sutton, R. S., & White, A. (2018). Online off-policy prediction. arXiv preprint arXiv:1811.02597.
ICLR
Title Gradient descent temporal difference-difference learning Abstract Off-policy algorithms, in which a behavior policy differs from the target policy and is used to gain experience for learning, have proven to be of great practical value in reinforcement learning. However, even for simple convex problems such as linear value function approximation, these algorithms are not guaranteed to be stable. To address this, alternative algorithms that are provably convergent in such cases have been introduced, the most well known being gradient descent temporal difference (GTD) learning. This algorithm and others like it, however, tend to converge much more slowly than conventional temporal difference learning. In this paper we propose gradient descent temporal difference-difference (GradientDD) learning in order to improve GTD learning by introducing second-order differences in successive parameter updates. We investigate this algorithm in the framework of linear value function approximation, analytically showing its improvement over GTD learning. Studying the model empirically on the random walk and Boyan-chain prediction tasks, we find substantial improvement over GTD learning and, in several cases, better performance even than conventional TD learning. 1 INTRODUCTION Off-policy algorithms for value function learning enable an agent to use a behavior policy that differs from the target policy in order to gain experience for learning. However, because off-policy methods learn a value function for a target policy given data due to a different behavior policy, they often exhibit greater variance in parameter updates. When applied to problems involving function approximation, off-policy methods are slower to converge than on-policy methods and may even diverge (Baird, 1995; Sutton & Barto, 2018). Two general approaches have been investigated to address the challenge of developing stable and effective off-policy temporal-difference algorithms. One approach is to use importance sampling methods to warp the update distribution back to the on-policy distribution (Precup et al., 2000; Mahmood et al., 2014). This approach is useful for decreasing the variance of parameter updates, but it does not address stability issues. The second main approach to addressing the challenge of off-policy learning is to develop true gradient descent-based methods that are guaranteed to be stable regardless of the update distribution. Sutton et al. (2009a;b) proposed the first off-policy gradientdescent-based temporal difference (GTD and GTD2, respectively) algorithms. These algorithms are guaranteed to be stable, with computational complexity scaling linearly with the size of the function approximator. Empirically, however, their convergence is much slower than conventional temporal difference (TD) learning, limiting their practical utility (Ghiassian et al., 2020; White & White, 2016). Building on this work, extensions to the GTD family of algorithms (see Ghiassian et al. (2018) for a review) have allowed for incorporating eligibility traces (Maei & Sutton, 2010; Geist & Scherrer, 2014), non-linear function approximation such as with a neural network (Maei, 2011), and reformulation of the optimization as a saddle point problem (Liu et al., 2015; Du et al., 2017). However, due to their slow convergence, none of these stable off-policy methods are commonly used in practice. In this work, we introduce a new gradient descent algorithm for temporal difference learning with linear value function approximation. 
This algorithm, which we call gradient descent temporal difference-difference (Gradient-DD) learning, is an acceleration technique that employs second- order differences in successive parameter updates. The basic idea of Gradient-DD is to modify the error objective function by additionally considering the prediction error obtained in last time step, then to derive a gradient-descent algorithm based on this modified objective function. In addition to exploiting the Bellman equation to get the solution, this modified error objective function avoids drastic changes in the value function estimate by encouraging local search around the current estimate. Algorithmically, the Gradient-DD approach only adds an additional term to the update rule of the GTD2 method, and the extra computational cost is negligible. We show mathematically that applying this method significantly improves the convergence rate relative to the GTD2 method for linear function approximation. This result is supported by numerical experiments, which also show that Gradient-DD obtains better convergence in many cases than conventional TD learning. 1.1 RELATED WORK In related approaches to ours, some previous studies have attempted to improve Gradient-TD algorithms by adding regularization terms to the objective function. Liu et al. (2012) have used l1 regularization on weights to learn sparse representations of value functions, and Ghiassian et al. (2020) has used l2 regularization on weights. Unlike these references, our approach modifies the error objective function by regularizing the evaluation error obtained in the most recent time step. With this modification, our method provides a learning rule that contains second-order differences in successive parameter updates. Our approach is similar to trust region policy optimization (Peters & Schaal, 2008; Schulman et al., 2015) or relative entropy policy search (Peters et al., 2010), which penalize large changes being learned in policy learning. In these methods, constrained optimization is used to update the policy by considering the constraint on some measure between the new policy and the old policy. Here, however, our aim here is to look for the optimal value function, and the regularization term uses the previous value function estimate to avoid drastic changes in the updating process. 2 GRADIENT DESCENT METHOD FOR OFF-POLICY TEMPORAL DIFFERENCE LEARNING 2.1 PROBLEM DEFINITION AND BACKGROUND In this section, we formalize the problem of learning the value function for a given policy under the Markov Decision Process (MDP) framework. In this framework, the agent interacts with the environment over a sequence of discrete time steps, t = 1, 2, . . .. At each time step the agent observes a partial summary of the state st ∈ S and selects an action at ∈ A. In response, the environment emits a reward rt ∈ R and transitions the agent to its next state st+1 ∈ S. The state and action sets are finite. State transitions are stochastic and dependent on the immediately preceding state and action. Rewards are stochastic and dependent on the preceding state and action, as well as on the next state. The process generating the agent’s actions is termed the behavior policy. In off-policy learning, this behavior policy is in general different from the target policy π : S → A. The objective is to learn an approximation to the state-value function under the target policy in a particular environment: V (s) = Eπ [ ∞∑ t=1 γt−1rt|s1 = s ] , (1) where γ ∈ [0, 1) is the discount rate. 
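As a sanity check on the definition in Eqn. (1), the value of a state can be estimated directly by averaging sampled discounted returns. The short sketch below is illustrative only and not part of the paper; `sample_rewards` is a hypothetical helper that runs one episode from state s under the target policy and returns the observed rewards r1, r2, ….

```python
import numpy as np

def monte_carlo_value(sample_rewards, s, gamma, n_episodes=1000, seed=0):
    """Estimate V(s) = E[sum_t gamma^(t-1) r_t | s_1 = s] by Monte Carlo."""
    rng = np.random.default_rng(seed)
    returns = []
    for _ in range(n_episodes):
        rewards = sample_rewards(s, rng)        # r_1, r_2, ... from one episode
        # enumerate index 0 corresponds to t = 1, so the discount is gamma^(t-1)
        g = sum(gamma ** t * r for t, r in enumerate(rewards))
        returns.append(g)
    return float(np.mean(returns))
```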
In problems for which the state space is large, it is practical to approximate the value function. In this paper we consider linear function approximation, where states are mapped to feature vectors with fewer components than the number of states. Specifically, for each state s ∈ S there is a corresponding feature vector x(s) ∈ Rp, with p ≤ |S|, such that the approximate value function is given by

Vw(s) := w^T x(s).   (2)

The goal is then to learn the parameters w such that Vw(s) ≈ V(s).

2.2 GRADIENT TEMPORAL DIFFERENCE LEARNING

A major breakthrough for the study of the convergence properties of MDP systems came with the introduction of the GTD and GTD2 learning algorithms (Sutton et al., 2009a;b). We begin by briefly recapitulating the GTD algorithms, which we will then extend in the following sections. To begin, we introduce the Bellman operator B such that the true value function V ∈ R^|S| satisfies the Bellman equation:

V = R + γPV =: BV,

where R is the reward vector with components E(rn+1|sn = s), and P is a matrix of state transition probabilities. In temporal difference methods, an appropriate objective function should minimize the difference between the approximate value function and the solution to the Bellman equation.

Having defined the Bellman operator, we next introduce the projection operator Π, which takes any value function V and projects it to the nearest value function within the space of approximate value functions of the form (2). Letting X be the matrix whose rows are x(s), the approximate value function can be expressed as Vw = Xw. We will also assume that there exists a limiting probability distribution such that ds = lim_{n→∞} p(sn = s) (or, in the episodic case, ds is the proportion of time steps spent in state s). The projection operator is then given by

Π = X(X^T D X)^{-1} X^T D,

where the matrix D is diagonal, with diagonal elements ds. The natural measure of how closely the approximation Vw satisfies the Bellman equation is the mean-squared Bellman error:

MSBE(w) = ‖Vw − BVw‖²_D,   (3)

where the norm is weighted by D, such that ‖V‖²_D = V^T D V. However, because the Bellman operator follows the underlying state dynamics of the Markov chain, irrespective of the structure of the linear function approximator, BVw will typically not be representable as Vw for any w. An alternative objective function, therefore, is the mean squared projected Bellman error (MSPBE), which we define as

J(w) = ‖Vw − ΠBVw‖²_D.   (4)

Following (Sutton et al., 2009b), our objective is to minimize this error measure. As usual in stochastic gradient descent, the weights at each time step are then updated by ∆w = −α∇wJ(w), where α > 0, and

−(1/2)∇wJ(w) = −E[(γxn+1 − xn)xn^T][E(xn xn^T)]^{-1}E(δn xn) ≈ −E[(γxn+1 − xn)xn^T]η.   (5)

For notational simplicity, we have denoted the feature vector associated with sn as xn = x(sn). We have also introduced the temporal difference error δn = rn + (γxn+1 − xn)^T wn, as well as η, a linear predictor to approximate [E(xn xn^T)]^{-1}E(δn xn). Because the factors in Eqn. (5) can be directly sampled, the resulting updates in each step are

δn = rn + (γxn+1 − xn)^T wn,
ηn+1 = ηn + βn(δn − xn^T ηn)xn,
wn+1 = wn − αn(γxn+1 − xn)(xn^T ηn).   (6)

These updates define the GTD2 learning algorithm, which we will build upon in the following section.

3 GRADIENT DESCENT TEMPORAL DIFFERENCE-DIFFERENCE LEARNING

In order to improve the GTD2 algorithm described above, in this section we modify the objective function via additionally considering the approximation error Vw − Vwn−1 given the previous time step n − 1.
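Before turning to the modification, it may help to see the GTD2 updates in Eqn. (6) above written out in code. The following is a minimal NumPy sketch and not part of the paper; the function name and the convention of computing both updates from the pre-update weights are illustrative assumptions.

```python
import numpy as np

def gtd2_step(w, eta, x, r, x_next, alpha, beta, gamma):
    """One GTD2 update (Eqn. (6)).

    w      -- value-function weights
    eta    -- auxiliary predictor approximating E[x x^T]^{-1} E[delta x]
    x      -- feature vector x_n of the current state
    x_next -- feature vector x_{n+1} of the next state
    """
    delta = r + (gamma * x_next - x) @ w                  # TD error delta_n
    w_new = w - alpha * (gamma * x_next - x) * (x @ eta)  # value-weight update
    eta_new = eta + beta * (delta - x @ eta) * x          # auxiliary-weight update
    return w_new, eta_new
```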
Specifically, we modify Eqn. (4) as follows:

JGDD(w|wn−1) = J(w) + κ‖Vw − Vwn−1‖²_D,   (7)

where κ ≥ 0 is a parameter of the regularization.

[Figure 1: Schematic diagram of Gradient-DD learning with w ∈ R². Rather than updating w directly along the gradient of the MSPBE (arrow), the update rule selects wn that minimizes the MSPBE while satisfying the constraint ‖Vw − Vwn−1‖²_D ≤ µ (shaded ellipse).]

Minimizing Eqn. (7) is equivalent to the following optimization:

arg min_w J(w)   s.t. ‖Vw − Vwn−1‖²_D ≤ µ,   (8)

where µ > 0 is a parameter which becomes large when κ is small, so that the MSPBE objective is recovered as µ → ∞, equivalent to κ → 0 in Eqn. (7). We show in the Appendix that for any µ > 0, there exists a κ ≥ 0 such that the solution of Eqn. (7) and that of Eqn. (8) are the same. Eqns. (7) and (8) represent a tradeoff between minimizing the MSPBE and preventing the estimated value function from changing too drastically. Rather than simply minimizing the projected Bellman error, the agent makes use of its most recent estimate when looking for the solution. Figure 1 gives a schematic view of the effect of the regularization: rather than directly following the direction of the MSPBE gradient, the update chooses a w that minimizes the MSPBE while obeying the constraint that the estimated value function should not change too greatly. In effect, the regularization term encourages searching around the estimate from the previous time step, especially when the state space is large.

With these considerations in mind, the negative gradient of JGDD(w|wn−1) is

−(1/2)∇wJGDD(w|wn−1) = −E[(γxn+1 − xn)xn^T][E(xn xn^T)]^{-1}E(δn xn) − κE[(xn^T wn − xn^T wn−1)xn]
                      ≈ −E[(γxn+1 − xn)xn^T]ηn − κE[(xn^T wn − xn^T wn−1)xn].   (9)

Because the terms in Eqn. (9) can be directly sampled, the stochastic gradient descent updates are given by

δn = rn + (γxn+1 − xn)^T wn,
ηn+1 = ηn + βn(δn − xn^T ηn)xn,
wn+1 = wn − κn(xn^T wn − xn^T wn−1)xn − αn(γxn+1 − xn)(xn^T ηn).   (10)

These update equations define the Gradient-DD method, in which the GTD2 update equations (6) are generalized by including a second-order update term in the third update equation, where this term originates from the squared-bias term in the objective (7). In the following sections, we shall analytically and numerically investigate the convergence and performance of Gradient-DD learning.

4 IMPROVED CONVERGENCE RATE

In this section we analyze the convergence rate of Gradient-DD learning. Note that the second-order update in the last line of Eqn. (10) can be rewritten as a system of first-order difference equations:

(I + κn xn xn^T)(wn+1 − wn) = κn xn xn^T (un+1 − un) − αn(γxn+1 − xn)(xn^T ηn);
un+1 = wn+1 − wn.   (11)

Let βn = ζαn, ζ > 0. We consider constant step sizes in the updates, i.e., κn = κ and αn = α. Denote

Hn = [ 0   0 ; 0   xn xn^T ]   and   Gn = [ √ζ xn xn^T   xn(xn − γxn+1)^T ; −(xn − γxn+1)xn^T   0 ].

We rewrite the update rules of the two iterations in Eqn. (11) as a single iteration in a combined parameter vector with 2p components, ρn = (ηn^T/√ζ, wn^T)^T, together with a reward-related vector with 2p components, gn+1 = (rn xn^T, 0^T)^T, as follows:

ρn+1 = ρn − κHn(ρn − ρn−1) + √ζ α(Gn ρn + gn+1).   (12)

Denoting ψn+1 = α⁻¹(ρn+1 − ρn), Eqn. (12) is rewritten as

[ ρn+1 − ρn ; ψn+1 − ψn ] = α [ I + κHn   −καHn ; I   −αI ]⁻¹ [ −√ζ(Gn ρn − gn+1) ; ψn ]
                          = α [ −√ζ Gn   −κHn ; −√ζ α⁻¹ Gn   −α⁻¹(I + κHn) ] [ ρn ; ψn ] + α [ √ζ gn+1 ; √ζ α⁻¹ gn+1 ],   (13)

where the second step uses

[ I + κHn   −καHn ; I   −αI ]⁻¹ = [ I   −κHn ; α⁻¹I   −α⁻¹(I + κHn) ].

Denote Jn = [ −√ζ Gn   −κHn ; −√ζ α⁻¹ Gn   −α⁻¹(I + κHn) ]. Eqn.
(13) tells us that Jn is the update matrix of the Gradient-DD algorithm. (Note that Gn is the update matrix of the GTD2 algorithm.) Therefore, assuming that the stochastic approximation in Eqn. (13) converges to the solution of an associated ordinary differential equation (ODE) under some regularity conditions (a convergence property is provided in the Appendix following Borkar & Meyn (2000)), we can analyze the improved convergence rate of Gradient-DD learning by comparing the eigenvalues of the matrices E(Gn), denoted by G, and E(Jn), denoted by J (Atkinson et al., 2008). Clearly,

J = [ −√ζ G   −κH ; −√ζ α⁻¹ G   −α⁻¹(I + κH) ],

where H = E(Hn). To simplify, we consider the case in which E(xn xn^T) = I. Let λG be a real eigenvalue of the matrix √ζ G. (Note that G is defined here with opposite sign relative to G in Maei (2011).) From Maei (2011), the eigenvalues of the matrix −G are strictly negative; in other words, λG > 0. Let λ be an eigenvalue of the matrix J, i.e., a solution to the equation

|λI − J| = (λ + λG)(λ + α⁻¹) + κα⁻¹λ = λ² + [α⁻¹(1 + κ) + λG]λ + α⁻¹λG = 0.   (14)

The smaller eigenvalues λm of the pairs of solutions to Eqn. (14) satisfy λm < −λG, where details of the above derivation are given in the Appendix. This explains the enhanced speed of convergence of Gradient-DD learning. We shall illustrate this enhanced speed of convergence in numerical experiments in Section 5. Additionally, we also show a convergence property of Gradient-DD under constant step sizes by applying the ordinary differential equation method of stochastic approximation (Borkar & Meyn, 2000). Let the TD fixed point be w∗, such that Vw∗ = ΠBVw∗. Under some conditions, we prove that, for any ε > 0, there exists b1 < ∞ such that lim sup_{n→∞} P(‖wn − w∗‖ > ε) ≤ b1α. Details are provided in the Appendix. For tapered step sizes, which would be necessary to obtain an even stronger convergence proof, the analysis framework of Borkar & Meyn (2000) does not apply to the Gradient-DD algorithm. Although a theoretical investigation of convergence under tapered step sizes remains to be carried out, we find empirically in numerical experiments that the algorithm does in fact converge with tapered step sizes, and even obtains much better performance in this case than with fixed step sizes.

5 EMPIRICAL STUDY

In this section, we assess the practical utility of the Gradient-DD method in numerical experiments. To validate the performance of Gradient-DD learning, we compare it with GTD2 learning, TDC learning (TD with gradient correction; Sutton et al., 2009b), TD learning, and Emphatic TD learning (Sutton & Mahmood, 2016), using a tabular representation in a random-walk task and a linear representation in the Boyan-chain task. For each method and each task, we performed a scan over the step sizes αn and the parameter κ so that the overall performance of the different algorithms can be compared. We considered two choices of step size sequence {αn}:
• (Case 1) αn is constant, i.e., αn = α0.
• (Case 2) αn is tapered according to the schedule αn = α0(10³ + 1)/(10³ + n).
We set κ = cα0, where c = 1, 2, 4. Additionally, we also allow κ to depend on n and consider Case 3: αn is tapered as in Case 2, but κn = cαn. To simplify the presentation, the results of Case 3 are reported in the Appendix. To begin, we set βn = αn, and later allow βn = ζαn with ζ ∈ {1/4, 1/2, 1, 2} in order to investigate the effect of the two-timescale approach of gradient-based TD algorithms on Gradient-DD.
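For concreteness, the following is a minimal NumPy sketch (not from the paper) of one Gradient-DD step implementing Eqn. (10), mirroring the GTD2 sketch given earlier; the only change is the extra κ term, which pulls the new value estimate at xn toward the estimate under the previous weights. Function and variable names are illustrative.

```python
import numpy as np

def gradient_dd_step(w, w_prev, eta, x, r, x_next, alpha, beta, kappa, gamma):
    """One Gradient-DD update (Eqn. (10)).

    Identical to the GTD2 step except for the kappa term, which penalises the
    change x^T w - x^T w_prev of the value estimate at the current features.
    """
    delta = r + (gamma * x_next - x) @ w            # TD error delta_n
    w_new = (w
             - kappa * (x @ w - x @ w_prev) * x     # second-order difference term
             - alpha * (gamma * x_next - x) * (x @ eta))
    eta_new = eta + beta * (delta - x @ eta) * x
    return w_new, w, eta_new                        # returned w serves as w_prev next step
```

In a learning loop one would keep the previous weight vector alongside the current one, e.g. `w, w_prev, eta = gradient_dd_step(w, w_prev, eta, x, r, x_next, alpha, beta, kappa, gamma)` at every transition, with `w_prev` initialised equal to `w`.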
1. What is the main contribution of the paper, and what are the strengths and weaknesses of the proposed method?
2. How does the reviewer assess the significance and impact of the paper's findings, particularly in comparison to existing works?
3. What are the limitations and potential improvements of the proposed algorithm, especially regarding its applicability to various environments and function approximations?
4. Are there any concerns about the experimental design, such as the number of independent runs, statistical significance testing, and representation of results?
5. How does the reviewer evaluate the novelty and generalizability of the proposed approach, particularly in relation to previous methods like semi-gradient TD?
6. Are there any suggestions for additional experiments or analyses that could further support the paper's claims or provide new insights?
Review
Review Summary of Contributions The paper proposes the gradient descent TD difference learning (GDD) algorithm which adds a term to the MSPBE objective to constrain how quickly a value function can change. They argue that their approach has a quicker convergence rate, and empirically demonstrate in several examples with linear function approximation that it substantially improves over existing gradient-based TD methods. Review I like the simplicity of the proposed method, and its intuitive interpretation as a value-based trust region. However, I have the following questions and concerns: There doesn't seem to be any information regarding how many independent runs were performed in the empirical evaluation, and there was no no reported statistical significance testing. Can the authors clarify this information, and comment on the significance of the results? While it led to improvements over GTD2, it largely didn't improve over regular (semi-gradient) TD apart from Baird's counterexample, which was designed to make TD fail. As such, I don't think the addition of a new parameter was convincingly justified. Some of the results seemed to suggest that the improvement grew as the state space/complexity increased, that it may be the case that the evaluation falls a bit short on exploring more complex environments. While the breadth of the ablation studies is really nice, we observe similar trends in many neighbouring figures that the space in the main text from showcasing the many different configurations could be summarized with representative examples, and the additional space could have been used to provide some additional experiments/insights (like those suggested in the discussion). From how modular the addition of the term is to the objective, have the authors tried incorporating the regularization to semi-gradient TD? Is there anything about the semi-gradient update that bars its use? TD generally performed really well in the paper's evaluation (outside of Baird's counterexample) that it would make a stronger case if the extension was demonstrated to be more generally applicable, and that it consistently improved over the methods it was applied to. This sort of ties into what was described in 2), where what was presented seems to fall a bit short, and how the space could have showcased a bit more. While the paper's focus was on the case of linear function approximation, can the authors comment on how readily the approach can be extended to the non-linear case? GTD methods have not seen as much adoption as their approximate dynamic programming counterparts when combining TD methods with non-linear function approximation, that it can raise questions as to how the methods scale to more complicated settings. Given the above, I am erring toward rejection at this time. I think 1) is a rather significant issue that needs to be addressed, and I'm willing to raise my score if that, and my other concerns, can be sufficiently addressed. ----- Post Discussion ----- Taking the other reviews and the authors' response into account, I still maintain my score. While I agree that it's good to be thorough in something clear and simple, it can still be done to a point of redundancy, and consequently seem less thorough in the overall picture and claims made. I'm still largely unsure on the choice to only apply the supposedly modular extension to GTD2, and not try it with TD which seemed like a clearer winner (apart from Baird's counterexample). 
As others suggested, there are additional methods which might be good to compare to, and other evaluation metrics might make more sense for the claims being made. Many of my concerns were largely brushed off as future work, that little got addressed- without having to carry out the experiments, high level comments/current thoughts could be provided regarding how readily the approach can extend to the scenarios suggested, or if there are nuances that need to be worked out, etc.
ICLR
Title Gradient descent temporal difference-difference learning Abstract Off-policy algorithms, in which a behavior policy differs from the target policy and is used to gain experience for learning, have proven to be of great practical value in reinforcement learning. However, even for simple convex problems such as linear value function approximation, these algorithms are not guaranteed to be stable. To address this, alternative algorithms that are provably convergent in such cases have been introduced, the most well known being gradient descent temporal difference (GTD) learning. This algorithm and others like it, however, tend to converge much more slowly than conventional temporal difference learning. In this paper we propose gradient descent temporal difference-difference (GradientDD) learning in order to improve GTD learning by introducing second-order differences in successive parameter updates. We investigate this algorithm in the framework of linear value function approximation, analytically showing its improvement over GTD learning. Studying the model empirically on the random walk and Boyan-chain prediction tasks, we find substantial improvement over GTD learning and, in several cases, better performance even than conventional TD learning. 1 INTRODUCTION Off-policy algorithms for value function learning enable an agent to use a behavior policy that differs from the target policy in order to gain experience for learning. However, because off-policy methods learn a value function for a target policy given data due to a different behavior policy, they often exhibit greater variance in parameter updates. When applied to problems involving function approximation, off-policy methods are slower to converge than on-policy methods and may even diverge (Baird, 1995; Sutton & Barto, 2018). Two general approaches have been investigated to address the challenge of developing stable and effective off-policy temporal-difference algorithms. One approach is to use importance sampling methods to warp the update distribution back to the on-policy distribution (Precup et al., 2000; Mahmood et al., 2014). This approach is useful for decreasing the variance of parameter updates, but it does not address stability issues. The second main approach to addressing the challenge of off-policy learning is to develop true gradient descent-based methods that are guaranteed to be stable regardless of the update distribution. Sutton et al. (2009a;b) proposed the first off-policy gradientdescent-based temporal difference (GTD and GTD2, respectively) algorithms. These algorithms are guaranteed to be stable, with computational complexity scaling linearly with the size of the function approximator. Empirically, however, their convergence is much slower than conventional temporal difference (TD) learning, limiting their practical utility (Ghiassian et al., 2020; White & White, 2016). Building on this work, extensions to the GTD family of algorithms (see Ghiassian et al. (2018) for a review) have allowed for incorporating eligibility traces (Maei & Sutton, 2010; Geist & Scherrer, 2014), non-linear function approximation such as with a neural network (Maei, 2011), and reformulation of the optimization as a saddle point problem (Liu et al., 2015; Du et al., 2017). However, due to their slow convergence, none of these stable off-policy methods are commonly used in practice. In this work, we introduce a new gradient descent algorithm for temporal difference learning with linear value function approximation. 
This algorithm, which we call gradient descent temporal difference-difference (Gradient-DD) learning, is an acceleration technique that employs second-order differences in successive parameter updates. The basic idea of Gradient-DD is to modify the error objective function by additionally considering the prediction error obtained in the previous time step, and then to derive a gradient-descent algorithm based on this modified objective function. In addition to exploiting the Bellman equation to obtain the solution, this modified error objective function avoids drastic changes in the value function estimate by encouraging local search around the current estimate. Algorithmically, the Gradient-DD approach only adds an additional term to the update rule of the GTD2 method, and the extra computational cost is negligible. We show mathematically that applying this method significantly improves the convergence rate relative to the GTD2 method for linear function approximation. This result is supported by numerical experiments, which also show that Gradient-DD obtains better convergence in many cases than conventional TD learning. 1.1 RELATED WORK In related approaches to ours, some previous studies have attempted to improve Gradient-TD algorithms by adding regularization terms to the objective function. Liu et al. (2012) used l1 regularization on weights to learn sparse representations of value functions, and Ghiassian et al. (2020) used l2 regularization on weights. Unlike these references, our approach modifies the error objective function by regularizing the evaluation error obtained in the most recent time step. With this modification, our method provides a learning rule that contains second-order differences in successive parameter updates. Our approach is similar to trust region policy optimization (Peters & Schaal, 2008; Schulman et al., 2015) or relative entropy policy search (Peters et al., 2010), which penalize large changes to the policy during learning. In these methods, constrained optimization is used to update the policy subject to a constraint on some measure of distance between the new policy and the old policy. Here, however, our aim is to find the optimal value function, and the regularization term uses the previous value function estimate to avoid drastic changes in the updating process. 2 GRADIENT DESCENT METHOD FOR OFF-POLICY TEMPORAL DIFFERENCE LEARNING 2.1 PROBLEM DEFINITION AND BACKGROUND In this section, we formalize the problem of learning the value function for a given policy under the Markov Decision Process (MDP) framework. In this framework, the agent interacts with the environment over a sequence of discrete time steps, t = 1, 2, . . .. At each time step the agent observes a partial summary of the state st ∈ S and selects an action at ∈ A. In response, the environment emits a reward rt ∈ R and transitions the agent to its next state st+1 ∈ S. The state and action sets are finite. State transitions are stochastic and dependent on the immediately preceding state and action. Rewards are stochastic and dependent on the preceding state and action, as well as on the next state. The process generating the agent's actions is termed the behavior policy. In off-policy learning, this behavior policy is in general different from the target policy π : S → A. The objective is to learn an approximation to the state-value function under the target policy in a particular environment: $V(s) = \mathbb{E}_\pi\!\left[ \sum_{t=1}^{\infty} \gamma^{t-1} r_t \,\middle|\, s_1 = s \right]$, (1) where γ ∈ [0, 1) is the discount rate.
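As a small illustration of the quantity in Eqn. (1), the discounted return of a single sampled trajectory can be computed in a few lines; averaging such returns over trajectories drawn from the target policy estimates V(s1). This is only a sketch, and the reward sequence and discount value are made up for illustration.

```python
# Discounted return sum_{t>=1} gamma^{t-1} r_t for one sampled trajectory.
gamma = 0.9
rewards = [0.0, 0.0, 1.0, 0.5]   # illustrative rewards r_1, r_2, ...
G = sum(gamma ** t * r for t, r in enumerate(rewards))
print(G)                          # one Monte Carlo sample of V(s_1)
```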
In problems for which the state space is large, it is practical to approximate the value function. In this paper we consider linear function approximation, where states are mapped to feature vectors with fewer components than the number of states. Specifically, for each state s ∈ S there is a corresponding feature vector x(s) ∈ Rp, with p ≤ |S|, such that the approximate value function is given by Vw(s) := w >x(s). (2) The goal is then to learn the parameters w such that Vw(s) ≈ V (s). 2.2 GRADIENT TEMPORAL DIFFERENCE LEARNING A major breakthrough for the study of the convergence properties of MDP systems came with the introduction of the GTD and GTD2 learning algorithms (Sutton et al., 2009a;b). We begin by briefly recapitulating the GTD algorithms, which we will then extend in the following sections. To begin, we introduce the Bellman operator B such that the true value function V ∈ R|S| satisfies the Bellman equation: V = R + γPV =: BV, where R is the reward vector with components E(rn+1|sn = s), and P is a matrix of state transition probabilities. In temporal difference methods, an appropriate objective function should minimize the difference between the approximate value function and the solution to the Bellman equation. Having defined the Bellman operator, we next introduce the projection operator Π, which takes any value function V and projects it to the nearest value function within the space of approximate value functions of the form (2). Letting X be the matrix whose rows are x(s), the approximate value function can be expressed as Vw = Xw. We will also assume that there exists a limiting probability distribution such that ds = limn→∞ p(sn = s) (or, in the episodic case, ds is the proportion of time steps spent in state s). The projection operator is then given by Π = X(X>DX)−1X>D, where the matrix D is diagonal, with diagonal elements ds. The natural measure of how closely the approximation Vw satisfies the Bellman equation is the mean-squared Bellman error: MSBE(w) = ‖Vw −BVw‖2D, (3) where the norm is weighted by D, such that ‖V‖2D = V>DV. However, because the Bellman operator follows the underlying state dynamics of the Markov chain, irrespective of the structure of the linear function approximator, BVw will typically not be representable as Vw for any w. An alternative objective function, therefore, is the mean squared projected Bellman error (MSPBE), which we define as J(w) = ‖Vw −ΠBVw‖2D. (4) Following (Sutton et al., 2009b), our objective is to minimize this error measure. As usual in stochastic gradient descent, the weights at each time step are then updated by ∆w = −α∇wJ(w), where α > 0, and −1 2 ∇wJ(w) =− E[(γxn+1 − xn)x>n ][E(xnx>n )]−1E(δnxn) ≈− E[(γxn+1 − xn)x>n ]η. (5) For notational simplicity, we have denoted the feature vector associated with sn as xn = x(sn). We have also introduced the temporal difference error δn = rn + (γxn+1 − xn)>wn, as well as η, a linear predictor to approximate [E(xnx>n )] −1E(δnxn). Because the factors in Eqn. (5) can be directly sampled, the resulting updates in each step are δn =rn + (γxn+1 − xn)>wn ηn+1 =ηn + βn(δn − x>n ηn)xn wn+1 =wn − αn(γxn+1 − xn)(x>n ηn). (6) These updates define the GTD2 learning algorithm, which we will build upon in the following section. 3 GRADIENT DESCENT TEMPORAL DIFFERENCE-DIFFERENCE LEARNING In order to improve the GTD2 algorithm described above, in this section we modify the objective function via additionally considering the approximation error Vw−Vwn−1 given the previous time step n− 1. 
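As a reference point for the modification introduced next, the sampled GTD2 updates in Eqn. (6) can be written as a few lines of code. This is a minimal sketch assuming linear features and scalar step sizes; the variable names mirror the symbols above and are not taken from any existing implementation.

```python
import numpy as np

def gtd2_step(w, eta, x, r, x_next, alpha, beta, gamma):
    """One GTD2 update (Eqn. 6) for linear value function approximation."""
    delta = r + gamma * np.dot(x_next, w) - np.dot(x, w)       # TD error delta_n
    eta_new = eta + beta * (delta - np.dot(x, eta)) * x        # auxiliary weights eta_{n+1}
    w_new = w - alpha * (gamma * x_next - x) * np.dot(x, eta)  # main weights w_{n+1}
    return w_new, eta_new
```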
Specifically, we modify Eqn. (4) as follows: JGDD(w|wn−1) = J(w) + κ‖Vw −Vwn−1‖2D, (7) Figure 1: Schematic diagram of Gradient-DD learning with w ∈ R2. Rather than updating w directly along the gradient of the MSPBE (arrow), the update rule selects wn that minimizes the MSPBE while satisfying the constraint ‖Vw −Vwn−1‖2D ≤ µ (shaded ellipse). where κ ≥ 0 is a parameter of the regularization. Minimizing Eqn. (7) is equivalent to the following optimization arg min w J(w) s.t. ‖Vw −Vwn−1‖2D ≤ µ (8) where µ > 0 is a parameter which becomes large when κ is small, so that the MSPBE objective is recovered as µ→∞, equivalent to κ→ 0 in Eqn. (7). We show in the Appendix that for any µ > 0, there exist κ ≥ 0 such that the solution of Eqn. (7) and that of Eqn. (8) are the same. Eqns. (7) and (8) represent a tradeoff between minimizing the MSPBE error and preventing the estimated value function from changing too drastically. Rather than simply minimizing the optimal prediction from the projected Bellman equation, the agent makes use of the most recent update to look for the solution. Figure 1 gives a schematic view of the effect of the regularization. Rather than directly following the direction of the MSPBE gradient, the update chooses a w that minimizes the MSPBE while following the constraint that the estimated value function should not change too greatly. In effect, the regularization term encourages searching around the estimate at previous time step, especially when the state space is large. With these considerations in mind, the negative gradient of JGDD(w|wn−1) is − 1 2 ∇wJGDD(w|wn−1) =− E[(γxn+1 − xn)x>n ][E(xnx>n )]−1E(δnxn)− κE[(x>nwn − x>nwn−1)xn] ≈− E[(γxn+1 − xn)x>n ]ηn − κE[(x>nwn − x>nwn−1)xn]. (9) Because the terms in Eqn. (9) can be directly sampled, the stochastic gradient descent updates are given by δn =rn + (γxn+1 − xn)>wn ηn+1 =ηn + βn(δn − x>n ηn)xn wn+1 =wn − κn(x>nwn − x>nwn−1)xn − αn(γxn+1 − xn)(x>n ηn). (10) These update equations define the Gradient-DD method, in which the GTD2 update equations (6) are generalized by including a second-order update term in the third update equation, where this term originates from the squared bias term in the objective (7). In the following sections, we shall analytically and numerically investigate the convergence and performance of Gradient-DD learning. 4 IMPROVED CONVERGENCE RATE In this section we analyze the convergence rate of Gradient-DD learning. Note that the second-order update in the last line in Eqn. (10) can be rewritten as a system of first-order difference equations: (I + κnxnx > n )(wn+1 −wn) =κnxnx>n (un+1 − un)− αn(γxn+1 − xn)(x>n ηn); un+1 =wn+1 −wn. (11) Let βn = ζαn, ζ > 0. We consider constant step sizes in the updates, i.e., κn = κ and αn = α. Denote Hn = [ 0 0 0 xnx > n ] and Gn = [ √ ζxnx > n xn(xn − γxn+1)> −(xn − γxn+1)x>n 0 ] . We rewrite the update rules of two iterations in Eqn. (11) as a single iteration in a combined parameter vector with 2n components, ρn = (η > n / √ ζ,w>n ) >, and a new reward-related vector with 2n components, gn+1 = (rnx > n ,0 >)>, as follows: ρn+1 =ρn − κHn(ρn − ρn−1) + √ ζα(Gnρn + gn+1), (12) Denoting ψn+1 = α −1(ρn+1 − ρn), Eqn. (12) is rewritten as[ ρn+1 − ρn ψn+1 −ψn ] =α [ I + κHn −καHn I −αI ]−1 [ −√ζ(Gnρn − gn+1) ψn ] =α [ − √ ζGn −κHn − √ ζα−1Gn −α−1(I + κHn) ] [ ρn ψn ] + α [ √ ζgn+1√ ζα−1gn+1 ] , (13) where the second step is from [ I + κHn −καHn I −αI ]−1 = [ I −κHn α−1I −α−1(I + κHn) ] . De- note Jn = [ − √ ζGn −κHn − √ ζα−1Gn −α−1(I + κHn) ] . Eqn. 
(13) tells us that Jn is the update matrix of the Gradient-DD algorithm. (Note that Gn is the update matrix of the GTD2 algorithm.) Therefore, assuming the stochastic approximation in Eqn. (13) goes to the solution of an associated ordinary differential equation (ODE) under some regularity conditions (a convergence property is provided in the appendix by following Borkar & Meyn (2000)), we can analyze the improved convergence rate of Gradient-DD learning by comparing the eigenvalues of the matrices E(Gn) denoted by G, and E(Jn) denoted by J (Atkinson et al., 2008). Obviously, J = [ − √ ζG −κH − √ ζα−1G −α−1(I + κH) ] , where H = E(Hn). To simplify, we consider the case that the matrix E(xnx>n ) = I. Let λG be a real eigenvalue of the matrix √ ζG. (Note that G is defined here with opposite sign relative to G in Maei (2011).) From Maei (2011), the eigenvalues of the matrix −G are strictly negative. In other words, λG > 0. Let λ be an eigenvalue of the matrix J, i.e. a solution to the equation |λI− J| =(λ+ λG)(λ+ α−1) + κα−1λ = λ2 + [α−1(1 + κ) + λG]λ+ α−1λG = 0. (14) The smaller eigenvalues λm of the pair solutions to Eqn. (14) are λm < −λG, where details of the above derivations are given in the appendix. This explains the enhanced speed of convergence in Gradient-DD learning. We shall illustrate this enhanced speed of convergence in numerical experiments in Section 5. Additionally, we also show a convergence property of Gradient-DD under constant step sizes by applying the ordinary differential equation method of stochastic approximation (Borkar & Meyn, 2000). Let the TD fixed point be w∗, such that Vw∗ = ΠBVw∗ . Under some conditions, we prove that, for any > 0, there exists b1 < ∞ such that lim sup n→∞ P (‖wn − w∗‖ > ) ≤ b1α. Details are provided in the appendix. For tapered step sizes, which would be necessary to obtain an even stronger convergence proof, the analysis framework in Borkar & Meyn (2000) does not apply into the Gradient-DD algorithm. Although theoretical investigation of the convergence under tapered step sizes is a question to be studied, we find empirically in numerical experiments that the algorithm does in fact converge with tapered step sizes and even obtains much better performance in this case than with fixed step sizes. 5 EMPIRICAL STUDY In this section, we assess the practical utility of the Gradient-DD method in numerical experiments. To validate performance of Gradient-DD learning, we compare Gradient-DD learning with GTD2 learning, TDC learning (TD with gradient correction (Sutton et al., 2009b)), TD learning, and Emphatic TD learning (Sutton & Mahmood, 2016) in tabular representation using a random-walk task and in linear representation using the Boyan-chain task. For each method and each task, we performed a scan over the step sizes αn and the parameter κ so that the comprehensive performance of the different algorithms can be compared. We considered two choices of step size sequence {αn}: • (Case 1) αn is constant, i.e., αn = α0. • (Case 2) The learning rate αn is tapered according to the schedule αn = α0(103 + 1)/(103 + n). We set the κ = cα0 where c = 1, 2, 4. Additionally, we also allow κ dependent on n and consider Case 3: αn is tapered as in Case 2, but κn = cαn. In order to simplify presentation, the results of Case 3 are reported in the Appendix. To begin, we set βn = αn, then later allow for βn = ζαn under ζ ∈ {1/4, 1/2, 1, 2} in order to investigate the effect of the two-timescale approach of the Gradient-based TD algorithms on Gradient-DD. 
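For concreteness, the Gradient-DD update of Eqn. (10) that is run in all of the following experiments can be sketched as below. This is a minimal illustration assuming linear features; relative to the GTD2 step shown earlier it only adds the κ term and the stored previous weight vector, and the names are ours rather than the authors' code.

```python
import numpy as np

def gradient_dd_step(w, w_prev, eta, x, r, x_next, alpha, beta, kappa, gamma):
    """One Gradient-DD update (Eqn. 10); w_prev is the weight vector from the previous step."""
    delta = r + gamma * np.dot(x_next, w) - np.dot(x, w)
    eta_new = eta + beta * (delta - np.dot(x, eta)) * x
    w_new = (w
             - kappa * (np.dot(x, w) - np.dot(x, w_prev)) * x      # second-order difference term
             - alpha * (gamma * x_next - x) * np.dot(x, eta))      # GTD2 correction term
    return w_new, eta_new
```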
In all cases, we set γ = 1. 5.1 RANDOM WALK TASK As a first test of Gradient-DD learning, we conducted a simple random walk task (Sutton & Barto, 2018) with a tabular representation of the value function. The random walk task has a linear arrangement of m states plus an absorbing terminal state at each end. Thus there are m+2 sequential states, S0, S1, · · · , Sm, Sm+1, where m = 20, 50, or 100. Every walk begins in the center state. At each step, the walk moves to a neighboring state, either to the right or to the left with equal probability. If either edge state (S0 or Sm+1) is entered, the walk terminates. A walk's outcome is defined to be r = 0 at S0 and r = 1 at Sm+1. Our aim is to learn the value of each state V(s), where the true values are (1, · · · ,m)/(m+1). In all cases the approximate value function is initialized to the intermediate value V0(s) = 0.5. In order to investigate the effect of the initialization V0(s), we also initialize V0(s) = 0 and report the results in Figure 7 of the Appendix, where the performance is very similar to that with the initialization V0(s) = 0.5. We first compare the methods by plotting the empirical RMS error from the final episode during training as a function of step size α in Figure 2, where 5000 episodes are used. From the figure, we can make several observations. (1) Emphatic TD works well but is sensitive to α. It prefers very small α even in the tapering case, and this preference becomes stronger as the state space becomes large in size. (2) Gradient-DD works well and is robust to α, as is conventional TD learning. (3) TDC performs similarly to the GTD2 method, but requires slightly larger α than GTD2. (4) Gradient-DD performs similarly to conventional TD learning and better than the GTD2 method. This advantage is consistent across different settings. (5) The range of α leading to effective learning for Gradient-DD is roughly similar to that for GTD2. Next we look closely at the performance during training, which we show in Figure 3, where each method and parameter setting was run for 5000 episodes. From the observations in Figure 2, in order to facilitate comparison of these methods, we set α0 = 0.1 for 10 spaces, α0 = 0.2 for 20 spaces, and α0 = 0.5 for 50 spaces. Because Emphatic TD requires the step size α to be especially small, as shown in Figure 2, the plotted values of α0 for Emphatic TD are tuned relative to the values used in the algorithm defined in Sutton & Mahmood (2016), where the step sizes of Emphatic TD, α0(ETD), are chosen from {0.5%, 0.1%, 0.05%, 0.01%} by the smallest area under the performance curve. Additionally, we also tune α0 for TDC because TDC requires a slightly larger αn than GTD2, as shown in Figure 2. The step sizes for TDC are set as αn(TDC) = aαn, where a is chosen from {1, 1.5, 2, 3} by the smallest area under the performance curve. From the results shown in Figure 3a, we draw several observations. (1) For all conditions tested, Gradient-DD converges much more rapidly than GTD2 and TDC. The results indicate that Gradient-DD even converges faster than TD learning in some cases, though it is not as fast in the beginning episodes. (2) The advantage of Gradient-DD learning over other methods grows as the state space increases in size. (3) Gradient-DD learning is robust to the choice of c, which controls the size κ of the second-order update, as long as c is not too large. (Empirically c = 2 is a good choice.)
(4) Gradient-DD has consistent and good performance under both the constant step size setting and under the tapered step size setting. In summary, compared with GTD2 learning and other methods, Gradient-DD learning in this task leads to improved learning with good convergence. In addition to investigating the effects of the learning rate, size of the state space, and magnitude of the regularization parameter, we also investigated the effect of using distinct values for the two learning rates, αn and βn. To do this, we set βn = ζαn with ζ ∈ {1/4, 1/2, 1, 2} and report the results in Figure 8 of the appendix. The results show that comparably good performance of Gradient-DD is obtained under these various βn settings. 5.2 BOYAN-CHAIN TASK We next investigate Gradient-DD learning on the Boyan-chain problem, which is a standard task for testing linear value-function approximation (Boyan, 2002). In this task we allow for 4p − 3 states, with p = 20, each of which is represented by a p-dimensional feature vector. The p-dimensional representation for every fourth state from the start is [1, 0, · · · , 0] for state s1, [0, 1, 0, · · · , 0] for s5, · · · , and [0, 0, · · · , 0, 1] for the terminal state s4p−3. The representations for the remaining states are obtained by linearly interpolating between these. The optimal coefficients of the feature vector are (−4(p − 1),−4(p − 2), · · · , 0)/5. Simulations with p = 50 and 100 give similar results to those from the random walk task, and hence are not shown here. In each state, except for the last one before the end, there are two possible actions: move forward one step or move forward two steps with equal probability 0.5. Both actions lead to reward -0.3. The last state before the end just has one action of moving forward to the terminal with reward -0.2. As in the random-walk task, α0 used in Emphatic TD is tuned from {0.5%, 0.2%, 0.1%, 0.05%}. We report the results in Figure 4, which leads to conclusions similar to those already drawn from Figure 3. (1) Gradient-DD has much faster convergence than GTD2 and TDC, and generally converges to better values despite being somewhat slower than TD learning at the beginning episodes. (2) Gradient-DD is competitive with Emphatic TD. The improvement over other methods grows as the state space becomes larger. (3) As κ increases, the performance of Gradient-DD improves. Additionally, the performance of Gradient-DD is robust to changes in κ as long as κ is not very large. Empirically a good choice is to set κ = α or 2α. (4) Comparing the performance with constant step size versus that with tapered step size, the Gradient-DD method performs better with tapered step size than it does with constant step size. 5.3 BAIRD’S COUNTEREXAMPLE We also verify the performance of Gradient-DD on Baird’s off-policy counterexample (Baird, 1995), for which TD learning famously diverges. We consider three cases: 7-state, 100-state and 500-state. We set α = 0.02 (but α = 10−5 for ETD), β = α and γ = 0.99. We set κ = 0.2 for GDD1, κ = 0.4 for GDD2 and κ = 0.8 for GDD3. For the initial parameter values (1, · · · , 1, 10, 1)>. We measure the performance by the empirical RMS errors as function of sweep, and report the results in Figure 5. The figure demonstrates that Gradient-DD works as well on this well-known counterexample as GTD2 does, and even works better than GTD2 for the 100-state case. We also observe that the performance improvement of Gradient-DD increases as the state spaces increases. 
We also note that, because the linear approximation leaves a residual error in the value estimation due to the projection error, the RMS errors in this task do not go to zero. Interestingly, Gradient-DD reduces this residual error as the size of the state space increases. 6 CONCLUSION AND DISCUSSION In this work, we have proposed Gradient-DD learning, a new gradient descent-based TD learning algorithm. The algorithm is based on a modification of the projected Bellman error objective function for value function approximation by introducing a second-order difference term. The algorithm significantly improves upon existing methods for gradient-based TD learning, obtaining better convergence performance than conventional linear TD learning. Since GTD learning was originally proposed, the Gradient-TD family of algorithms has been extended for incorporating eligibility traces and learning optimal policies (Maei & Sutton, 2010; Geist & Scherrer, 2014), as well as for application to neural networks (Maei, 2011). Additionally, many variants of the vanilla Gradient-TD methods have been proposed, including HTD (Hackman, 2012) and Proximal Gradient-TD (Liu et al., 2016). Because Gradient-DD just modifies the objective error of GTD2 by considering an additional squared-bias term, it may be extended and combined with these other methods, potentially broadening its utility for more complicated tasks. In this work we have focused on value function prediction in the two simple cases of tabular representations and linear approximation. An especially interesting direction for future study will be the application of Gradient-DD learning to tasks requiring more complex representations, including neural network implementations. Such approaches are especially useful in cases where state spaces are large, and indeed we have found in our results that Gradient-DD seems to confer the greatest advantage over other methods in such cases. Intuitively, we expect that this is because the difference between the optimal update direction and that chosen by gradient descent becomes greater in higher-dimensional spaces (cf. Fig. 1). This performance benefit in large state spaces suggests that Gradient-DD may be of practical use for these more challenging cases. 6.1 ON THE EQUIVALENCE OF EQNS. (7) & (8) The Karush-Kuhn-Tucker conditions of Eqn. (8) are the following system of equations d dwJ(w) + κ d dw (‖Vw −Vwn−1‖ 2 D − µ) = 0; κ(‖Vw −Vwn−1‖2D − µ) = 0; ‖Vw −Vwn−1‖2D ≤ µ; κ ≥ 0. These equations are equivalent to d dwJ(w) + κ d dw‖Vw −Vwn−1‖ 2 D = 0 and κ > 0, if ‖Vw −Vwn−1‖2D = µ; d dwJ(w) = 0 and κ = 0, if ‖Vw −Vwn−1‖ 2 D < µ. Thus, for any µ > 0, there exists a κ ≥ 0 such that ddwJ(w) + µ d dw‖Vw −Vwn−1‖ 2 D = 0. 6.2 EIGENVALUES OF J Let λ be an eigenvalue of the matrix J. We have that |λI− J| = ∣∣∣∣ λI +√ζG κH√ζα−1G λI + α−1(I + κH) ∣∣∣∣ = ∣∣∣∣ λI +√ζG κH−λα−1I λI + α−1I ∣∣∣∣ = ∣∣∣∣ λI +√ζG κH0 λI + α−1I + κα−1λ(λI +√ζG)−1H ∣∣∣∣ =|(λI + √ ζG)(λI + α−1I) + κα−1λH|. From the assumption E(xnx>n ) = I and the definition of H, some eigenvalues of the matrix J, λ, are solutions to |λI− J| =(λ+ λG)(λ+ α−1) = 0; and other eigenvalues of the matrix J, λ, are solutions to |λI− J| =(λ+ λG)(λ+ α−1) + κα−1λ =λ2 + [α−1(1 + κ) + λG]λ+ α −1λG = 0. Note λG > 0. the pair solutions to the equation above are λ =− 1 2 [α−1(1 + κ) + λG]± 1 2 √ [α−1(1 + κ) + λG]2 − 4α−1λG =− 1 2 [α−1(1 + κ) + λG]± 1 2 √ [α−1(1 + κ)− λG]2 + 4α−1λGκ. 
Thus, the smaller eigenvalues of the pairs are λm =− 1 2 [α−1(1 + κ) + λG]− 1 2 √ [α−1(1 + κ)− λG]2 + 4α−1λGκ <− 1 2 [α−1(1 + κ) + λG]− 1 2 √ [α−1(1 + κ)− λG]2, where the inequality is from λG > 0. When α−1(1 + κ)− λG > 0, then λm <− 1 2 [α−1(1 + κ) + λG]− 1 2 (α−1(1 + κ)− λG) =− α−1(1 + κ) <− λG, When α−1(1 + κ)− λG ≤ 0, then λm <− 1 2 [α−1(1 + κ) + λG] + 1 2 (α−1(1 + κ)− λG) =− λG, CONVERGENCE WITH CONSTANT STEP SIZES At last we apply the ODE method of stochastic approximation to obtain the convergence performance. Theorem 1 Consider the update rules (10) with constant step size sequences κ, α and β satisfying κ ≥ 0, β = ζα, ζ > 0, α ∈ (0, 1) and β > 0. Let the TD fixed point be w∗, such that Vw∗ = ΠBVw∗ . Suppose that (A1) (xn, rn,xn+1) is an i.i.d. sequence with uniformly bounded second moments, and (A2) E[(xn − γxn+1)x>n ] and E(xnx>n ) are non-singular. Then for any > 0, there exists b1 <∞ such that lim sup n→∞ P (‖wn −w∗‖ > ) ≤ b1α. Proof From the constant step sizes in the conditions, we denote κn = κ and αn = α. Thus, Eqn. (12) equals (I + κHn)(ρn+1 − ρn)− κHn(ρn+1 − 2ρn + ρn−1) =− √ ζα(Gnρn − gn+1). (A.1) Denoting ψn+1 = α −1(ρn+1 − ρn), Eqn. (A.1) is rewritten as[ ρn+1 − ρn ψn+1 −ψn ] =α [ I + κHn −καHn I −αI ]−1 [ −√ζ(Gnρn − gn+1) ψn ] =α [ − √ ζGn −κHn − √ ζα−1Gn −α−1(I + κHn) ] [ ρn ψn ] + α [ √ ζgn+1√ ζα−1gn+1 ] , (A.2) where the second step is from[ I + κHn −καHn I −αI ]−1 = [ I −κHn α−1I −α−1(I + κHn) ] . Denoting G = E(Gn), g = E(gn) and H = E(Hn), then the TD fixed point of Eqn. (A.1) is given by −Gρ+ g = 0 (A.3) We apply the ordinary differential equation approach of the stochastic approximation in Theorem 1 (Theorem 2.3 of (Borkar & Meyn, 2000)) into Eqn. (A.2). Note that (Sutton et al., 2009a) and (Sutton et al., 2009b) also applied Theorem 2.3 of (Borkar & Meyn, 2000) in using the gradientdescent method for temporal-difference learning to obtain their convergence results. For simplifying notation, denote Jn = [ − √ ζGn −κHn − √ ζα−1Gn −α−1(I + κHn) ] , J = [ − √ ζG −κH − √ ζα−1G −α−1(I + κH) ] , yn = [ ρn ψn ] , hn = [ √ ζgn+1√ ζα−1gn+1 ] , and h = [ √ ζg√ ζα−1g ] . Eqn. (A.2) is rewritten as yn+1 = yn + α(f(yn) + h + Mn+1), (A.4) where f(yn) = Jyn and Mn+1 = (Jn − J)yn + hn − h. Now we verify the conditions (c1-c4) of Lemma 1. Firstly, Condition (c1) is satisfied under the assumption of constant step sizes. Secondly, f(y) is Lipschitz and f∞(y) = Gy. Following Sutton et al. (2009a), the Assumption A2 implies the real parts of all the eigenvalues of G are positive. Therefore, Condition (c2) is satisfied. BecauseE(Mn+1|Fn) = 0 and E(‖Mn+1‖2|Fn) ≤ c0(1+‖yn‖2), whereFn = σ(yi,Mi, i ≤ n), is a martingale difference sequence, we have that ‖Mn+1‖2 ≤ 2(‖Jn − J‖2‖yn‖2 + ‖hn − h‖2). (A.5) From the assumption A1, Eqn. (A.5) follows that there are constants cj and ch such that E(‖Jn − J‖2|Fn) ≤ cj ; E(‖hn+1 − h‖2) ≤ ch. Thus, Condition (c3) is satisfied. Finally, Condition (c4) is satisfied by noting that y∗ = G−1g is the unique globally asymptotically stable equilibrium. Theorem 1 bounds the estimation error of w in probability. Note that the convergence of GradientDD learning provided in Theorem 1 is a somewhat weaker result than the statement that wn → w∗ with probability 1 as n → ∞. The technical reason for this is the condition on step sizes. In Theorem 1, we consider the case of constant step sizes, with αn = α and κn = κ. This restriction is imposed so that Eqn. 
(12) can be written as a system of first-order difference equations, which cannot be done rigorously when step sizes are tapered as in (Sutton et al., 2009b). As shown below, however, we find empirically in numerical experiments that the algorithm does in fact converge with tapered step sizes and even obtains much better performance in this case than with fixed step sizes. AN ODE RESULT ON STOCHASTIC APPROXIMATION We introduce an ODE result on stochastic approximation in the following lemma, then prove Theorem 1 by applying this result. Lemma 1 (Theorem 2.3 of Borkar & Meyn (2000)) Consider the stochastic approximation algorithm described by the d-dimensional recursion yn+1 = yn + an[f(yn) + Mn+1]. Suppose the following conditions hold: (c1) The sequence {αn} satisfies for some constant 0 < α < ᾱ < 1, α < αn < ᾱ; (c2) The function f is Lipschitz, and there exists a function f∞ such that limr→∞ fr(y) = f∞(y), where the scaled function fr : Rd → Rd is given by fr(y) = f(ry)/r. Furthermore, the ODE ẏ = f∞(y) has the origin as a globally asymptotically stable equilibrium; (c3) The sequence {Mn,Fn}, with Fn = σ(yi,Mi, i ≤ n), is a martingale difference sequence. Moreover, for some c0 <∞ and any initial condition y0, E(‖Mn+1‖2|Fn) ≤ c0(1 + ‖yn‖2). (c4) The ODE ẏ(t) = f(y(t)) has a unique globally asymptotically stable equilibrium y∗. Then for any > 0, there exists b1 <∞ such that lim sup n→∞ P (‖yn − y∗‖ > ) ≤ b1ᾱ. 6.3 ADDITIONAL EMPIRICAL RESULTS
1. What is the focus of the paper, and what is the proposed variant of the GTD2 algorithm?
2. What is the purpose of the additional regularization term in the objective function?
3. What is the claimed improvement of the proposed algorithm over GTD, and is it convincing?
4. Are there any concerns regarding the convergence analysis in Section 4?
5. Why can the eigenvalues of matrix J_n be written as a block matrix before equation 14?
6. Do the authors ignore complex values of the eigenvalues of G_n, and if so, why?
7. What does Figure 3 show about the performance of the GDD algorithm, and how does it compare to the conventional TD algorithm?
8. Is the contribution of the paper considered marginal, and why?
Review
This paper proposes a variant of the GTD2 algorithm by adding an additional regularization term to the objective function, and the new algorithm is named Gradient-DD (GDD). The regularization ensures that the value function does not change drastically between consecutive iterations. The authors show that the update rule of GDD can be written as a difference equation and aim to further show the convergence via a Lyapunov-based analysis. A simulation study is provided to compare the proposed GDD algorithm with TD, ETD, and GTD. The paper is well written in general. The idea of extra regularization on the distance between two value functions sounds reasonable to me since it resembles the constraint in trust region optimization for policy gradient methods. However, the claimed improved convergence over GTD is not rigorously proved and thus not convincing. In Section 4, the convergence analysis is not derived in a rigorous way. It would help the readers to understand the improved convergence if the authors could complete the analysis and show the convergence rate. Why can the eigenvalues of matrix J_n be written as the block matrix before eq (14)? It seems to me that G and H are diagonal matrices with the diagonal elements being the eigenvalues of G_n and H_n. Ideally the eigenvalues of J_n, which is denoted as J in this paper, should also form a diagonal matrix. Furthermore, since G_n is not symmetric, G may have some complex values as its eigenvalues. This is ignored in the current analysis without any explanation. In the experiment part, Figure 3 shows that the RMS error of the GDD algorithm will blow up when the step size is large. It seems that the proposed algorithm may not be as robust as the conventional TD algorithm? #########Edits after the rebuttal######### Thank you for the responses. After reading them and the discussion with other reviewers, I still think the current contribution of this paper is marginal and I keep my score as 5.
ICLR
Title On Low Rank Directed Acyclic Graphs and Causal Structure Learning Abstract Despite several important advances in recent years, learning causal structures represented by directed acyclic graphs (DAGs) remains a challenging task in high dimensional settings when the graphs to be learned are not sparse. In this paper, we propose to exploit a low rank assumption regarding the (weighted) adjacency matrix of a DAG causal model to mitigate this problem. We demonstrate how to adapt existing methods for causal structure learning to take advantage of this assumption and establish several useful results relating interpretable graphical conditions to the low rank assumption. In particular, we show that the maximum rank is highly related to hubs, suggesting that scale-free networks which are frequently encountered in real applications tend to be low rank. We also provide empirical evidence for the utility of our low rank adaptations, especially on relatively large and dense graphs. Not only do they outperform existing algorithms when the low rank condition is satisfied, the performance is also competitive even though the rank of the underlying DAG may not be as low as is assumed. 1 INTRODUCTION An important goal in many sciences is to discover the underlying causal structures in various domains, both for the purpose of explaining and understanding phenomena, and for the purpose of predicting effects of interventions (Pearl, 2009). Due to the relative abundance of passively observed data as opposed to experimental data, how to learn causal structures from purely observational data has been vigorously investigated (Peters et al., 2017; Spirtes et al., 2000). In this context, causal structures are usually represented by directed acyclic graphs (DAGs) over a set of random variables. For this task, existing methods can be roughly categorized into two classes: constraint- and scorebased. The former use statistical tests to extract from data a number of constraints in the form of conditional (in)dependence and seek to identify the class of causal structures compatible with those constraints (Meek, 1995; Spirtes et al., 2000; Zhang, 2008). The latter employ a score function to evaluate candidate causal structures relative to data and seek to locate the causal structure (or a class of causal structures) with the optimal score. Due to the combinatorial nature of the acyclicity constraint (Chickering, 1996; He et al., 2015), most score-based methods rely on local heuristics to perform the search. A particular example is the greedy equivalence search (GES) algorithm (Chickering, 2002) that can find an optimal solution with infinite data and proper model assumptions. Recently, Zheng et al. (2018) introduced a smooth acyclicity constraint w.r.t. graph adjacency matrix, and the task on linear data models was then formulated as a continuous optimization problem with least-squares loss. This change of perspective allows using deep learning techniques to model causal mechanisms and has already given rise to several new algorithms for causal structure learning with non-linear data, e.g., Yu et al. (2019); Ng et al. (2019b;a); Ke et al. (2019); Lachapelle et al. (2020); Zheng et al. (2020), among others. While these new algorithms represent the current state of the art in many settings, their performance generally degrades when the target DAG becomes large and relatively dense, as seen from the empirical results reported in the referred works and also in this paper. 
This issue is of course a challenge to other approaches. Ramsey et al. (2017) proposed fast GES for impressively large problems, but it works reasonably well only when the large structure is very sparse. The max-min hill-climbing (MMHC) (Tsamardinos et al., 2006) relies on local learning methods that often do not perform well when the target node has a large neighborhood. How to improve the performance on relatively large and dense DAGs is therefore an important question. In this work, we study the potential of exploiting a kind of low rank assumption on the DAG structure to help address this problem. The rank of a graph that concerns us is the algebraic rank of its associated weighted adjacency matrix. Similar to the role of a sparsity assumption on graph structures, we treat the low rank assumption as methodological and it is not restricted to a particular DAG learning method. However, unlike sparsity assumption, it is much less apparent when DAGs tend to be low rank and how low rank DAGs behave. Thus, besides demonstrating the utility of exploiting a low rank assumption in causal structure learning, another important goal is to improve our understanding of the low rank assumption by relating the rank of a graph to its graphical structure. Such a result also enables us to characterize the rank of a graph from several structural priors and helps to choose rank related hyperparameters for the learning algorithm. Our contributions are summarized as follows: • We show how to adapt existing causal structure learning methods to take advantage of the low rank assumption, and provide a strategy to select rank related hyperparameters utilizing the lower and upper bounds on the true rank, if they are available. • To improve our understanding of low rank DAGs, we establish some lower bounds on the rank of a DAG in terms of simple graphical conditions, which imply necessary conditions for DAGs to be low rank. • We also show that the maximum possible rank of weighted adjacency matrices associated with a directed graph is highly related to hubs in the graph, which suggests that scale-free networks tend to be low rank. From this result, we derive several graphical conditions to bound the rank of a DAG from above, providing simple sufficient conditions for low rank. • Empirically, we demonstrate that the low rank adaptations are indeed useful. Not only do they outperform the original algorithms when the low rank condition is satisfied, the performance is also very competitive even when the true rank is not as low as is assumed. Related Work The low rank assumption is frequently adopted in graph-based applications (Smith et al., 2012; Zhou et al., 2013; Yao & Kwok, 2016; Frot et al., 2019), matrix completion and factorization (Recht, 2011; Koltchinskii et al., 2011; Cao et al., 2015; Davenport & Romberg, 2016), network sciences (Hsieh et al., 2012; Huang et al., 2013; Zhang et al., 2017) and so on, but to our best knowledge, has not been used on the DAG structures in the context of learning causal DAGs. We notice two works Barik & Honorio (2019); Tichavskỳ & Vomlel (2018) that assume low rank conditional probability tables in learning Bayesian networks, which are different from ours. Also related are existing works that studied the rank of real weighted matrices described by a given simple directed/undirected graph. However, most works only considered the zero-nonzero pattern of off-diagonal entries (see, e.g., Fallat & Hogben (2007); Hogben (2010); Mitchell et al. 
(2010)), whereas we also take into account the diagonal entries. This difference is crucial: if one only considers the off-diagonal entries, then the maximum rank over all possible weighted matrices is trivial and is always equal to the number of vertices. Consequently, many works focus on the minimum rank of a given graph, but to characterize exactly the minimum rank remains open, except for some special graph structures like trees (Hogben, 2010). Apart from these works, Edmonds (1967) studied algebraically the maximum rank for matrices with a common zero-nonzero pattern. In Section 4, we use this result to relate the maximum possible rank to a more interpretable graphical condition, which further implies several structural conditions of DAGs that may be easier to obtain in practice. 2 PRELIMINARIES 2.1 GRAPH TERMINOLOGY A graph G is defined as a pair (V,E), where V = {X1, X2, · · · , Xd} is the vertex set and E ⊂ V2 denotes the edge set. We are particularly interested in directed (acyclic) graphs in the context of causal structure learning. For any S ⊂ V, we use pa(S,G), ch(S,G), and adj(S,G) to denote the union of all parents, children, and adjacent vertices of the nodes of S in G, respectively. A graph is called weighted if every edge in the graph is associated with a non-zero value. We will work with weighted graphs and treat unweighted graphs as a special case where the edge weights are set to 1. Weighted graphs can be treated algebraically via weighted adjacency matrices. Specifically, the weighted adjacency matrix of a weighted graph G is a matrix W ∈ Rd×d, where W (i, j) is the weight of edge Xi → Xj and W (i, j) 6= 0 if and only if Xi → Xj exists in G. The binary adjacency matrix A ∈ {0, 1}d×d is such that A(i, j) = 1 if Xi → Xj in G and A(i, j) = 0 otherwise. The rank of a weighted graph is defined as the rank of the associated weighted adjacency matrix. 2.2 CAUSAL STRUCTURE LEARNING AND RECENT GRADIENT-BASED METHODS A commonly used model in causal structure learning is the structural equation model (SEM) that describes data generating procedure. In a slight abuse of notation, we also use Xi’s to denote random variables associated with the nodes in a graph G. Assuming G being a DAG, then the SEM is given by Xi = fi (pa(Xi,G), i) , i = 1, 2, . . . , d, where fi is a deterministic function and i’s are jointly independent noises. The SEM induces a marginal distribution P (X) over X = [X1, X2, · · · , Xd]T , and G and P (X) are said to form a causal Bayesian network (Pearl, 2009; Spirtes et al., 2000). The problem of causal structure learning is to infer the underlying causal DAG G based on the marginal distribution P (X), or more practically, an empirical version consisting of a number of i.i.d. observations from P (X). We next briefly review recently developed gradient-based methods that rely on a smooth characterization of acyclicity of directed graphs. These methods aim to find a DAG that optimizes a score function and can be categorized into two classes. 
The first class of methods explicitly associates the target causal model with a weighted adjacency matrix W and then estimate W by solving optimization problems in the following form: min W,φ EX∼P (X) S ( X,h(X;W,φ) ) , subject to trace ( eW◦W ) − d = 0, (1) where h : Rd → Rd is a model function parameterized by W (and other possible parameter φ) that aims to reconstruct X , S(·, ·) denotes a score function between the true and reconstructed variables, notation ◦ denotes the element-wise product, and eM is the matrix exponential of a square matrix M . The constraint was proposed by Zheng et al. (2018), which is smooth and holds if and only if W indicates a DAG. Methods in this class include: NOTEARS (Zheng et al., 2018), which targets linear models, with h(X;W,φ) = WTX and S(·, ·) being the Frobenius norm or equivalently the least-squares loss; and DAG-GNN (Yu et al., 2019) and the graph autoencoder approach (Ng et al., 2019b), where neural networks are used for the function h with φ being the weights of neural networks, and the score function can be chosen as the evidence lower bound (Kingma & Welling, 2013). A sparsity inducing term may be further added when the causal graph is assumed to be sparse. These objectives are equivalent to or are variants of some well studied score functions like the penalized maximum likelihood (Chickering, 2002; Van de Geer et al., 2013; Loh & Bühlmann, 2014). The second class uses certain functions, with parameter θ, to construct a weighted adjacency matrix W (θ) (or a binary one A(θ)) to represent the causal structure. These methods can be summarized as min θ, φ EX∼P (X) S ( X,h(X;W (θ), φ) ) , subject to trace ( eW (θ) ◦W (θ) ) − d = 0. (2) For example, GraN-DAG (Lachapelle et al., 2020) and NOTEARS-MLP (Zheng et al., 2020) respectively use neural network path products and partial derivatives between variables to construct W (θ). The binary matrix A(θ) can be obtained by sampling according to some distributions with learnable parameters, as used by Kalainathan et al. (2018); Ke et al. (2019); Ng et al. (2019a); Zhu et al. (2020). Before ending this section, we remark that while the gradient-based methods intend to learn a causal DAG, the learned DAG may not be identical to the underlying one for general SEMs due to the Markov equivalence (Spirtes et al., 2000; Peters et al., 2017). For such cases, one may convert the obtained DAG to its corresponding Completed Partially Directed Acyclic Graph (CPDAG) as the estimate. Nevertheless, if the SEM is identifiable and a proper score function is used, then the exact solution to the optimization problem is consistent, i.e., same as the true graph with probability 1; see, e.g., Shimizu et al. (2006); Peters & Bühlmann (2013); Peters et al. (2014); Zhang & Hyvärinen (2009). For further details and other technical issues like parameter optimization of the gradient-based methods, we refer the reader to the cited works and references therein. 3 EXPLOITING LOW RANK ASSUMPTION IN CAUSAL STRUCTURE LEARNING This section shows how to adapt existing gradient-based methods to take advantage of the low rank assumption, by providing a way for each class to utilize this assumption using techniques from the matrix completion literature. We remark that our adaptations with the low rank assumption are not restricted to a particular learning algorithm; other DAG learning methods may potentially combine one of the proposed modifications for learning low rank causal graphs, too. 
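For reference, the smooth acyclicity constraint trace(e^{W◦W}) − d = 0 appearing in problems (1) and (2) is simple to evaluate and differentiate. Below is a minimal numpy/scipy sketch of the constraint value and the standard gradient used by NOTEARS-style methods; it is an illustration, not the implementation described in Appendix C.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """h(W) = trace(exp(W * W)) - d, which is zero iff W encodes a DAG."""
    d = W.shape[0]
    E = expm(W * W)          # matrix exponential of the element-wise square
    h = np.trace(E) - d
    grad = 2.0 * E.T * W     # gradient of h with respect to W
    return h, grad
```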
Matrix Factorization Since the weighted adjacency matrix W is explicitly optimized in the first class of methods, we can directly apply the matrix factorization technique. Specifically, with an estimate r̂ for the graph rank, we can factorize W as W = UV^T with U, V ∈ Rd×r̂. Problem (1) then becomes optimizing U and V to minimize the score function under the DAG constraint, and it has the same solution W (obtained from the product UV^T) as the original problem if r̂ is greater than or equal to the true rank. Furthermore, if r̂ ≪ d, we have a much reduced number of parameters to optimize. Nuclear Norm For the second class of methods, the adjacency matrix W(θ) is not an explicit parameter to be optimized. In such a case, we can adopt a commonly used technique of adding a nuclear norm term λ‖W(θ)‖∗, with λ > 0 being a tuning parameter, to the objective to induce low-rankness. The optimization procedures in these recent structure learning methods can directly incorporate the two adaptations as they are all gradient-based, though some extra care needs to be taken. Appendix C provides a detailed description of the optimization procedure and our implementation. The second approach is also feasible for the first class of methods, but we find that it does not work as well as the matrix factorization approach, possibly due to the singular value decomposition required to compute the (sub-)gradient w.r.t. W at each optimization step. An astute reader may have noticed that we assumed a proper rank estimate r̂ or a proper penalty parameter λ. Yet knowing exactly the rank of the graph to be learned can be difficult in practice. Similar to the sparsity assumption, one may determine the hyperparameters r̂ and λ with the help of a validation dataset (or by cross-validation if the observed dataset is not sufficiently large). Alternatively, we can try different choices of the hyperparameters and then apply a traditional score-based method where the search space is restricted to the resulting DAGs. However, since we are more concerned with relatively large and dense problems, there may be too many possible ranks to choose from. As such, a lower bound rl and an upper bound ru on the graph rank would be beneficial: we need only consider ranks in [rl, ru] in the matrix factorization method, while for the nuclear norm approach the bounds still provide qualitative guidance, in that the lower the upper bound, the larger the tuning parameter λ should be chosen. Moreover, a lower bound can also justify the low rank assumption, i.e., if the lower bound is high, then the low rank assumption is likely to fail to hold. 4 GRAPHICAL BOUNDS ON RANKS Obtaining exact algebraic information about a DAG, such as its rank and eigenvalues, may be infeasible in practice, because it may require full knowledge of the graph to be learned. On the other hand, structural information, such as graph connectivity, distributions of in-degrees and out-degrees, and an estimate of the number of hubs, is sometimes more accessible. As such, this section is devoted to relating the rank of a graph to more easily interpretable graphical conditions, for the sake of a better understanding of what kinds of DAGs tend to satisfy the low rank assumption and for lower and upper bounds on the graph rank from certain structural priors. 4.1 PROBLEM SETTING Consider a DAG G = (V,E) with weighted adjacency matrix W and binary adjacency matrix A. We aim to seek upper and lower bounds on rank(W) using only the graphical structure.
Specifically, we focus on the weighted adjacency matrices with the same binary adjacency matrix A, i.e.,WA = {W ∈ Rd×d ; sign(|W |) = A}, where sign(·) and | · | are point-wise sign and absolute value functions, respectively. Notice that there exist trivial upper bound d− 1 and lower bound 0 for any DAG, but they are generally too loose for our purpose. In the following, we investigate the maximum rank max{rank(W );W ∈ WA} and minimum rank min{rank(W );W ∈ WA} to find tighter upper and lower bounds for any W ∈ WA. Before introducing two useful graph concepts, we comment that low rank DAGs are not necessarily sparse and vice versa; see a discussion in Appendix A. Definition 1 (Height). Given a DAG G = (V,E) and a vertex Xi ∈ V, the height of Xi, denoted by l(Xi), is defined as the length of the longest directed path starting from Xi. The height of G, denoted by l(G), is the length of the longest path in G. Definition 2 (Head-tail vertex cover). Let G = (V,E) be a directed graph and H,T be two subsets of V. (H,T) is called a head-tail vertex cover of G if every edge in G has its head vertex in H or its tail vertex in T. The size of a head-tail vertex cover (H,T) is defined as |H|+ |T|. As an example, Figure 1c is a head-tail vertex cover of G in Figure 1a, where H = {X2, X4, X8} (red nodes) and T = {X8, X9, X10} (blue nodes). The size of this vertex cover is 6. 4.2 LOWER BOUNDS We first study lower bounds on the rank of a weighted DAG. Define V−1 = ∅ and Vs = {Xi; l(Xi) = s} for s = 0, 1, . . . , l(G). Denote by Gs,s−1 the induced subgraph of G over Vs∪Vs−1. Let C(Gs,s−1) be the set of non-singleton connected components of Gs,s−1 and |C(Gs,s−1)| the cardinality. We have the following lower bounds. Theorem 1. Let G be a DAG with binary adjacency matrix A. Then min{rank(W ) ; W ∈ WA} ≥ ∑l(G) s=1 |C(Gs,s−1)| ≥ l(G). (3) All the proofs in this paper are provided in Appendix B. Theorem 1 shows that rank(W ) is greater than or equal to the sum of the number of non-singleton connected components in each Gs,s−1. As Gs,s−1 has at least one non-singleton connected component, we obtain the second inequality. In other words, the rank of a weighted DAG is at least as high as the length of the longest directed path. As an example, consider the graph shown in Figure 1. One can verify that min{rank(W );W ∈ WA} = 6, |C(G1,0)| = 2, |C(G2,1)| = 1, |C(G3,2)| = 1, and l(G) = 3. Thus, we have min{rank(W );W ∈ WA} = 6 > 2 + 1 + 1 = 4 > 3. We remark that the bounds in Theorem 1 may be loose in some cases. To characterize the minimum rank exactly is an on-going research problem (Hogben, 2010). 4.3 UPPER BOUNDS We turn to the more important issue for our purpose, regarding upper bounds on rank(W ). The next theorem shows that max{rank(W );W ∈ WA} can be characterized exactly in graphical terms. Theorem 2. Let G be a directed graph with binary adjacency matrix A. Then max{rank(W );W ∈ WA} is equal to the minimum size of the head-tail vertex cover of G, that is, max{rank(W ) ; W ∈ WA} = min{|H|+ |T| ; (H,T) is a head-tail vertex cover of G}. We comment that Theorem 2 holds for all directed graphs (not only DAGs), which may be of independent interest to other applications. A head-tail vertex cover of minimum size is called a minimum head-tail vertex cover, which in general is not unique. For a head-tail vertex cover (H,T), the vertices in H cover all the edges pointing towards these vertices while the vertices in T cover the edges pointing away. 
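In practice, a minimum head-tail vertex cover can be computed efficiently: each directed edge can be viewed as an edge between a "tail" copy and a "head" copy of the vertices, so a head-tail vertex cover is exactly a vertex cover of this bipartite graph, and by König's theorem its minimum size equals the maximum matching size (consistent with the term-rank result of Edmonds (1967) cited above). The sketch below is our own illustration of this reduction using networkx, not code from the paper.

```python
import networkx as nx

def min_head_tail_cover_size(edges, d):
    """Minimum head-tail vertex cover size of a directed graph on d vertices.

    Builds the bipartite graph with one 'tail' and one 'head' copy per vertex and
    one bipartite edge per directed edge; by Koenig's theorem the minimum vertex
    cover of this graph (i.e., the minimum head-tail cover) equals the maximum matching.
    """
    B = nx.Graph()
    tails = [("tail", i) for i in range(d)]
    heads = [("head", j) for j in range(d)]
    B.add_nodes_from(tails, bipartite=0)
    B.add_nodes_from(heads, bipartite=1)
    B.add_edges_from([(("tail", i), ("head", j)) for i, j in edges])
    matching = nx.bipartite.maximum_matching(B, top_nodes=tails)
    return len(matching) // 2   # the returned dict lists each matched pair twice

# Hub-heavy example: vertex 7 is the head of three edges and the tail of two,
# so H = T = {7} is a head-tail cover and the maximum rank is at most 2.
print(min_head_tail_cover_size([(0, 7), (1, 7), (2, 7), (7, 3), (7, 4)], d=8))  # -> 2
```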
A head-tail cover of a relatively small size then indicates the presence of hubs, that is, vertices with relatively high in-degrees or out-degrees. Therefore, Theorem 2 suggests that the maximum rank of a weighted DAG is highly related to the presence of hubs: a DAG with many hubs tends to have low rank. Intuitively, a hub of high in-degree (out-degree) is a common effect (cause) of a number of direct causes (effect variables), comprising many V-structures (inverted V-structures). For example, in Figure 1a, X8 is a hub of V-structures and X9 is a hub of inverted V-structures. Such features are fairly common in real graph structures. Appendix A presents a real network, called pathfinder, which describes the causal relations among 109 variables (Heckerman et al., 1992) with the center node being the parent of a large number of other nodes. The famous scale-free (SF) graphs also tend to have hubs. A scale-free graph is one whose distribution of degree k follows a power law: P (k) ∼ k−γ , where γ is the power parameter typically within [2, 3] and P (k) denotes the fraction of nodes with degree k (Nikolova & Aluru, 2012). It is observed that many real-world networks are scale-free, and some of them, such as gene regulatory networks, protein networks, and financial system network, may be viewed as causal networks (Guelzim et al., 2002; Barabasi & Oltvai, 2004; Hartemink, 2005; Eguı́luz et al., 2005; Gao & en Ren, 2013; Ramsey et al., 2017). In particular, Barabasi & Oltvai (2004) claimed that most protein networks, some of which are directed and acyclic due to irreversible reactions, are the results of growth processes and preferential attachments, probably due to the gene duplication. Empirically, the ranks of scale-free graphs are relatively low, especially in comparison to Erdös-Rényi (ER) random graphs (Mihail & Papadimitriou, 2002). Figure 2 provides a simulated example where γ is chosen from {2, 3} and each reported value is over 100 random runs. As graph becomes denser, the graph rank also increases. However, for scale-free graphs with a relatively large γ, the increase of their ranks is much slower than that of Erdös-Rényi graphs; indeed, their ranks tend to stay fairly low even when the graph degree is large. Theorem 2 can also be used to generate a low rank graph, or more precisely, a random DAG with a given rank r and a properly specified graph degree. Here we briefly describe the idea and leave the detailed algorithm to Appendix C.1: first generate a graph with r edges and rank r; a random edge is sampled without replacement and would be added to the graph, if adding this edge does not increase the size of the minimum head-tail vertex cover; repeat the previous step until the pre-specified degree is reached or no edge could be added to the graph; finally, assign the edge weights randomly according to a continuous distribution and the weighted graph will have rank r with high probability. The next two theorems report some looser but simpler upper bounds on rank(W ). Theorem 3. Let G be a DAG with binary adjacency matrix A, and denote the set of vertices with at least one parent by Vch and those with at least one child by Vpa. Then we have max{rank(W ) ; W ∈ WA} ≤ ∑l(G) s=1 min (|Vs|, |ch(Vs)|) ≤ |Vpa|,∑l(G)−1 s=0 min (|Vs|, |pa(Vs)|) ≤ |Vch|, |V| −max{|Vs| ; 0 ≤ s ≤ l(G)}. 
(4) Since Vch and Vpa are the non-root and the non-leaf vertices, respectively, the first two inequalities of (4) indicate that the maximum rank is bounded from above by the number of non-root vertices and also by the number of non-leaf vertices. The last inequality of (4) is a generalization of the first two, which implies that the rank is likely to be low if most vertices have the same height. Theorem 4. Let G be a DAG with binary adjacency matrix A. Denote by skeleton(A) and moral(A) the binary adjacency matrices of the skeleton and moral graph of G, respectively. Then we have max{rank(W ) ; W ∈ WA} ≤max{rank(W ) ; sign(|W |) = skeleton(A)} ≤max{rank(W ) ; sign(|W |) = moral(A)}. The skeleton of a DAG is the undirected graph obtained by removing all the arrowheads, and the moral graph is the undirected graph where two vertices are adjacent if they are adjacent or if they share a common child in the DAG. This result is useful when the skeleton or the moral graph can be accurately estimated and the corresponding rank is low. In practice, we may use all available structural priors to obtain upper bounds on the underlying rank and choose the lowest one as our estimate. 5 EXPERIMENTS This section reports empirical results of the low rank adaptations of existing methods, compared with their original versions. We choose NOTEARS (Zheng et al., 2018) for linear SEMs by adopting the matrix factorization approach, denoted as NOTEARS-low-rank, and use the nuclear norm approach in combination with GraN-DAG (Lachapelle et al., 2020) for a non-linear data model. Again we remark that the two methods are only demonstrations of the utility of low rank assumption, which can be potentially combined with other methods as well. For more information, we also include several benchmark methods: fast GES (Ramsey et al., 2017), PC (Spirtes et al., 2000), MMHC (Tsamardinos et al., 2006), ICA-LiNGAM (Shimizu et al., 2006) specifically designed with non-Gaussian noises, for linear SEMs;1 and DAG-GNN (Yu et al., 2019), NOTEARS-MLP (Zheng et al., 2020), and CAM (Bühlmann et al., 2014) for the non-linear case. Their implementations are described in Appendix C. We consider randomly sampled DAGs with specified ranks (the generating procedure was described in Section 4.3 and is given as Algorithm 1 in Appendix C.1), scale-free graphs, and a real network structure. For linear SEMs, the weights are uniformly sampled from [−2,−0.5] ∪ [0.5, 2] and the noises are either standard Gaussian or standard exponential. For non-linear SEMs, we use additive Gaussian noise model with functions sampled from Gaussian processes with RBF kernel of bandwidth one. These data models are known to be identifiable (Shimizu et al., 2006; Peters & Bühlmann, 2013; Peters et al., 2014). From each SEM, we then generate n = 3, 000 observations. We repeat ten times over different seeds for each experiment setting. Detailed information about the setup can be found in Appendix C.3. Below we mainly report structural Hamming distance (SHD) which takes into account both false positives and false negatives, and a smaller SHD indicates a better estimate. 5.1 LINEAR SEMS WITH RANK-SPECIFIED GRAPHS We first consider linear SEMs on rank-specified graphs, with number of nodes d ∈ {100, 300}, rank r = d0.1de, and average degree k ∈ {2, 4, 6, 8}. The true rank is assumed to be known and is used as the rank parameter r̂ in NOTEARS-low-rank. 
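The linear-SEM data used throughout these experiments can be simulated directly from a weighted DAG by drawing the noise and solving X = W^T X + ε. The sketch below follows the stated settings (edge weights uniform on [−2,−0.5] ∪ [0.5, 2], standard Gaussian or exponential noise); the function names and the choice to leave the exponential noise uncentered are our own assumptions rather than the paper's code.

```python
import numpy as np

def random_weights(A, rng=np.random.default_rng(0)):
    """Assign edge weights uniformly from [-2, -0.5] U [0.5, 2] to a binary adjacency A."""
    mags = rng.uniform(0.5, 2.0, size=A.shape)
    signs = rng.choice([-1.0, 1.0], size=A.shape)
    return A * mags * signs

def sample_linear_sem(W, n, noise="gauss", rng=np.random.default_rng(0)):
    """Draw n samples of X = W^T X + eps, where W[i, j] is the weight of edge X_i -> X_j."""
    d = W.shape[0]
    if noise == "gauss":
        eps = rng.standard_normal((n, d))
    else:
        eps = rng.exponential(scale=1.0, size=(n, d))   # standard exponential noise (uncentered; our choice)
    return eps @ np.linalg.inv(np.eye(d) - W)           # each row x solves x = x W + e
```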
For a better visualization, Figure 3 only reports the average SHDs, while the true positive rate, false discovery rate, and running time are left to Appendix D. We also show the results after using the interquartile range rule to remove outlier SHDs. We observe that the low rank assumption can greatly improve the performance of NOTEARS, reducing the SHDs by at least a half. For this data model, the fast GES has much higher SHDs (see also Appendix D). PC is too slow (for example, it did not finish in 16 hours for a dataset with 100 nodes and degree 6), because some nodes may have a high in-degree. For the same reason, the skeleton may not be well estimated by MMHC; its performance is slightly worse than the fast GES and is not reported. For more information regarding the role of sparsity, we include NOTEARS with an ℓ1 penalty, named NOTEARS-L1. Here the ℓ1 penalty weight is chosen from {0.01, 0.02, 0.05, 0.1, 0.2, 0.5}. Instead of relying on an additional validation dataset, we treat NOTEARS-L1 favorably by picking the lowest SHD obtained from different weights for each dataset. [Footnote 1: Here we choose ICA-LiNGAM, rather than alternative LiNGAM methods like DirectLiNGAM (Shimizu et al., 2011), based on our empirical observation. Specifically, an implementation of ICA-LiNGAM has a noticeably better performance than DirectLiNGAM for relatively dense graphs. Please find a detailed discussion and an empirical comparison in Appendix D.4.] As seen from Figure 3a, NOTEARS-L1 is slightly better than NOTEARS when the average degree is 2, but is largely outperformed with relatively dense graphs. This observation was also reported in Zheng et al. (2018). We conjecture that it is because our experiments consider relatively sufficient data and dense graphs. Moreover, the thresholding procedure controls false discoveries and may have a similar effect to the ℓ1 penalty. Appendix D.1 studies graphs with higher ranks, where it is observed that the advantage of NOTEARS-low-rank over NOTEARS decreases when the rank of the underlying DAG increases. Nevertheless, NOTEARS-low-rank is still competitive when the true rank is ⌈d/2⌉ and the factorized matrix has the same number of parameters as NOTEARS. We also conduct an empirical analysis with different sample sizes in Appendix D.2, which shows that NOTEARS-low-rank performs reasonably well when the sample size is small and tends to have a better performance with a larger number of samples. Due to space limits, please find further details in the appendix. 5.2 LINEAR SEMS WITH SCALE-FREE GRAPHS We next consider scale-free graphs with d = 100 nodes, average degree k = 6, and power γ = 2.5. For this experiment, the minimum, maximum, and mean ranks of the generated graphs are 14, 24, and 18.7, respectively. Here we choose the rank parameter r̂ from {20, 30, 40} for NOTEARS-low-rank. As seen from Figure 4, NOTEARS-low-rank with rank parameter r̂ = 20 performs the best, even though there are graphs with ranks greater than 20. 5.3 SENSITIVITY OF RANK PARAMETERS AND VALIDATION So far we have assumed that the true rank or an accurate estimate is known. In this experiment, we conduct an empirical analysis with different rank parameters for the linear Gaussian data model on rank-specified graphs with 100 nodes, degree 8, and rank 10. We also include the validation-based approach where 2,000 samples are chosen as the training dataset and the rest as the validation dataset.
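The validation-based choice just mentioned can be summarized by a short schematic sketch; how the candidate range of ranks is constructed from the rank bounds is described in the next paragraph. Here fit_notears_low_rank is a hypothetical handle to the NOTEARS-low-rank solver, and candidate ranks are scored by the held-out least-squares loss of Problem (5), one natural choice of score for the linear data model.

import numpy as np

def held_out_score(W_hat, X_val):
    # Least-squares reconstruction loss on the validation split.
    n = X_val.shape[0]
    return np.linalg.norm(X_val - X_val @ W_hat, 'fro') ** 2 / (2 * n)

def select_rank(X, fit_notears_low_rank, candidate_ranks, n_train=2000):
    X_train, X_val = X[:n_train], X[n_train:]
    scored = []
    for r in candidate_ranks:
        W_hat = fit_notears_low_rank(X_train, rank=r)   # hypothetical solver call
        scored.append((held_out_score(W_hat, X_val), r, W_hat))
    best_score, best_r, best_W = min(scored, key=lambda t: t[0])
    return best_r, best_W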
We use the derived lower and upper bounds in Theorems 1 and 3 to obtain a range of possible rank parameters, assuming that the corresponding structural priors are available. Within this range, we then select 7 evenly distributed rank parameters used with NOTEARS-low-rank to learn causal graphs. Finally, we evaluate each learned DAG using the validation dataset and choose the DAG with the best score as our estimate. As seen from Figure 5, NOTEARS-low-rank performs the best when the rank parameter is identical to the true rank, while the rank parameter chosen by validation has almost the same performance. Compared with NOTEARS on the same datasets, the low rank version performs well across a range of rank parameters. Although this validation approach increases the total running time that depends on the number of candidate rank parameters, we believe that it is acceptable given the gained accuracy and also the fact that this strategy has been frequently adopted for tuning hyperparameters in practice. 5.4 NON-LINEAR SEMS For non-linear data models, we pick rank-specified graphs with 50 nodes, rank 5, and average degree k ∈ {2, 4, 6, 8}. To our knowledge, the selected benchmark methods CAM, NOTEARS-MLP, and GraN-DAG are state-of-the-art methods on this data model. As a demonstration of the low rank assumption, we apply the nuclear norm approach to GraN-DAG and choose from {0.3, 0.5, 1.0} as penalty weights. For validation, we use the same splitting ratio as in Section 5.3 and consider more penalty weights from {0.1, 0.2, 0.3, 0.5, 1, 2, 5}. Similarly, the learned graph that achieves the best score on the validation dataset is chosen as final estimate. Figure 6 (and Appendix D.6 with a more detailed result) shows that adding a nuclear norm can improve the performance of GraN-DAG across a large range of weights when the graph is relatively dense. For degree 8, the low rank version with validation achieves average SHD 77.4, while the SHDs of CAM, NOTEARS-MLP, and original GraN-DAG are 131.9, 119.4, and 109.4, respectively. 5.5 REAL NETWORK We apply the proposed method to the arth150 gene network, which is a DAG containing 107 genes and 150 edges. Its maximum rank is 40. Since the real dataset has only 22 samples, we instead use simulated data from linear Gaussian SEMs. We pick r̂ from {36, 40, 44} and also use validation to select the rank parameter. We apply NOTEARS-L1 where the `1 penalty weight is chosen from {0.05, 0.1, 0.2}, and similarly treat this method favorably by picking the lowest SHD for each dataset. The mean and median SHDs are shown in Figure 7. Using Student’s t-test, we find that with significance level 0.1, the results obtained with r̂ = 44 and the validation approach are significantly better than NOTEARS. This experiment demonstrates again the utility of the low rank assumption, even when the true rank of the graph is not very low. 6 CONCLUDING REMARKS This paper studies the potential of low rank assumption in causal structure learning. Empirically, we show that the low rank adaptations perform noticeably better than existing algorithms when the low rank condition is satisfied, and also deliver competitive performances when the rank is not as low as is assumed. Theoretically, we provide an improved understanding of what kinds of graphs tend to be low rank and a possibility to obtain bounds on the underlying rank from several structural priors. We treat the present work as our first step to incorporate low-rankness into causal DAG learning. 
A future direction is to approximate a high rank DAG with a low rank one (possibly adding an additional DAG that is sparse). While there is a rich literature on low rank approximations of matrices and combining low-rankness with sparsity, it is non-trivial to us to conclude under what conditions such an approximation is guaranteed to be effective to learn causal DAGs. Another direction is to compare the low rank assumption to other structural or parametric priors affecting model selection through marginal likelihood (Eggeling et al., 2019; Silander et al., 2007). Finally, it is also interesting to investigate if a low rank DAG model implies any useful behavior in the data. Appendix A EXAMPLES AND DISCUSSIONS We provide more examples and discussions in this section. Minimum rank of the graph in Figure 1 We first show that the minimum rank of the DAG structure in Figure 1 is 6. It is clear that the 6-th to 10-th rows of A are always linearly independent, so it suffices to show that the 11-th row is linearly independent of the 6-th to 10-th rows. To see this, notice that if the 11-th row is a linear combination of the 6-th to 10-th rows, then A(11, 1) would be non-zero, which is a contradiction. The pathfinder and arth150 networks Figure 8 visualizes the pathfinder and arth150 networks that are mentioned in Sections 4.3 and 5, respectively. Both networks can be found at http: //www.bnlearn.com/bnrepository. As one can see, these two networks contain hubs: the center note in the pathfinder network has a large number of children, while the arth150 network contains many ‘small’ hubs, each of which has 5 ∼ 10 children. We also notice that nearly all the hubs in the two networks have high out-degrees. Sparse DAGs and low rank DAGs A sparse DAG does not necessarily indicate a low rank DAG, and vice versa. For example, a directed linear graph with d vertices has only d − 1 edges, i.e. X1 → X2 → · · · → Xd, while the rank of its binary adjacency matrix is d − 1. According to Theorems 1 and 2, the maximum and minimum ranks of a directed linear graph are equal to its number of edges. Thus, directed linear graphs are sparse but have high ranks. On the other hand, for some non-sparse graphs, we can assign the edge weights so that the resulting graphs have low ranks. A simple example would be a fully connected directed balanced bipartite graph, as shown in Figure 9. The definition of bipartite graphs can be found in Appendix B.1. A bipartite graph is called balanced if its two parts contain the same number of vertices. The rank of a fully connected balanced bipartite graph with d vertices is 1 if all the edge weights are the same (e.g., the binary adjacency matrix), but the number of edges is d2/4. We also notice that there exist some connections between the maximum rank and the graph degree, or more precisely, the total number of edges in the graph, according to Theorem 2. Intuitively, if the graph is dense, then we need more vertices to cover all the edges. Thus, the size of the minimum head-tail vertex cover should be large. Explicitly providing a formula to characterize these two graph parameters is an interesting problem, which will be explored in the future. B PROOFS In this section, we present proofs for the theorems given in the main content. B.1 PRELIMINARIES A bipartite graph is a graph whose vertex set V can be partitioned into two disjoint subsets V0 and V1, such that the vertices within each subset are not adjacent to one another. V0 and V1 are called the parts of the graph. 
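The sparse-but-high-rank and dense-but-low-rank examples above are easy to verify numerically. The following small check, written for this discussion with d = 10, confirms that a directed linear graph with d − 1 edges already has rank d − 1, while a fully connected balanced bipartite DAG with equal weights has rank 1 despite its d²/4 edges.

import numpy as np

d = 10
path = np.diag(np.ones(d - 1), k=1)                    # X1 -> X2 -> ... -> Xd
print(int(path.sum()), np.linalg.matrix_rank(path))    # 9 edges, rank 9

bip = np.zeros((d, d))
bip[:d // 2, d // 2:] = 1.0                            # every Xi (i <= d/2) points to every Xj (j > d/2)
print(int(bip.sum()), np.linalg.matrix_rank(bip))      # 25 edges, rank 1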
A matching of a graph is a subset of its edges where no two of them share a common endpoint. A vertex cover of a graph is a subset of the vertex set where every edge in the graph has at least one endpoint in the subset. The size of a matching (vertex cover) is the number of edges (vertices) in the matching (vertex cover). A maximum matching of a graph is a matching of the largest possible size and a minimum vertex cover is a vertex cover of the smallest possible size. An important result about bipartite graphs is König’s theorem (Dénes, 1931), which states that the size of a minimum vertex cover is equal to the size of a maximum matching in a bipartite graph. Based on the heights of vertices in V, we can define a weak ordering among the vertices: Xi Xj if and only if l(Xi) > l(Xj), and Xi ∼ Xj if and only if l(Xi) = l(Xj). Given this weak ordering, we can group the vertices by their heights, and the resulting graph shows a hierarchical structure; see Figure 1 in the main text for an example. This hierarchical representation has some simple and nice properties. Let Vs = {Xi; l(Xi) = s}, s = 0, 1, . . . , l(G), and let V−1 = ∅. We have: (1) for any given s ∈ {0, 1, . . . , l(G)} and two distinct vertices X1, X2 ∈ Vs, X1 and X2 are not adjacent, and (2) for any given s ∈ {1, 2, . . . , l(G)} and Xi ∈ Vs, there is at least one vertex in Vs−1 which is a child of Xi. If we denote the induced subgraph of G over Vs ∪Vs−1 by Gs,s−1, then Gs,s−1 is a bipartite graph with Vs and Vs−1 as parts, and singletons in Gs,s−1 (i.e., vertices that are not endpoints of any edge) only appear in Vs−1. For ease of presentation, we occasionally use index i to represent variable Xi in the following sections. B.2 PROOF OF THEOREM 1 Proof. Let G = (V,E). Consider an equivalence relation, denoted by ∼, among vertices in V defined as follows: for any Xi, Xj ∈ V, Xi ∼ Xj if and only if l(Xi) = l(Xj) and Xi and Xj are connected. Here, connected means that there is a path between Xi and Xj . Below we use C(Xi) to denote the equivalence class containing Xi. Next, we define a weak ordering π on V/ ∼, i.e., the equivalence classes induced by ∼, by letting C(Xi) π C(Xj) if and only if l(Xi) ≥ l(Xj). Then, we extend π to a total ordering ρ on V/ ∼. The ordering ρ also induces a weak ordering (denoted by ρ̄) on V: Xi ρ̄ Xj if and only if C(Xi) ρ C(Xj). Finally, we extend ρ̄ to a total ordering γ on V. It can be verified that γ is a topological ordering of G, that is, if we relabel the vertices according to γ, then Xi ∈ pa(Xj ,G) if and only if i > j and Xi and Xj are adjacent, and the adjacency matrix of G becomes lower triangular. Assume that the vertices of G are relabeled according to γ and we will consider the binary adjacency matrix A of the resulting graph throughout the rest of this proof. Note that relabelling is equivalent to applying a permutation onto the adjacency matrix, which does not change the rank. Let V0 = {1, 2, . . . , k1 − 1} for some k1 ≥ 2. Then the k1-th row of A, denoted by A(k1, ·), is the first non-zero row vector of A. Letting S = {A(k1, ·)}, then S contains a subset of linearly independent vector(s) of the first k1 rows of A. Suppose that we have visited the first m rows of A and S = {A(k1, ·), A(k2, ·), . . . , A(kt, ·)} contains a subset of linearly independent vector(s) of the first m rows ofA, where k1 ≤ m < d. IfXm+1 Xkt , then we addA(m+1, ·) to S; otherwise, we keep S unchanged. We claim that the vectors in S are still linearly independent after the above step. 
Clearly, if we do not add any new vector, then S contains only linearly independent vectors. To show the other case, note that if l(Xm+1) > l(Xkt) ≥ · · · ≥ l(Xk1), then there is an index i ∈ Vl(Xm+1)−1 such that A(m + 1, i) 6= 0, by the definition of height. Since l(Xm+1) > l(Xkt), we have l(Xkt) ≤ l(Xm+1)− 1 and thus A(kj , i) = 0 for all j = 1, 2, . . . , t. Therefore, A(m+ 1, ·) cannot be linearly represented by {A(kj , ·); j = 1, 2, . . . , t} and the vectors in S are linearly independent. On the other hand, if l(Xm+1) = l(Xkt), then the definition of the equivalence relation ∼ implies that Xm+1 and Xkt are disconnected, which means that Xm+1 and Xkt do not share a common child in Vl(Xm+1)−1. Consequently, there is an index i ∈ Vl(Xm+1)−1 such that A(m + 1, i) 6= 0 but A(kt, i) = 0. Similarly, we can show that A(kj , i) = 0 for all j = 1, 2, . . . , t. Thus, the vectors in S are still linearly independent. After visiting all the rows in A, the number of vectors in S is equal to ∑l(G) s=1 |C(Gs,s−1)| based on the definition of ∼. The second inequality can be shown by noting that C(Gs,s−1) has at least one elements. The proof is complete. B.3 PROOF OF THEOREM 2 Proof. Denote the directed graph by G = (V,E). Edmonds (1967, Theorem 1) showed that max{rank(W );W ∈ WA} is equal to the maximum number of nonzero entries of A, no two of which lie in a common row or column. Therefore, it suffices to show that the latter quantity is equal to the size of the minimum head-tail vertex cover. Let V ′ = V′0 ∪V′1, where V′0 = V × {0} = {(Xi, 0);Xi ∈ V} and V′1 = V × {1} = {(Xi, 1);Xi ∈ V}. Now define a bipartite graph B = (V′ ,E′) where E′ = {(Xi, 0) → (Xj , 1); (Xi, Xj) ∈ E}. Denote byM a set of nonzero entries of A so that no two entries lie in the same row or column. Notice thatM can be viewed as an edge set and no two edges inM share a common endpoint. Thus,M is a matching of B. Conversely, it can be shown by similar arguments that any matching of B corresponds to a set of nonzero entries of A, no two of which lie in a common row or column. Therefore, max{rank(W ),W ∈ WA} equals the size of the maximum matching of B, and further the size of the minimum vertex cover of B according to König’s theorem. Note that any vertex cover of B can be equivalently transformed to a head-tail vertex cover of G, by letting H and T be the subsets of the vertex cover containing all variables in V′0 and of the vertex cover containing all variables in V ′ 1, respectively. Thus, max{rank(W ),W ∈ WA} is equal to the size of the minimum head-tail vertex cover. B.4 PROOF OF THEOREM 3 Proof. We start with the first inequality in Equation (4). Let h1, . . . , hp denote the heights where |Vs| < |ch(Vs)|, and t1, . . . , tq the height where |Vs| > |ch(Vs)|. Let H = ∪pi=1Vhi and T = ∪qi=1Vti . It is straightforward to see that (H,T) is a head-tail vertex cover. Thus, Equation (4) holds according to Theorem 2. The second inequality can be shown similarly and its proof is omitted. For the third inequality, let m = argmax{|Vs| : 0 ≤ s ≤ l(G)}, and define H = ∪i>mVi and T = ∪i<mVi. Then (H,T) is also a head-tail vertex cover and the third inequality follows from Theorem 2, too. B.5 PROOF OF THEOREM 4 Proof. Notice that Theorem 2 holds for all directed graphs. This theorem then follows by treating the skeleton and the moral graph as directed graphs with loops, i.e., an undirected edge Xi −Xj is treated as two directed edges Xi → Xj and Xj → Xi. 
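The results just proved can also be checked numerically. The sketch below, an illustration written for this discussion rather than the authors' code, draws a random DAG support A, computes the length of the longest directed path (the weaker lower bound in Theorem 1), computes the maximum bipartite matching of A (which equals the minimum head-tail cover size and hence the maximum rank, by the argument in B.3), and verifies that continuous random weights attain that maximum, as used in the proof of Theorem 2.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

rng = np.random.default_rng(1)
d = 30
A = np.triu((rng.random((d, d)) < 0.1).astype(int), k=1)   # random DAG support

def longest_path(A):
    # l(G), computed through the vertex heights by dynamic programming.
    h = np.full(A.shape[0], -1)
    def height(i):
        if h[i] < 0:
            children = np.flatnonzero(A[i])
            h[i] = 0 if children.size == 0 else 1 + max(height(j) for j in children)
        return h[i]
    return max(height(i) for i in range(A.shape[0]))

matching = maximum_bipartite_matching(csr_matrix(A), perm_type='column')
max_rank = int((matching != -1).sum())          # = min head-tail cover size (Theorem 2)

W = A * rng.uniform(0.5, 2.0, (d, d)) * rng.choice([-1.0, 1.0], (d, d))
print(longest_path(A), np.linalg.matrix_rank(W), max_rank)
# The three numbers satisfy l(G) <= rank(W) = max_rank with probability one.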
C IMPLEMENTATION DETAILS In this section, we present an algorithm to generate a random DAG with a given rank, a low rank version of NOTEARS and GraN-DAG, and also a description of our experimental settings. C.1 GENERATING RANDOM DAGS In Section 4.3, we briefly discuss the idea of generating a random DAG with a given rank. We now describe the detailed procedure in Algorithm 1. In particular, we aim to generate a random DAG with d nodes, average degree k, and rank r. The first part of Algorithm 1 after initialization is to sample a number N , representing the total number of edges, from a binomial distribution B(d(d − 1)/2, p) Algorithm 1 Generating random DAGs Require: Number of nodes d, average degree k, and rank r. Ensure: A randomly sampled DAG with the number of nodes d, average degree k, and rank r. 1: Set M = empty graph, Mp = ∅, and R = {(i, j); i < j, i, j = 1, 2, ..., d}. 2: Set p = k/(d− 1). 3: Sample a numberN ∼ B(d(d−1)/2, p), whereB(n, p) is a binomial distribution with parameters n and p. 4: if N < r then 5: return FAIL 6: end if 7: Sample r indices from 1, . . . , d− 1 and store them in Mp in descending order. 8: for each i in Mp do 9: Sample an index j from i+ 1 to d. 10: Add edge (i, j) to M and remove (i, j) from R. 11: end for 12: while R 6= ∅ and |M | < N do 13: Sample an edge (i, j) from R and remove it from R. 14: if adding (i, j) to M does not change the size of the minimum head-tail vertex cover of M then 15: Add (i, j) to M . 16: end if 17: end while 18: if |M | < N then 19: return FAIL 20: end if 21: return M where p = k/(d− 1). If N < r, Algorithm 1 would return FAIL since a graph with N < r edges could never have rank r. Otherwise, Algorithm 1 samples an initial graph with r edges and rank r, by choosing r edges such that no two of them share the same head points or the same tail points, i.e., each row and each column of the corresponding adjacency matrix have at most one non-zero entry. Then, Algorithm 1 sequentially samples an edge from R containing all possible edges and checks whether adding this edge to the graph changes the size of the minimum head-tail vertex cover. If not, the edge will be added to the graph; otherwise, it will be removed from R. This is because if a graph G is a super-graph of another graphH, then the size of the minimum head-tail cover of G is no less than that ofH. We repeat the above sampling procedure until there is no edge in R or the number of edges in the resulting graph reaches N . If the latter happens, the algorithm will return the generated graph; otherwise, it will return FAIL. The theoretic basis of Algorithm 1 is Theorem 2. Note that the algorithm may not return a valid graph if the desired number N of edges cannot be reached. This could happen if the input rank is too low while the input average degree is too high. With our experiment settings, we find it rare for Algorithm 1 to fail to return a desired graph. C.2 OPTIMIZATION For this part, we consider a dataset consisting of n i.i.d. observations from P (X) and consequently the expectations in Problems (1) and (2) are replaced by empirical means. Denote the design matrix by X ∈ Rn×d, where each row of X corresponds to an observation and each column represents a variable. Here we use NOTEARS (Zheng et al., 2018) and Gran-DAG (Lachapelle et al., 2020) from each class of methods as examples and will describe their low rank versions in the following. 
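Before moving to those optimization details, a compact Python rendering of Algorithm 1 may be a useful reference. It is a best-effort sketch written for this description rather than the reference implementation: the initial r edges are drawn so that no two of them share a row or a column of the adjacency matrix, the size of the minimum head-tail vertex cover is tracked through a maximum bipartite matching, and the helper names are ours.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def cover_size(A):
    # Minimum head-tail vertex cover size = maximum bipartite matching of A (Theorem 2).
    m = maximum_bipartite_matching(csr_matrix(A), perm_type='column')
    return int((m != -1).sum())

def sample_rank_specified_dag(d, k, r, seed=0):
    rng = np.random.default_rng(seed)
    n_edges = rng.binomial(d * (d - 1) // 2, k / (d - 1))     # cf. Steps 2-3
    if n_edges < r:
        raise RuntimeError('FAIL: fewer edges than the requested rank')
    A = np.zeros((d, d), dtype=int)
    while A.sum() < r:                                        # cf. Steps 7-11 (assumes r is small relative to d)
        i, j = sorted(rng.choice(d, size=2, replace=False))
        if A[i].sum() == 0 and A[:, j].sum() == 0:
            A[i, j] = 1
    remaining = [(i, j) for i in range(d) for j in range(i + 1, d) if A[i, j] == 0]
    for idx in rng.permutation(len(remaining)):               # cf. Steps 12-17
        if A.sum() >= n_edges:
            break
        i, j = remaining[idx]
        A[i, j] = 1
        if cover_size(A) > r:                                 # adding the edge would raise the rank
            A[i, j] = 0
    if A.sum() < n_edges:
        raise RuntimeError('FAIL: the edge budget cannot be reached at this rank')
    W = A * rng.uniform(0.5, 2.0, (d, d)) * rng.choice([-1.0, 1.0], (d, d))
    return W                                                  # rank(W) = r with probability one

In this sketch the vertices are implicitly taken in a topological order (edges always point from a lower to a higher index), which is sufficient for generating the structure; a random permutation of the node labels can be applied afterwards if desired.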
Other gradient-based methods and their optimization procedures can be similarly modified to incorporate the low rank assumption.

Algorithm 2 Optimization procedure for NOTEARS-low-rank
Require: Design matrix X, starting point (U0, V0, α0), rate c ∈ (0, 1), tolerance ε > 0, and threshold w > 0.
Ensure: Locally optimal parameter W*.
1: for t = 1, 2, . . . do
2: (Solve primal) U_{t+1}, V_{t+1} ← argmin_{U,V} L_ρ(U, V, α_t), with ρ chosen such that g(U_{t+1}V_{t+1}^T) < c · g(U_t V_t^T).
3: (Dual ascent) α_{t+1} ← α_t + ρ g(U_{t+1}V_{t+1}^T).
4: if g(U_{t+1}V_{t+1}^T) < ε then
5: Set U* = U_{t+1} and V* = V_{t+1}.
6: break
7: end if
8: end for
9: (Thresholding) Set W* = U*V*^T ◦ 1(|U*V*^T| > w).
10: return W*

C.2.1 NOTEARS WITH LOW RANK ASSUMPTION Following Section 3, the optimization problem in our work can be written as

min_{U,V} (1/(2n)) ‖X − X U V^T‖_F^2, subject to trace(e^{(UV^T) ◦ (UV^T)}) − d = 0, (5)

where U, V ∈ R^{d×r̂} and ◦ is the point-wise product. The constraint in Problem (5) holds if and only if UV^T is a weighted adjacency matrix of a DAG. This problem can then be solved by standard numeric optimization methods such as the augmented Lagrangian method (Bertsekas, 1999). In particular, the augmented Lagrangian is given by

L_ρ(U, V, α) = (1/(2n)) ‖X − X U V^T‖_F^2 + α g(UV^T) + (ρ/2) |g(UV^T)|^2,

where g(UV^T) := trace(e^{(UV^T) ◦ (UV^T)}) − d, α is the Lagrange multiplier, and ρ > 0 is the penalty parameter. The optimization procedure is summarized in Algorithm 2, similar to Zheng et al. (2018, Algorithm 1). Notice that here we do not include the ℓ1 penalty term (except for the first and last experiments in Sections 5.1 and 5.5, respectively), for the following reasons: (1) the thresholding procedure can also control false discoveries; (2) we consider relatively sufficient data for the experiments and NOTEARS with thresholding has been shown in Zheng et al. (2018) to perform consistently well even when the graph is sparse; (3) we are more concerned with relatively large and dense graphs, so a sparsity assumption may be harmful, as shown also by Zheng et al. (2018); (4) the ℓ1 penalty term requires a tuning parameter, which itself is not easy to choose. Zheng et al. (2018) used L-BFGS to solve the unconstrained subproblem in Step 2. We alternatively use the Newton conjugate gradient method that is written in C. Empirically, these two optimizers behave similarly in terms of the estimate performance, while the latter can run much faster thanks to its C implementation. The DAG constraint may not be satisfied exactly using iterative numeric methods, so it is a common practice to pick a small tolerance, followed by a thresholding procedure on the estimated entries to obtain exact DAGs. In our implementation, we choose U0 and V0 to be the first r̂ columns of the d × d identity matrices. Other parameter choices are: α0 = 0, c = 0.25, ε = 10^{-6}, and w = 0.3, similar to those used in related methods on the same datasets (e.g., Zheng et al. (2018); Yu et al. (2019); Zhu et al. (2020)). The chosen threshold w = 0.3 works well in our experiments and in the experiments of related works that use the same data model. In case the thresholded matrix is not a DAG, one may further increase the threshold until the resulting matrix corresponds to a DAG. After obtaining W*, we add an additional pruning step: we use linear regression to refit the dataset based on the structure indicated by W* and then apply another thresholding (with w = 0.3) to the refitted weighted adjacency matrix.
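For concreteness, a simplified, self-contained sketch of Problem (5) and Algorithm 2 is given below. It is not the implementation described above (there is no Newton conjugate gradient solver or C code, and the rule for increasing ρ is reduced to a simple multiplicative update), but it shows how the factorization W = UV^T enters the augmented Lagrangian.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def notears_low_rank(X, r, rho=1.0, alpha=0.0, tol=1e-6, w_thresh=0.3, max_outer=20):
    n, d = X.shape

    def unpack(z):
        return z[:d * r].reshape(d, r), z[d * r:].reshape(d, r)

    def g(W):                                    # acyclicity: zero iff W encodes a DAG
        return np.trace(expm(W * W)) - d

    def lagrangian(z, rho, alpha):
        U, V = unpack(z)
        W = U @ V.T
        loss = np.linalg.norm(X - X @ W, 'fro') ** 2 / (2 * n)
        h = g(W)
        return loss + alpha * h + 0.5 * rho * h ** 2

    z = np.concatenate([np.eye(d, r).ravel(), np.eye(d, r).ravel()])   # U0, V0
    for _ in range(max_outer):
        z = minimize(lagrangian, z, args=(rho, alpha), method='L-BFGS-B').x
        U, V = unpack(z)
        h_val = g(U @ V.T)
        if h_val < tol:
            break
        alpha += rho * h_val                     # dual ascent (Step 3)
        rho *= 10.0                              # simplified penalty update
    W = U @ V.T
    return W * (np.abs(W) > w_thresh)            # thresholding (Step 9)

In the experiments, the rank parameter fed to such a solver is either set to the true rank (Section 5.1), varied over a range (Section 5.3), or chosen on a validation split.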
Both the Newton conjugate gradient optimizer and the pruning technique are also applied to NOTEARS, which not only accelerate the optimization but also improve its performance by obtaining a much lower SHD, particularly for large and dense graphs. See Appendix D.3 for an empirical comparison. C.2.2 GRAN-DAG WITH LOW RANK ASSUMPTION We next consider a low rank version of GraN-DAG. The optimization problem can be written as

min_θ −(1/n) ∑_{l=1}^{n} ∑_{i=1}^{d} log p(X_i^{(l)} | pa(X_i, W(θ))^{(l)}; θ) + λ ‖W(θ)‖_*, subject to trace(e^{W(θ)}) − d = 0, (6)

where X_i^{(l)} is the l-th sample of variable Xi and pa(Xi, W(θ))^{(l)} means the l-th sample of Xi's parents indicated by the adjacency matrix W(θ). Here, θ denotes the parameters of the neural networks, and W(θ), which has non-negative entries, is obtained from the neural network path products. Problem (6) can be solved similarly using the augmented Lagrangian. The procedure is similar to Algorithm 2 and is the same as that used by GraN-DAG, with slight modifications: (1) the subproblem in Step 2 is approximately solved using first-order methods; (2) the thresholding at Step 9 is replaced by a variable selection method proposed by Bühlmann et al. (2014). The same variable selection or pruning method is adopted by two other benchmark methods, CAM and NOTEARS-MLP, in our experiment. Please refer to Lachapelle et al. (2020) and Bühlmann et al. (2014) for further details. C.3 EXPERIMENT SETUP In our experiments, we consider three data models: linear Gaussian SEMs, linear non-Gaussian SEMs (linear exponential SEMs), and non-linear SEMs (Gaussian processes). Given a randomly generated DAG G, the associated SEM is generated as follows: Linear Gaussian A linear Gaussian SEM is given by

Xi = ∑_{Xj ∈ pa(Xi,G)} W(j, i) Xj + εi, i = 1, 2, . . . , d, (7)

where pa(Xi, G) denotes Xi's parents in G and the εi's are jointly independent standard Gaussian noises. In our experiments, the weights W(i, j)'s are uniformly sampled from [−2, −0.5] ∪ [0.5, 2]. Linear Exponential A linear exponential SEM is also generated according to Equation (7), where the εi's are replaced by jointly independent Exp(1) random variables. The weights W(i, j)'s are sampled from [−2, −0.5] ∪ [0.5, 2] uniformly, too. Gaussian Processes We consider the following additive noise model:

Xi = fi(pa(Xi, G)) + εi, i = 1, 2, . . . , d, (8)

where the εi's are jointly independent standard Gaussian noises and the fi's are functions sampled from Gaussian processes with an RBF kernel of bandwidth one. We sample 3,000 observations according to the SEM. The reported results of each setting are summarized over 10 repetitions with different seeds. The experiments are run on a Linux workstation with a 16-core Intel Xeon 3.20GHz CPU and 128GB RAM. C.4 BENCHMARK METHODS Existing causal structure learning methods used in our experiments all have available implementations, as listed below: • GES and PC: an implementation of both methods is available through the py-causal package at https://github.com/bd2kccd/py-causal. We note that the implementation in the py-causal package is based on the CMU TETRAD project, in which the version of GES is indeed the fast GES algorithm proposed by Ramsey et al. (2017). • MMHC (Tsamardinos et al., 2006): an implementation is available in the bnlearn package at https://CRAN.R-project.org/package=bnlearn. • CAM (Peters et al., 2014): its codes are available through the CRAN R package repository at https://cran.r-project.org/web/packages/CAM.
• NOTEARS (Zheng et al., 2018) and NOTEARS-MLP (Zheng et al., 2020): codes are available at the first author’s github repository https://github.com/xunzheng/ notears. • GraN-DAG (Lachapelle et al., 2020): an implementation is available at the first author’s github repository https://github.com/kurowasan/GraN-DAG. Note that for graphs of 50 nodes or more, GraN-DAG performs a preliminary neighborhood selection step to avoid overfitting. • DAG-GNN (Yu et al., 2019): the codes are available at the first author’s github repository https://github.com/fishmoon1234/DAG-GNN. • ICA-LiNGAM (Shimizu et al., 2006): an implementation is available at https://sites. google.com/site/sshimizu06/lingam. In the experiments, we mostly use default hyperparameters unless otherwise stated. D ADDITIONAL EXPERIMENTAL RESULTS D.1 LINEAR SEMS WITH HIGHER RANKS This experiment considers graphs of higher ranks. We use rank-specified random graphs with d = 100 nodes and rank r ∈ {30, 35, 40, 45, 50} on linear Gaussian SEMs. The results are shown in Figures 10a and 10b with degrees 2 and 8, respectively. We observe that when the rank of the underlying graph becomes higher, the advantage of NOTEARS-low-rank over NOTEARS decreases. Nonetheless, NOTEARS-low-rank with rank r = 50 is still comparable to NOTEARS, and has a lower average SHD after removing outlier SHDs using the interquartile range rule. D.2 NOTEARS-LOW-RANK WITH DIFFERENT SAMPLE SIZES We next empirically study the consistency of NOTEARSlow-rank. Again, we use rank-specified random graphs (sampled according to Algorithm 1) with d = 100 nodes, degree k = 8, rank r = 10, and linear Gaussian SEMs. We also assume that the true rank is known. We fix the rank parameter r̂ = 10 and use different sample sizes ranging from 200 to 5, 000. From Figure 11, NOTEARSlow-rank performs reasonably well when the sample size is small and tends to have a better performance with a larger number of samples. D.3 FURTHER PRUNING We compare the empirical results before and after applying the additional pruning technique described in Appendix C.2. The graphs are rank-specified with d ∈ {100, 300} nodes, rank r = d0.1de, and degree k ∈ {2, 4, 6, 8}. We again use linear Gaussian data model with equal noise variances to generate the datasets. The average SHDs are reported in Figure 12. We see that applying an additional pruning step indeed improves the final performance of both NOTEARS and NOTEARS-low-rank, especially on relatively large and dense graphs. D.4 AN EMPIRICAL COMPARISON BETWEEN ICA-LINGAM AND DIRECTLINGAM To our best knowledge, there are two Python implementations of ICA-LiNGAM (Shimizu et al., 2006) released by the authors, available at https://sites.google.com/site/sshimizu06/ lingam and https://github.com/cdt15/lingam, respectively, where the latter is a Python package containing several LiNGAM related methods. In the following, we use ICALiNGAM-pre and ICA-LiNGAM-cdt to denote these two implementations, respectively. For DirectLiNGAM (Shimizu et al., 2011), we only find a Python implementation available at the previously mentioned Python package containing ICA-LiNGAM-cdt. Here we run DirectLiNGAM, ICA-LiNGAM-cdt, and ICA-LiNGAM-pre on linear exponential data models with 100-node and rank-10 graphs. The mean SHDs are reported below in Table 1. 
Based on this experimental result as well as our past experience, DirectLiNGAM usually has a (slightly) better performance than ICA-LiNGAM-cdt, while ICA-LiNGAM-pre has a noticeably (if not much) better performance for relatively dense and large graphs. We are more concerned with relatively large and dense graphs and hence report the results achieved by ICA-LiNGAM-pre in the main paper. D.5 DETAILED EMPIRICAL RESULTS FOR EXPERIMENT 1 WITH LINEAR GAUSSIAN SEMS Table 2 reports detailed results including true positive rates (TPRs), false discovery rates (FDRs), structural Hamming distances (SHDs), and running time on rank-specified graphs with linear Gaussian data model. Here the true rank is assumed to be known and is used as the rank parameter in NOTEARSlow-rank. We also test (fast) GES, MMHC, and PC. However, PC is too slow since some nodes may have a high in-degree (i.e., hubs) in large, dense, and low rank graphs. For the same reason, the skeleton may not be correctly estimated by MMHC, which has a similar performance to that of GES. Therefore, we only include the results of GES for comparison. We treat GES favorably by regarding undirected edges as true positives if the true graph has a directed edge in place of the undirected ones. D.6 DETAILED RESULTS FOR EXPERIMENT 4 WITH NON-LINEAR SEMS Table 3 reports the detailed SHDs for each method in Section 5.4. We also mark in bold the best results from methods with or without low rank modifications.
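To make the data models of Appendix C.3 concrete, a minimal sketch of sampling from the linear SEM in Equation (7) is given below; it assumes the nodes are already in a topological order, so that the weighted adjacency matrix is strictly upper triangular, and it is an illustrative helper rather than the code used to produce the reported numbers. Replacing the Gaussian noise with rng.exponential(1.0, n) gives the linear exponential model.

import numpy as np

def simulate_linear_sem(W, n, seed=0):
    # X_i = sum_j W[j, i] X_j + eps_i, with W strictly upper triangular.
    rng = np.random.default_rng(seed)
    d = W.shape[0]
    X = np.zeros((n, d))
    for i in range(d):                    # columns are filled in topological order
        X[:, i] = X @ W[:, i] + rng.standard_normal(n)
    return X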
1. What is the main contribution of the paper regarding low-rank DAGs? 2. What are the strengths and weaknesses of the proposed method for learning SEMs under a low-rank assumption? 3. How does the reviewer assess the relevance and usefulness of the obtained bounds on the rank of DAGs? 4. Do you have any concerns or suggestions regarding the experimental setup and results? 5. How does the reviewer evaluate the overall quality and impact of the paper?
Review
Review Summary The paper develops several useful lower and upper bounds on the rank of DAGs — specifically minimum and maximum rank of all weighted matrices that induce the same DAG — in terms of various graphical properties like head-tail vertex cover, number of non-root and non-leaf vertices. The paper also bounds the rank of DAG in terms of the rank of its skeleton and moral graph. The paper proposes learning low-rank linear or non-linear structural equation models (SEMs) by adding simple norm constraints or matrix factorization to existing SEM learning methods. Through experiments on synthetic and real world data the authors demonstrate that when the underlying SEM is low-rank, exploiting this low-rank assumption in the learning process can lead to better performance. The authors also demonstrate that the rank can be estimated using the obtained bounds from a validation set. Strengths The main contribution of the paper is a strong justification for learning SEMs under a low-rank assumption by showing that graphs with many hubs are low-rank. Existing theoretical results for learning SEMs show a polynomial dependence of the sample complexity on the maximum degree of the true SEM. Therefore, learning SEMs subject to rank constraints rather than sparsity constraints can be useful for graphs with hubs. The bounds on the rank of DAGs are generally useful beyond learning SEMs. Weakness The paper does not propose any novel algorithms for learning low-rank DAGs, other than merely augmenting existing methods with nuclear norm constraint or using matrix factorization. The method for estimating the rank from the validation set is crude and computationally expensive. Questions to address in rebuttal In Figure 2, is degree (x-axis) the maximum degree of a node graphs ? Figure 2 shows that the rank increases with the degree and that the rank is always larger than the degree. Therefore, even for graphs with hubs learning SEMs subject to sparsity constraints might still give better results than learning SEMs subject to rank constraints? More details are needed on how the rank is estimated from the validation set with a complete algorithm. Post-rebuttal comments Hello everyone, I have read the author's response and I am leaning towards rejection. The paper can be divided into two halves. The first half where the authors obtain bounds on ranks of DAGs is the main contribution of the paper and is clearly interesting. The second half of the paper tries to shoehorn these bounds into an algorithm for learning causal DAGs from observational data which is disappointing and is clearly below standard for the following reasons: The bounds depend on the underlying DAG which is unknown and therefore cannot be estimated from samples. Therefore the authors propose using "structural priors" to obtain these bounds. The authors don't mention where they get these structural priors from. Furthermore the bounds are only useful to restrict the hyper-parameter search space in the matrix factorization approach which is applicable to linear SEMs. These bounds can only be used "qualitatively" to guide selection of regularization penalty in the nuclear norm approach which is necessary for non-linear SEM methods. 
The theoretical results would still be useful if the authors could adequately demonstrate that for certain family of graphs the maximum degree can be high while the rank can be low therefore learning DAGs subject to sparsity constraints (whose sample complexity depend on the maximum degree) can perform worse than learning DAGs with rank constraints. However, this is not clear since in experiments the authors only show the SHD as a function of "average degree" and not "maximum degree". Figure 2 again compares rank against average degree and not maximum degree. The experiments are only performed in the low-dimensional regime at a fixed sample size (3000 samples and 300 nodes).
ICLR
Title On Low Rank Directed Acyclic Graphs and Causal Structure Learning Abstract Despite several important advances in recent years, learning causal structures represented by directed acyclic graphs (DAGs) remains a challenging task in high dimensional settings when the graphs to be learned are not sparse. In this paper, we propose to exploit a low rank assumption regarding the (weighted) adjacency matrix of a DAG causal model to mitigate this problem. We demonstrate how to adapt existing methods for causal structure learning to take advantage of this assumption and establish several useful results relating interpretable graphical conditions to the low rank assumption. In particular, we show that the maximum rank is highly related to hubs, suggesting that scale-free networks which are frequently encountered in real applications tend to be low rank. We also provide empirical evidence for the utility of our low rank adaptations, especially on relatively large and dense graphs. Not only do they outperform existing algorithms when the low rank condition is satisfied, the performance is also competitive even though the rank of the underlying DAG may not be as low as is assumed. 1 INTRODUCTION An important goal in many sciences is to discover the underlying causal structures in various domains, both for the purpose of explaining and understanding phenomena, and for the purpose of predicting effects of interventions (Pearl, 2009). Due to the relative abundance of passively observed data as opposed to experimental data, how to learn causal structures from purely observational data has been vigorously investigated (Peters et al., 2017; Spirtes et al., 2000). In this context, causal structures are usually represented by directed acyclic graphs (DAGs) over a set of random variables. For this task, existing methods can be roughly categorized into two classes: constraint- and scorebased. The former use statistical tests to extract from data a number of constraints in the form of conditional (in)dependence and seek to identify the class of causal structures compatible with those constraints (Meek, 1995; Spirtes et al., 2000; Zhang, 2008). The latter employ a score function to evaluate candidate causal structures relative to data and seek to locate the causal structure (or a class of causal structures) with the optimal score. Due to the combinatorial nature of the acyclicity constraint (Chickering, 1996; He et al., 2015), most score-based methods rely on local heuristics to perform the search. A particular example is the greedy equivalence search (GES) algorithm (Chickering, 2002) that can find an optimal solution with infinite data and proper model assumptions. Recently, Zheng et al. (2018) introduced a smooth acyclicity constraint w.r.t. graph adjacency matrix, and the task on linear data models was then formulated as a continuous optimization problem with least-squares loss. This change of perspective allows using deep learning techniques to model causal mechanisms and has already given rise to several new algorithms for causal structure learning with non-linear data, e.g., Yu et al. (2019); Ng et al. (2019b;a); Ke et al. (2019); Lachapelle et al. (2020); Zheng et al. (2020), among others. While these new algorithms represent the current state of the art in many settings, their performance generally degrades when the target DAG becomes large and relatively dense, as seen from the empirical results reported in the referred works and also in this paper. 
This issue is of course a challenge to other approaches. Ramsey et al. (2017) proposed fast GES for impressively large problems, but it works reasonably well only when the large structure is very sparse. The max-min hill-climbing (MMHC) (Tsamardinos et al., 2006) relies on local learning methods that often do not perform well when the target node has a large neighborhood. How to improve the performance on relatively large and dense DAGs is therefore an important question. In this work, we study the potential of exploiting a kind of low rank assumption on the DAG structure to help address this problem. The rank of a graph that concerns us is the algebraic rank of its associated weighted adjacency matrix. Similar to the role of a sparsity assumption on graph structures, we treat the low rank assumption as methodological and it is not restricted to a particular DAG learning method. However, unlike sparsity assumption, it is much less apparent when DAGs tend to be low rank and how low rank DAGs behave. Thus, besides demonstrating the utility of exploiting a low rank assumption in causal structure learning, another important goal is to improve our understanding of the low rank assumption by relating the rank of a graph to its graphical structure. Such a result also enables us to characterize the rank of a graph from several structural priors and helps to choose rank related hyperparameters for the learning algorithm. Our contributions are summarized as follows: • We show how to adapt existing causal structure learning methods to take advantage of the low rank assumption, and provide a strategy to select rank related hyperparameters utilizing the lower and upper bounds on the true rank, if they are available. • To improve our understanding of low rank DAGs, we establish some lower bounds on the rank of a DAG in terms of simple graphical conditions, which imply necessary conditions for DAGs to be low rank. • We also show that the maximum possible rank of weighted adjacency matrices associated with a directed graph is highly related to hubs in the graph, which suggests that scale-free networks tend to be low rank. From this result, we derive several graphical conditions to bound the rank of a DAG from above, providing simple sufficient conditions for low rank. • Empirically, we demonstrate that the low rank adaptations are indeed useful. Not only do they outperform the original algorithms when the low rank condition is satisfied, the performance is also very competitive even when the true rank is not as low as is assumed. Related Work The low rank assumption is frequently adopted in graph-based applications (Smith et al., 2012; Zhou et al., 2013; Yao & Kwok, 2016; Frot et al., 2019), matrix completion and factorization (Recht, 2011; Koltchinskii et al., 2011; Cao et al., 2015; Davenport & Romberg, 2016), network sciences (Hsieh et al., 2012; Huang et al., 2013; Zhang et al., 2017) and so on, but to our best knowledge, has not been used on the DAG structures in the context of learning causal DAGs. We notice two works Barik & Honorio (2019); Tichavskỳ & Vomlel (2018) that assume low rank conditional probability tables in learning Bayesian networks, which are different from ours. Also related are existing works that studied the rank of real weighted matrices described by a given simple directed/undirected graph. However, most works only considered the zero-nonzero pattern of off-diagonal entries (see, e.g., Fallat & Hogben (2007); Hogben (2010); Mitchell et al. 
(2010)), whereas we also take into account the diagonal entries. This difference is crucial: if one only considers the off-diagonal entries, then the maximum rank over all possible weighted matrices is trivial and is always equal to the number of vertices. Consequently, many works focus on the minimum rank of a given graph, but to characterize exactly the minimum rank remains open, except for some special graph structures like trees (Hogben, 2010). Apart from these works, Edmonds (1967) studied algebraically the maximum rank for matrices with a common zero-nonzero pattern. In Section 4, we use this result to relate the maximum possible rank to a more interpretable graphical condition, which further implies several structural conditions of DAGs that may be easier to obtain in practice. 2 PRELIMINARIES 2.1 GRAPH TERMINOLOGY A graph G is defined as a pair (V,E), where V = {X1, X2, · · · , Xd} is the vertex set and E ⊂ V2 denotes the edge set. We are particularly interested in directed (acyclic) graphs in the context of causal structure learning. For any S ⊂ V, we use pa(S,G), ch(S,G), and adj(S,G) to denote the union of all parents, children, and adjacent vertices of the nodes of S in G, respectively. A graph is called weighted if every edge in the graph is associated with a non-zero value. We will work with weighted graphs and treat unweighted graphs as a special case where the edge weights are set to 1. Weighted graphs can be treated algebraically via weighted adjacency matrices. Specifically, the weighted adjacency matrix of a weighted graph G is a matrix W ∈ Rd×d, where W (i, j) is the weight of edge Xi → Xj and W (i, j) 6= 0 if and only if Xi → Xj exists in G. The binary adjacency matrix A ∈ {0, 1}d×d is such that A(i, j) = 1 if Xi → Xj in G and A(i, j) = 0 otherwise. The rank of a weighted graph is defined as the rank of the associated weighted adjacency matrix. 2.2 CAUSAL STRUCTURE LEARNING AND RECENT GRADIENT-BASED METHODS A commonly used model in causal structure learning is the structural equation model (SEM) that describes data generating procedure. In a slight abuse of notation, we also use Xi’s to denote random variables associated with the nodes in a graph G. Assuming G being a DAG, then the SEM is given by Xi = fi (pa(Xi,G), i) , i = 1, 2, . . . , d, where fi is a deterministic function and i’s are jointly independent noises. The SEM induces a marginal distribution P (X) over X = [X1, X2, · · · , Xd]T , and G and P (X) are said to form a causal Bayesian network (Pearl, 2009; Spirtes et al., 2000). The problem of causal structure learning is to infer the underlying causal DAG G based on the marginal distribution P (X), or more practically, an empirical version consisting of a number of i.i.d. observations from P (X). We next briefly review recently developed gradient-based methods that rely on a smooth characterization of acyclicity of directed graphs. These methods aim to find a DAG that optimizes a score function and can be categorized into two classes. 
The first class of methods explicitly associates the target causal model with a weighted adjacency matrix W and then estimates W by solving optimization problems in the following form:

min_{W,φ} E_{X∼P(X)} S(X, h(X; W, φ)), subject to trace(e^{W◦W}) − d = 0, (1)

where h : R^d → R^d is a model function parameterized by W (and possibly other parameters φ) that aims to reconstruct X, S(·, ·) denotes a score function between the true and reconstructed variables, the notation ◦ denotes the element-wise product, and e^M is the matrix exponential of a square matrix M. The constraint was proposed by Zheng et al. (2018); it is smooth and holds if and only if W indicates a DAG. Methods in this class include: NOTEARS (Zheng et al., 2018), which targets linear models, with h(X; W, φ) = W^T X and S(·, ·) being the Frobenius norm or, equivalently, the least-squares loss; and DAG-GNN (Yu et al., 2019) and the graph autoencoder approach (Ng et al., 2019b), where neural networks are used for the function h with φ being the weights of the neural networks, and the score function can be chosen as the evidence lower bound (Kingma & Welling, 2013). A sparsity-inducing term may be further added when the causal graph is assumed to be sparse. These objectives are equivalent to or are variants of some well studied score functions like the penalized maximum likelihood (Chickering, 2002; Van de Geer et al., 2013; Loh & Bühlmann, 2014). The second class uses certain functions, with parameter θ, to construct a weighted adjacency matrix W(θ) (or a binary one A(θ)) to represent the causal structure. These methods can be summarized as

min_{θ,φ} E_{X∼P(X)} S(X, h(X; W(θ), φ)), subject to trace(e^{W(θ)◦W(θ)}) − d = 0. (2)

For example, GraN-DAG (Lachapelle et al., 2020) and NOTEARS-MLP (Zheng et al., 2020) respectively use neural network path products and partial derivatives between variables to construct W(θ). The binary matrix A(θ) can be obtained by sampling according to some distributions with learnable parameters, as used by Kalainathan et al. (2018); Ke et al. (2019); Ng et al. (2019a); Zhu et al. (2020). Before ending this section, we remark that while the gradient-based methods intend to learn a causal DAG, the learned DAG may not be identical to the underlying one for general SEMs due to Markov equivalence (Spirtes et al., 2000; Peters et al., 2017). For such cases, one may convert the obtained DAG to its corresponding Completed Partially Directed Acyclic Graph (CPDAG) as the estimate. Nevertheless, if the SEM is identifiable and a proper score function is used, then the exact solution to the optimization problem is consistent, i.e., the same as the true graph with probability 1; see, e.g., Shimizu et al. (2006); Peters & Bühlmann (2013); Peters et al. (2014); Zhang & Hyvärinen (2009). For further details and other technical issues like parameter optimization of the gradient-based methods, we refer the reader to the cited works and references therein. 3 EXPLOITING LOW RANK ASSUMPTION IN CAUSAL STRUCTURE LEARNING This section shows how to adapt existing gradient-based methods to take advantage of the low rank assumption, by providing a way for each class to utilize this assumption using techniques from the matrix completion literature. We remark that our adaptations with the low rank assumption are not restricted to a particular learning algorithm; other DAG learning methods may potentially combine one of the proposed modifications for learning low rank causal graphs, too.
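The smooth acyclicity constraint shared by (1) and (2), and kept unchanged by the low rank adaptations below, can be made concrete with a few lines of code. The function is a direct transcription of trace(e^{W◦W}) − d; the example matrices are hypothetical.

import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)     # zero exactly when W encodes a DAG

W_dag = np.array([[0.0, 1.2, 0.0],
                  [0.0, 0.0, -0.7],
                  [0.0, 0.0, 0.0]])             # X1 -> X2 -> X3, acyclic
W_cyc = W_dag.copy()
W_cyc[2, 0] = 0.5                               # adding X3 -> X1 creates a cycle
print(acyclicity(W_dag))                        # ~0
print(acyclicity(W_cyc))                        # strictly positive

Gradient-based methods drive this quantity to zero with an augmented Lagrangian or a penalty, which is what allows the search over DAGs to be carried out by continuous optimization.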
Matrix Factorization Since the weighted adjacency matrix W is explicitly optimized in the first class of methods, we can then apply the matrix factorization technique. Specifically, with an estimate r̂ for the graph rank, we can factorize W as W = UV T with U, V ∈ Rd×r̂. Problem (1) is then to optimize U and V that minimizes the score function under the DAG constraint, and has the same solution W (obtained from the product UV T ) as the original one if r̂ is greater than or equal to the true rank. Furthermore, if r̂ d, we have a much reduced number of parameters to optimize. Nuclear Norm For the second class of methods, the adjacency matrix W (θ) is not an explicit parameter to be optimized. In such a case, we can adopt a commonly used technique to add a nuclear norm term λ‖W (θ)‖∗, with λ > 0 being a tuning parameter, to the objective to induce low-rankness. The optimization procedures in these recent structure learning methods can directly incorporate the two adaptations as they are all gradient-based, though some extra care needs to be taken. Appendix C provides a detailed description of the optimization procedure and our implementation. The second approach is also feasible for the first class of methods, but we find that it does not work as well as the matrix factorization approach, possibly due to the singular value decomposition to compute the (sub-)gradient w.r.t. W at each optimization step. An acute reader may have noticed that we assumed a proper rank estimate r̂ or a proper penalty parameter λ. Yet knowing exactly the rank of the graph to be learned can be difficult in practice. Similar to the sparsity assumption, one may determine the hyperparameters r̂ and λ assisted by a validation dataset (or by cross-validation if the observed dataset is not sufficiently large). Alternatively, we can try different choices of the hyperparameters and then apply traditional score-based method where the search space is restricted to the resulting DAGs. However, since we are more concerned with relatively large and dense problems, the possible ranks may be too many to choose. As such, a lower bound rl and an upper bound ru on the graph rank would be beneficial—we need only consider ranks in [rl, ru] in the matrix factorization method, while the bounds are still useful by providing qualitative information for the nuclear norm approach: the lower an upper bound, the higher the tuning parameter λ should be chosen. Moreover, a lower bound can also justify the low rank assumption, i.e., if the lower bound is high, then the low rank assumption is likely to fail to hold. 4 GRAPHICAL BOUNDS ON RANKS Obtaining exact algebraic information of a DAG such as its rank and eigenvalues may be infeasible in practice, because it may require a full knowledge of the graph to be learned. On the other hand, structural information, such as graph connectivity, distributions of in-degrees and out-degrees, and an estimate of number of hubs, is sometimes more accessible. As such, this section is devoted to relating the rank of a graph to more easily interpretable graphical conditions, for the sake of a better understanding of what kinds of DAGs tend to satisfy the low rank assumption and for lower and upper bounds on the graph rank from certain structural priors. 4.1 PROBLEM SETTING Consider a DAG G = (V,E) with weighted adjacency matrix W and binary adjacency matrix A. We aim to seek upper and lower bounds on rank(W ) using only the graphical structure. 
Specifically, we focus on the weighted adjacency matrices with the same binary adjacency matrix A, i.e.,WA = {W ∈ Rd×d ; sign(|W |) = A}, where sign(·) and | · | are point-wise sign and absolute value functions, respectively. Notice that there exist trivial upper bound d− 1 and lower bound 0 for any DAG, but they are generally too loose for our purpose. In the following, we investigate the maximum rank max{rank(W );W ∈ WA} and minimum rank min{rank(W );W ∈ WA} to find tighter upper and lower bounds for any W ∈ WA. Before introducing two useful graph concepts, we comment that low rank DAGs are not necessarily sparse and vice versa; see a discussion in Appendix A. Definition 1 (Height). Given a DAG G = (V,E) and a vertex Xi ∈ V, the height of Xi, denoted by l(Xi), is defined as the length of the longest directed path starting from Xi. The height of G, denoted by l(G), is the length of the longest path in G. Definition 2 (Head-tail vertex cover). Let G = (V,E) be a directed graph and H,T be two subsets of V. (H,T) is called a head-tail vertex cover of G if every edge in G has its head vertex in H or its tail vertex in T. The size of a head-tail vertex cover (H,T) is defined as |H|+ |T|. As an example, Figure 1c is a head-tail vertex cover of G in Figure 1a, where H = {X2, X4, X8} (red nodes) and T = {X8, X9, X10} (blue nodes). The size of this vertex cover is 6. 4.2 LOWER BOUNDS We first study lower bounds on the rank of a weighted DAG. Define V−1 = ∅ and Vs = {Xi; l(Xi) = s} for s = 0, 1, . . . , l(G). Denote by Gs,s−1 the induced subgraph of G over Vs∪Vs−1. Let C(Gs,s−1) be the set of non-singleton connected components of Gs,s−1 and |C(Gs,s−1)| the cardinality. We have the following lower bounds. Theorem 1. Let G be a DAG with binary adjacency matrix A. Then min{rank(W ) ; W ∈ WA} ≥ ∑l(G) s=1 |C(Gs,s−1)| ≥ l(G). (3) All the proofs in this paper are provided in Appendix B. Theorem 1 shows that rank(W ) is greater than or equal to the sum of the number of non-singleton connected components in each Gs,s−1. As Gs,s−1 has at least one non-singleton connected component, we obtain the second inequality. In other words, the rank of a weighted DAG is at least as high as the length of the longest directed path. As an example, consider the graph shown in Figure 1. One can verify that min{rank(W );W ∈ WA} = 6, |C(G1,0)| = 2, |C(G2,1)| = 1, |C(G3,2)| = 1, and l(G) = 3. Thus, we have min{rank(W );W ∈ WA} = 6 > 2 + 1 + 1 = 4 > 3. We remark that the bounds in Theorem 1 may be loose in some cases. To characterize the minimum rank exactly is an on-going research problem (Hogben, 2010). 4.3 UPPER BOUNDS We turn to the more important issue for our purpose, regarding upper bounds on rank(W ). The next theorem shows that max{rank(W );W ∈ WA} can be characterized exactly in graphical terms. Theorem 2. Let G be a directed graph with binary adjacency matrix A. Then max{rank(W );W ∈ WA} is equal to the minimum size of the head-tail vertex cover of G, that is, max{rank(W ) ; W ∈ WA} = min{|H|+ |T| ; (H,T) is a head-tail vertex cover of G}. We comment that Theorem 2 holds for all directed graphs (not only DAGs), which may be of independent interest to other applications. A head-tail vertex cover of minimum size is called a minimum head-tail vertex cover, which in general is not unique. For a head-tail vertex cover (H,T), the vertices in H cover all the edges pointing towards these vertices while the vertices in T cover the edges pointing away. 
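As a quick illustration of Theorem 2, the quantity on its right-hand side can be computed with a maximum bipartite matching (see the proof in Appendix B.3, which relies on König's theorem). The snippet below, written for this discussion, uses a star-shaped toy DAG in which X1 points to every other node: the single tail X1 covers all edges, so the maximum attainable rank is 1 regardless of the weights.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def max_rank_over_weights(A):
    # Maximum matching size of the bipartite graph with biadjacency matrix A
    # = minimum head-tail vertex cover size (Theorem 2).
    match = maximum_bipartite_matching(csr_matrix(A), perm_type='column')
    return int((match != -1).sum())

A_star = np.zeros((5, 5), dtype=int)
A_star[0, 1:] = 1                                # X1 -> X2, ..., X1 -> X5
print(max_rank_over_weights(A_star))             # 1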
A head-tail cover of a relatively small size then indicates the presence of hubs, that is, vertices with relatively high in-degrees or out-degrees. Therefore, Theorem 2 suggests that the maximum rank of a weighted DAG is highly related to the presence of hubs: a DAG with many hubs tends to have low rank. Intuitively, a hub of high in-degree (out-degree) is a common effect (cause) of a number of direct causes (effect variables), comprising many V-structures (inverted V-structures). For example, in Figure 1a, X8 is a hub of V-structures and X9 is a hub of inverted V-structures. Such features are fairly common in real graph structures. Appendix A presents a real network, called pathfinder, which describes the causal relations among 109 variables (Heckerman et al., 1992) with the center node being the parent of a large number of other nodes. The famous scale-free (SF) graphs also tend to have hubs. A scale-free graph is one whose distribution of degree k follows a power law: P(k) ∼ k^{−γ}, where γ is the power parameter typically within [2, 3] and P(k) denotes the fraction of nodes with degree k (Nikolova & Aluru, 2012). It is observed that many real-world networks are scale-free, and some of them, such as gene regulatory networks, protein networks, and financial system network, may be viewed as causal networks (Guelzim et al., 2002; Barabasi & Oltvai, 2004; Hartemink, 2005; Eguı́luz et al., 2005; Gao & en Ren, 2013; Ramsey et al., 2017). In particular, Barabasi & Oltvai (2004) claimed that most protein networks, some of which are directed and acyclic due to irreversible reactions, are the results of growth processes and preferential attachments, probably due to the gene duplication. Empirically, the ranks of scale-free graphs are relatively low, especially in comparison to Erdös-Rényi (ER) random graphs (Mihail & Papadimitriou, 2002). Figure 2 provides a simulated example where γ is chosen from {2, 3} and each reported value is over 100 random runs. As the graph becomes denser, the graph rank also increases. However, for scale-free graphs with a relatively large γ, the increase of their ranks is much slower than that of Erdös-Rényi graphs; indeed, their ranks tend to stay fairly low even when the graph degree is large. Theorem 2 can also be used to generate a low rank graph, or more precisely, a random DAG with a given rank r and a properly specified graph degree. Here we briefly describe the idea and leave the detailed algorithm to Appendix C.1: first generate a graph with r edges and rank r; a random edge is sampled without replacement and would be added to the graph, if adding this edge does not increase the size of the minimum head-tail vertex cover; repeat the previous step until the pre-specified degree is reached or no edge could be added to the graph; finally, assign the edge weights randomly according to a continuous distribution and the weighted graph will have rank r with high probability. The next two theorems report some looser but simpler upper bounds on rank(W). Theorem 3. Let G be a DAG with binary adjacency matrix A, and denote the set of vertices with at least one parent by Vch and those with at least one child by Vpa. Then we have
max{rank(W) ; W ∈ WA} ≤ ∑_{s=1}^{l(G)} min(|Vs|, |ch(Vs)|) ≤ |Vpa|,
max{rank(W) ; W ∈ WA} ≤ ∑_{s=0}^{l(G)−1} min(|Vs|, |pa(Vs)|) ≤ |Vch|,
max{rank(W) ; W ∈ WA} ≤ |V| − max{|Vs| ; 0 ≤ s ≤ l(G)}. (4)
Since Vch and Vpa are the non-root and the non-leaf vertices, respectively, the first two inequalities of (4) indicate that the maximum rank is bounded from above by the number of non-root vertices and also by the number of non-leaf vertices. The last inequality of (4) is a generalization of the first two, which implies that the rank is likely to be low if most vertices have the same height. Theorem 4. Let G be a DAG with binary adjacency matrix A. Denote by skeleton(A) and moral(A) the binary adjacency matrices of the skeleton and moral graph of G, respectively. Then we have
max{rank(W) ; W ∈ WA} ≤ max{rank(W) ; sign(|W|) = skeleton(A)} ≤ max{rank(W) ; sign(|W|) = moral(A)}.
The skeleton of a DAG is the undirected graph obtained by removing all the arrowheads, and the moral graph is the undirected graph where two vertices are adjacent if they are adjacent or if they share a common child in the DAG. This result is useful when the skeleton or the moral graph can be accurately estimated and the corresponding rank is low. In practice, we may use all available structural priors to obtain upper bounds on the underlying rank and choose the lowest one as our estimate. 5 EXPERIMENTS This section reports empirical results of the low rank adaptations of existing methods, compared with their original versions. We choose NOTEARS (Zheng et al., 2018) for linear SEMs by adopting the matrix factorization approach, denoted as NOTEARS-low-rank, and use the nuclear norm approach in combination with GraN-DAG (Lachapelle et al., 2020) for a non-linear data model. Again we remark that the two methods are only demonstrations of the utility of the low rank assumption, which can potentially be combined with other methods as well. For more information, we also include several benchmark methods: fast GES (Ramsey et al., 2017), PC (Spirtes et al., 2000), MMHC (Tsamardinos et al., 2006), and ICA-LiNGAM (Shimizu et al., 2006), which is specifically designed for non-Gaussian noises, for linear SEMs;1 and DAG-GNN (Yu et al., 2019), NOTEARS-MLP (Zheng et al., 2020), and CAM (Bühlmann et al., 2014) for the non-linear case. (Footnote 1: Here we choose ICA-LiNGAM, rather than alternative LiNGAM methods like DirectLiNGAM (Shimizu et al., 2011), based on our empirical observation. Specifically, an implementation of ICA-LiNGAM has a noticeably better performance than DirectLiNGAM for relatively dense graphs. Please find a detailed discussion and an empirical comparison in Appendix D.4.) Their implementations are described in Appendix C. We consider randomly sampled DAGs with specified ranks (the generating procedure was described in Section 4.3 and is given as Algorithm 1 in Appendix C.1), scale-free graphs, and a real network structure. For linear SEMs, the weights are uniformly sampled from [−2,−0.5] ∪ [0.5, 2] and the noises are either standard Gaussian or standard exponential. For non-linear SEMs, we use an additive Gaussian noise model with functions sampled from Gaussian processes with RBF kernel of bandwidth one. These data models are known to be identifiable (Shimizu et al., 2006; Peters & Bühlmann, 2013; Peters et al., 2014). From each SEM, we then generate n = 3,000 observations. We repeat ten times over different seeds for each experiment setting. Detailed information about the setup can be found in Appendix C.3. Below we mainly report the structural Hamming distance (SHD), which takes into account both false positives and false negatives; a smaller SHD indicates a better estimate. 5.1 LINEAR SEMS WITH RANK-SPECIFIED GRAPHS We first consider linear SEMs on rank-specified graphs, with number of nodes d ∈ {100, 300}, rank r = ⌈0.1d⌉, and average degree k ∈ {2, 4, 6, 8}. The true rank is assumed to be known and is used as the rank parameter r̂ in NOTEARS-low-rank.
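Since all results below are reported in terms of SHD, a minimal sketch of how this metric can be computed is included here for reference. This is ours and follows one common convention (missing, extra, and reversed edges each count as one error); the exact convention behind the reported numbers may differ slightly.

```python
import numpy as np

def shd(B_true, B_est):
    """Structural Hamming distance between two binary adjacency matrices.

    One common convention: count missing, extra, and reversed edges, with a
    reversed edge counted once. Not necessarily the implementation used for
    the reported numbers.
    """
    B_true, B_est = np.asarray(B_true), np.asarray(B_est)
    diff = np.abs(B_true - B_est)
    # A reversed edge shows up twice in `diff` (once per direction); count it once.
    n_reversed = int(np.sum((diff * diff.T)[np.triu_indices_from(diff, k=1)]))
    return int(diff.sum()) - n_reversed

# Example: true graph X1 -> X2 -> X3; estimate has X2 -> X1 (reversed),
# X1 -> X3 (extra), and misses X2 -> X3, giving SHD = 3.
B_true = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
B_est = np.array([[0, 0, 1], [1, 0, 0], [0, 0, 0]])
print(shd(B_true, B_est))   # 3
```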
For a better visualization, Figure 3 only reports the average SHDs, while the true positive rate, false discovery rate, and running time are left to Appendix D. We also show the results after using the interquartile range rule to remove outlier SHDs. We observe that the low rank assumption can greatly improve the performance of NOTEARS, reducing the SHDs by at least half. For this data model, the fast GES has much higher SHDs (see also Appendix D). PC is too slow (for example, it did not finish in 16 hours for a dataset with 100 nodes and degree 6), because some nodes may have a high in-degree. For the same reason, the skeleton may not be well estimated by MMHC; its performance is slightly worse than the fast GES and is not reported. For more information regarding the role of sparsity, we include NOTEARS with an ℓ1 penalty, named NOTEARS-L1. Here the ℓ1 penalty weight is chosen from {0.01, 0.02, 0.05, 0.1, 0.2, 0.5}. Instead of relying on an additional validation dataset, we treat NOTEARS-L1 favorably by picking the lowest SHD obtained from different weights for each dataset. As seen from Figure 3a, NOTEARS-L1 is slightly better than NOTEARS when the average degree is 2, but is largely outperformed with relatively dense graphs. This observation was also reported in Zheng et al. (2018). We conjecture that it is because our experiments consider relatively sufficient data and dense graphs. Moreover, the thresholding procedure controls false discoveries and may have a similar effect to the ℓ1 penalty. Appendix D.1 studies graphs with higher ranks, where it is observed that the advantage of NOTEARS-low-rank over NOTEARS decreases when the rank of the underlying DAG increases. Nevertheless, NOTEARS-low-rank is still competitive when the true rank is ⌈d/2⌉ and the factorized matrix has the same number of parameters as NOTEARS. We also conduct an empirical analysis with different sample sizes in Appendix D.2, which shows that NOTEARS-low-rank performs reasonably well when the sample size is small and tends to have a better performance with a larger number of samples. Due to space limitations, please find further details in the appendix. 5.2 LINEAR SEMS WITH SCALE-FREE GRAPHS We next consider scale-free graphs with d = 100 nodes, average degree k = 6, and power γ = 2.5. For this experiment, the minimum, maximum, and mean ranks of the generated graphs are 14, 24, and 18.7, respectively. Here we choose the rank parameter r̂ from {20, 30, 40} for NOTEARS-low-rank. As seen from Figure 4, NOTEARS-low-rank with rank parameter r̂ = 20 performs the best, even though there are graphs with ranks greater than 20. 5.3 SENSITIVITY OF RANK PARAMETERS AND VALIDATION So far we have assumed that the true rank or an accurate estimate is known. In this experiment, we conduct an empirical analysis with different rank parameters for the linear Gaussian data model on rank-specified graphs with 100 nodes, degree 8, and rank 10. We also include the validation-based approach where 2,000 samples are chosen as the training dataset and the rest as the validation dataset.
We use the derived lower and upper bounds in Theorems 1 and 3 to obtain a range of possible rank parameters, assuming that the corresponding structural priors are available. Within this range, we then select 7 evenly distributed rank parameters to use with NOTEARS-low-rank to learn causal graphs. Finally, we evaluate each learned DAG using the validation dataset and choose the DAG with the best score as our estimate. As seen from Figure 5, NOTEARS-low-rank performs the best when the rank parameter is identical to the true rank, while the rank parameter chosen by validation has almost the same performance. Compared with NOTEARS on the same datasets, the low rank version performs well across a range of rank parameters. Although this validation approach increases the total running time by a factor that depends on the number of candidate rank parameters, we believe that it is acceptable given the gained accuracy and the fact that this strategy is frequently adopted for tuning hyperparameters in practice. 5.4 NON-LINEAR SEMS For non-linear data models, we pick rank-specified graphs with 50 nodes, rank 5, and average degree k ∈ {2, 4, 6, 8}. To our knowledge, the selected benchmark methods CAM, NOTEARS-MLP, and GraN-DAG are state-of-the-art methods on this data model. As a demonstration of the low rank assumption, we apply the nuclear norm approach to GraN-DAG and choose from {0.3, 0.5, 1.0} as penalty weights. For validation, we use the same splitting ratio as in Section 5.3 and consider more penalty weights from {0.1, 0.2, 0.3, 0.5, 1, 2, 5}. Similarly, the learned graph that achieves the best score on the validation dataset is chosen as the final estimate. Figure 6 (and Appendix D.6 with a more detailed result) shows that adding a nuclear norm penalty can improve the performance of GraN-DAG across a large range of weights when the graph is relatively dense. For degree 8, the low rank version with validation achieves average SHD 77.4, while the SHDs of CAM, NOTEARS-MLP, and original GraN-DAG are 131.9, 119.4, and 109.4, respectively. 5.5 REAL NETWORK We apply the proposed method to the arth150 gene network, which is a DAG containing 107 genes and 150 edges. Its maximum rank is 40. Since the real dataset has only 22 samples, we instead use simulated data from linear Gaussian SEMs. We pick r̂ from {36, 40, 44} and also use validation to select the rank parameter. We apply NOTEARS-L1 where the ℓ1 penalty weight is chosen from {0.05, 0.1, 0.2}, and similarly treat this method favorably by picking the lowest SHD for each dataset. The mean and median SHDs are shown in Figure 7. Using Student's t-test, we find that with significance level 0.1, the results obtained with r̂ = 44 and with the validation approach are significantly better than NOTEARS. This experiment demonstrates again the utility of the low rank assumption, even when the true rank of the graph is not very low. 6 CONCLUDING REMARKS This paper studies the potential of the low rank assumption in causal structure learning. Empirically, we show that the low rank adaptations perform noticeably better than existing algorithms when the low rank condition is satisfied, and also deliver competitive performance when the rank is not as low as is assumed. Theoretically, we provide an improved understanding of what kinds of graphs tend to be low rank and a possibility to obtain bounds on the underlying rank from several structural priors. We treat the present work as our first step to incorporate low-rankness into causal DAG learning.
A future direction is to approximate a high rank DAG with a low rank one (possibly adding an additional DAG that is sparse). While there is a rich literature on low rank approximations of matrices and on combining low-rankness with sparsity, it is non-trivial to conclude under what conditions such an approximation is guaranteed to be effective for learning causal DAGs. Another direction is to compare the low rank assumption to other structural or parametric priors affecting model selection through marginal likelihood (Eggeling et al., 2019; Silander et al., 2007). Finally, it is also interesting to investigate whether a low rank DAG model implies any useful behavior in the data. Appendix A EXAMPLES AND DISCUSSIONS We provide more examples and discussions in this section. Minimum rank of the graph in Figure 1 We first show that the minimum rank of the DAG structure in Figure 1 is 6. It is clear that the 6-th to 10-th rows of A are always linearly independent, so it suffices to show that the 11-th row is linearly independent of the 6-th to 10-th rows. To see this, notice that if the 11-th row were a linear combination of the 6-th to 10-th rows, then A(11, 1) would be non-zero, which is a contradiction. The pathfinder and arth150 networks Figure 8 visualizes the pathfinder and arth150 networks that are mentioned in Sections 4.3 and 5, respectively. Both networks can be found at http://www.bnlearn.com/bnrepository. As one can see, these two networks contain hubs: the center node in the pathfinder network has a large number of children, while the arth150 network contains many 'small' hubs, each of which has 5 to 10 children. We also notice that nearly all the hubs in the two networks have high out-degrees. Sparse DAGs and low rank DAGs A sparse DAG does not necessarily indicate a low rank DAG, and vice versa. For example, a directed linear graph with d vertices has only d − 1 edges, i.e., X1 → X2 → · · · → Xd, while the rank of its binary adjacency matrix is d − 1. According to Theorems 1 and 2, the maximum and minimum ranks of a directed linear graph are equal to its number of edges. Thus, directed linear graphs are sparse but have high ranks. On the other hand, for some non-sparse graphs, we can assign the edge weights so that the resulting graphs have low ranks. A simple example is a fully connected directed balanced bipartite graph, as shown in Figure 9. The definition of bipartite graphs can be found in Appendix B.1. A bipartite graph is called balanced if its two parts contain the same number of vertices. The rank of a fully connected balanced bipartite graph with d vertices is 1 if all the edge weights are the same (e.g., the binary adjacency matrix), but the number of edges is d^2/4. We also notice that there exist some connections between the maximum rank and the graph degree, or more precisely, the total number of edges in the graph, according to Theorem 2. Intuitively, if the graph is dense, then we need more vertices to cover all the edges, so the size of the minimum head-tail vertex cover should be large. Explicitly characterizing the relation between these two graph parameters is an interesting problem, which will be explored in the future.
A matching of a graph is a subset of its edges in which no two edges share a common endpoint. A vertex cover of a graph is a subset of the vertex set such that every edge in the graph has at least one endpoint in the subset. The size of a matching (vertex cover) is the number of edges (vertices) in the matching (vertex cover). A maximum matching of a graph is a matching of the largest possible size and a minimum vertex cover is a vertex cover of the smallest possible size. An important result about bipartite graphs is König's theorem (Dénes, 1931), which states that the size of a minimum vertex cover is equal to the size of a maximum matching in a bipartite graph. Based on the heights of vertices in V, we can define a weak ordering among the vertices: Xi ≻ Xj if and only if l(Xi) > l(Xj), and Xi ∼ Xj if and only if l(Xi) = l(Xj). Given this weak ordering, we can group the vertices by their heights, and the resulting graph shows a hierarchical structure; see Figure 1 in the main text for an example. This hierarchical representation has some simple and nice properties. Let Vs = {Xi; l(Xi) = s}, s = 0, 1, . . . , l(G), and let V−1 = ∅. We have: (1) for any given s ∈ {0, 1, . . . , l(G)} and two distinct vertices X1, X2 ∈ Vs, X1 and X2 are not adjacent, and (2) for any given s ∈ {1, 2, . . . , l(G)} and Xi ∈ Vs, there is at least one vertex in Vs−1 which is a child of Xi. If we denote the induced subgraph of G over Vs ∪ Vs−1 by Gs,s−1, then Gs,s−1 is a bipartite graph with Vs and Vs−1 as parts, and singletons in Gs,s−1 (i.e., vertices that are not endpoints of any edge) only appear in Vs−1. For ease of presentation, we occasionally use index i to represent variable Xi in the following sections. B.2 PROOF OF THEOREM 1 Proof. Let G = (V,E). Consider an equivalence relation, denoted by ∼, among vertices in V defined as follows: for any Xi, Xj ∈ V, Xi ∼ Xj if and only if l(Xi) = l(Xj) and Xi and Xj are connected. Here, connected means that there is a path between Xi and Xj. Below we use C(Xi) to denote the equivalence class containing Xi. Next, we define a weak ordering π on V/∼, i.e., the equivalence classes induced by ∼, by letting C(Xi) ⪰π C(Xj) if and only if l(Xi) ≥ l(Xj). Then, we extend π to a total ordering ρ on V/∼. The ordering ρ also induces a weak ordering (denoted by ρ̄) on V: Xi ⪰ρ̄ Xj if and only if C(Xi) ⪰ρ C(Xj). Finally, we extend ρ̄ to a total ordering γ on V. It can be verified that γ is a topological ordering of G, that is, if we relabel the vertices according to γ, then Xi ∈ pa(Xj,G) if and only if i > j and Xi and Xj are adjacent, and the adjacency matrix of G becomes lower triangular. Assume that the vertices of G are relabeled according to γ; we will consider the binary adjacency matrix A of the resulting graph throughout the rest of this proof. Note that relabelling is equivalent to applying a permutation to the adjacency matrix, which does not change the rank. Let V0 = {1, 2, . . . , k1 − 1} for some k1 ≥ 2. Then the k1-th row of A, denoted by A(k1, ·), is the first non-zero row vector of A. Letting S = {A(k1, ·)}, then S contains a subset of linearly independent vector(s) of the first k1 rows of A. Suppose that we have visited the first m rows of A and S = {A(k1, ·), A(k2, ·), . . . , A(kt, ·)} contains a subset of linearly independent vector(s) of the first m rows of A, where k1 ≤ m < d. If Xm+1 ≻ Xkt under the ordering ρ̄ (that is, if C(Xm+1) and C(Xkt) are distinct equivalence classes), then we add A(m+1, ·) to S; otherwise, we keep S unchanged. We claim that the vectors in S are still linearly independent after the above step.
Clearly, if we do not add any new vector, then S contains only linearly independent vectors. To show the other case, note that if l(Xm+1) > l(Xkt) ≥ · · · ≥ l(Xk1), then there is an index i ∈ Vl(Xm+1)−1 such that A(m+1, i) ≠ 0, by the definition of height. Since l(Xm+1) > l(Xkt), we have l(Xkt) ≤ l(Xm+1) − 1 and thus A(kj, i) = 0 for all j = 1, 2, . . . , t. Therefore, A(m+1, ·) cannot be linearly represented by {A(kj, ·); j = 1, 2, . . . , t} and the vectors in S are linearly independent. On the other hand, if l(Xm+1) = l(Xkt), then the definition of the equivalence relation ∼ implies that Xm+1 and Xkt are disconnected, which means that Xm+1 and Xkt do not share a common child in Vl(Xm+1)−1. Consequently, there is an index i ∈ Vl(Xm+1)−1 such that A(m+1, i) ≠ 0 but A(kt, i) = 0. Similarly, we can show that A(kj, i) = 0 for all j = 1, 2, . . . , t. Thus, the vectors in S are still linearly independent. After visiting all the rows of A, the number of vectors in S is equal to ∑_{s=1}^{l(G)} |C(Gs,s−1)| based on the definition of ∼. The second inequality can be shown by noting that each C(Gs,s−1) has at least one element. The proof is complete. B.3 PROOF OF THEOREM 2 Proof. Denote the directed graph by G = (V,E). Edmonds (1967, Theorem 1) showed that max{rank(W); W ∈ WA} is equal to the maximum number of nonzero entries of A, no two of which lie in a common row or column. Therefore, it suffices to show that the latter quantity is equal to the size of the minimum head-tail vertex cover. Let V′ = V′0 ∪ V′1, where V′0 = V × {0} = {(Xi, 0); Xi ∈ V} and V′1 = V × {1} = {(Xi, 1); Xi ∈ V}. Now define a bipartite graph B = (V′, E′) where E′ = {(Xi, 0) → (Xj, 1); (Xi, Xj) ∈ E}. Denote by M a set of nonzero entries of A such that no two entries lie in the same row or column. Notice that M can be viewed as an edge set and no two edges in M share a common endpoint. Thus, M is a matching of B. Conversely, it can be shown by similar arguments that any matching of B corresponds to a set of nonzero entries of A, no two of which lie in a common row or column. Therefore, max{rank(W); W ∈ WA} equals the size of the maximum matching of B, and further the size of the minimum vertex cover of B according to König's theorem. Note that any vertex cover of B can be equivalently transformed to a head-tail vertex cover of G, by letting H and T be the subsets of the vertex cover containing all variables in V′0 and in V′1, respectively. Thus, max{rank(W); W ∈ WA} is equal to the size of the minimum head-tail vertex cover. B.4 PROOF OF THEOREM 3 Proof. We start with the first inequality in Equation (4). Let h1, . . . , hp denote the heights where |Vs| < |ch(Vs)|, and t1, . . . , tq the heights where |Vs| > |ch(Vs)|. Let H = ∪_{i=1}^{p} Vhi and T = ∪_{i=1}^{q} Vti. It is straightforward to see that (H,T) is a head-tail vertex cover. Thus, Equation (4) holds according to Theorem 2. The second inequality can be shown similarly and its proof is omitted. For the third inequality, let m = argmax{|Vs| : 0 ≤ s ≤ l(G)}, and define H = ∪_{i>m} Vi and T = ∪_{i<m} Vi. Then (H,T) is also a head-tail vertex cover and the third inequality follows from Theorem 2, too. B.5 PROOF OF THEOREM 4 Proof. Notice that Theorem 2 holds for all directed graphs. This theorem then follows by treating the skeleton and the moral graph as directed graphs with loops, i.e., an undirected edge Xi − Xj is treated as two directed edges Xi → Xj and Xj → Xi.
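To make the height-based quantities used in these proofs concrete, here is a small sketch (ours, not the authors' implementation; it assumes NetworkX is available and the graph is given as a binary adjacency matrix) that computes node heights and the lower bounds of Theorem 1.

```python
import numpy as np
import networkx as nx

def theorem1_lower_bounds(A):
    """Return (sum_s |C(G_{s,s-1})|, l(G)) for a DAG with binary adjacency A.

    A[i, j] = 1 means X_i -> X_j. Illustrative sketch only.
    """
    G = nx.from_numpy_array(np.asarray(A), create_using=nx.DiGraph)
    # height l(X_i): length of the longest directed path starting from X_i
    height = {v: 0 for v in G}
    for v in reversed(list(nx.topological_sort(G))):
        for u in G.predecessors(v):
            height[u] = max(height[u], height[v] + 1)
    l_G = max(height.values())
    bound = 0
    for s in range(1, l_G + 1):
        layer = [v for v in G if height[v] in (s, s - 1)]
        sub = nx.Graph(G.subgraph(layer))   # induced bipartite subgraph G_{s,s-1}
        bound += sum(1 for comp in nx.connected_components(sub) if len(comp) > 1)
    return bound, l_G

# Example: a directed path X1 -> X2 -> X3 -> X4 has l(G) = 3 and bound 3,
# matching its rank of 3 (see the discussion of directed linear graphs in Appendix A).
A_chain = np.array([[0, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])
print(theorem1_lower_bounds(A_chain))   # (3, 3)
```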
C IMPLEMENTATION DETAILS In this section, we present an algorithm to generate a random DAG with a given rank, low rank versions of NOTEARS and GraN-DAG, and also a description of our experimental settings. C.1 GENERATING RANDOM DAGS In Section 4.3, we briefly discuss the idea of generating a random DAG with a given rank. We now describe the detailed procedure in Algorithm 1. In particular, we aim to generate a random DAG with d nodes, average degree k, and rank r. The first part of Algorithm 1 after initialization is to sample a number N, representing the total number of edges, from a binomial distribution B(d(d − 1)/2, p), where p = k/(d − 1).
Algorithm 1 Generating random DAGs
Require: Number of nodes d, average degree k, and rank r.
Ensure: A randomly sampled DAG with the number of nodes d, average degree k, and rank r.
1: Set M = empty graph, Mp = ∅, and R = {(i, j); i < j, i, j = 1, 2, ..., d}.
2: Set p = k/(d − 1).
3: Sample a number N ∼ B(d(d − 1)/2, p), where B(n, p) is a binomial distribution with parameters n and p.
4: if N < r then
5: return FAIL
6: end if
7: Sample r indices from 1, . . . , d − 1 and store them in Mp in descending order.
8: for each i in Mp do
9: Sample an index j from i + 1 to d.
10: Add edge (i, j) to M and remove (i, j) from R.
11: end for
12: while R ≠ ∅ and |M| < N do
13: Sample an edge (i, j) from R and remove it from R.
14: if adding (i, j) to M does not change the size of the minimum head-tail vertex cover of M then
15: Add (i, j) to M.
16: end if
17: end while
18: if |M| < N then
19: return FAIL
20: end if
21: return M
If N < r, Algorithm 1 returns FAIL, since a graph with N < r edges could never have rank r. Otherwise, Algorithm 1 samples an initial graph with r edges and rank r, by choosing r edges such that no two of them share the same head vertex or the same tail vertex, i.e., each row and each column of the corresponding adjacency matrix have at most one non-zero entry. Then, Algorithm 1 sequentially samples an edge from R, which contains all possible edges, and checks whether adding this edge to the graph changes the size of the minimum head-tail vertex cover. If not, the edge is added to the graph; otherwise, it is removed from R. This is because if a graph G is a super-graph of another graph H, then the size of the minimum head-tail cover of G is no less than that of H. We repeat the above sampling procedure until there is no edge left in R or the number of edges in the resulting graph reaches N. If the latter happens, the algorithm returns the generated graph; otherwise, it returns FAIL. The theoretical basis of Algorithm 1 is Theorem 2. Note that the algorithm may not return a valid graph if the desired number N of edges cannot be reached. This could happen if the input rank is too low while the input average degree is too high. With our experiment settings, we find it rare for Algorithm 1 to fail to return a desired graph. C.2 OPTIMIZATION For this part, we consider a dataset consisting of n i.i.d. observations from P(X), and consequently the expectations in Problems (1) and (2) are replaced by empirical means. Denote the design matrix by X ∈ R^{n×d}, where each row of X corresponds to an observation and each column represents a variable. Here we use NOTEARS (Zheng et al., 2018) and GraN-DAG (Lachapelle et al., 2020) from each class of methods as examples and will describe their low rank versions in the following.
Other gradient-based methods and their optimization procedures can be similarly modified to incorporate the low rank assumption.
Algorithm 2 Optimization procedure for NOTEARS-low-rank
Require: Design matrix X, starting point (U0, V0, α0), rate c ∈ (0, 1), tolerance ε > 0, and threshold w > 0.
Ensure: Locally optimal parameter W*.
1: for t = 1, 2, . . . do
2: (Solve primal) Ut+1, Vt+1 ← argmin_{U,V} Lρ(U, V, αt) with ρ such that g(Ut+1 V^T_{t+1}) < c · g(Ut V^T_t).
3: (Dual ascent) αt+1 ← αt + ρ g(Ut+1 V^T_{t+1}).
4: if g(Ut+1 V^T_{t+1}) < ε then
5: Set U* = Ut+1 and V* = Vt+1.
6: break
7: end if
8: end for
9: (Thresholding) Set W* = U*V*^T ◦ 1(|U*V*^T| > w).
10: return W*
C.2.1 NOTEARS WITH LOW RANK ASSUMPTION Following Section 3, the optimization problem in our work can be written as
min_{U,V} (1/(2n)) ‖X − X U V^T‖_F^2, subject to trace(e^{(UV^T) ◦ (UV^T)}) − d = 0, (5)
where U, V ∈ R^{d×r̂} and ◦ is the point-wise product. The constraint in Problem (5) holds if and only if UV^T is a weighted adjacency matrix of a DAG. This problem can then be solved by standard numeric optimization methods such as the augmented Lagrangian method (Bertsekas, 1999). In particular, the augmented Lagrangian is given by
Lρ(U, V, α) = (1/(2n)) ‖X − X U V^T‖_F^2 + α g(UV^T) + (ρ/2) |g(UV^T)|^2, where g(UV^T) := trace(e^{(UV^T) ◦ (UV^T)}) − d,
α is the Lagrange multiplier, and ρ > 0 is the penalty parameter. The optimization procedure is summarized in Algorithm 2, similar to Zheng et al. (2018, Algorithm 1). Notice that here we do not include the ℓ1 penalty term (except for the first and last experiments in Sections 5.1 and 5.5, respectively), for the following reasons: (1) the thresholding procedure can also control false discoveries; (2) we consider relatively sufficient data for the experiments and NOTEARS with thresholding has been shown in Zheng et al. (2018) to perform consistently well even when the graph is sparse; (3) we are more concerned with relatively large and dense graphs, so a sparsity assumption may be harmful, as shown also by Zheng et al. (2018); (4) the ℓ1 penalty term requires a tuning parameter, which itself is not easy to choose. Zheng et al. (2018) used L-BFGS to solve the unconstrained subproblem in Step 2. We alternatively use the Newton conjugate gradient method implemented in C. Empirically, these two optimizers behave similarly in terms of estimation performance, while the latter runs much faster thanks to its C implementation. The DAG constraint may not be satisfied exactly by iterative numeric methods, so it is common practice to pick a small tolerance, followed by a thresholding procedure on the estimated entries to obtain exact DAGs. In our implementation, we choose U0 and V0 to be the first r̂ columns of the d × d identity matrix. Other parameter choices are: α0 = 0, c = 0.25, ε = 10^{−6}, and w = 0.3, similar to those used in related methods on the same datasets (e.g., Zheng et al. (2018); Yu et al. (2019); Zhu et al. (2020)). The chosen threshold w = 0.3 works well in our experiments and in the experiments of related works that use the same data model. In case the thresholded matrix is not a DAG, one may further increase the threshold until the resulting matrix corresponds to a DAG. After obtaining W*, we add an additional pruning step: we use linear regression to refit the dataset based on the structure indicated by W* and then apply another thresholding (with w = 0.3) to the refitted weighted adjacency matrix.
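For illustration, the following is a minimal sketch of the factorized objective and augmented Lagrangian loop behind Algorithm 2. It is ours, not the released implementation: it relies on SciPy's finite-difference gradients rather than the analytic Newton conjugate gradient solver described above, it handles the ρ update in a simplified way, and it omits the final refit-and-prune step, so it is only practical for small d.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def notears_low_rank_sketch(X, r_hat, w_thresh=0.3, c=0.25, eps=1e-6,
                            rho_max=1e16, max_outer=20):
    """Illustrative NOTEARS-low-rank: W factorized as U V^T (small d only)."""
    n, d = X.shape

    def unpack(z):
        U = z[:d * r_hat].reshape(d, r_hat)
        V = z[d * r_hat:].reshape(d, r_hat)
        return U @ V.T

    def acyc(W):
        # g(W) = trace(exp(W o W)) - d, zero iff W corresponds to a DAG
        return np.trace(expm(W * W)) - d

    def lagrangian(z, alpha, rho):
        W = unpack(z)
        loss = 0.5 / n * np.linalg.norm(X - X @ W, 'fro') ** 2
        g = acyc(W)
        return loss + alpha * g + 0.5 * rho * g ** 2

    # Start U and V from the first r_hat columns of the identity, as in Appendix C.2.
    z = np.concatenate([np.eye(d)[:, :r_hat].ravel()] * 2)
    alpha, rho = 0.0, 1.0
    g_val = acyc(unpack(z))
    for _ in range(max_outer):
        # Solve the primal subproblem, increasing rho until g shrinks by factor c.
        while rho < rho_max:
            res = minimize(lagrangian, z, args=(alpha, rho), method='L-BFGS-B')
            g_new = acyc(unpack(res.x))
            if g_new < c * g_val:
                break
            rho *= 10
        z, g_val = res.x, g_new
        alpha += rho * g_val          # dual ascent step
        if g_val < eps:
            break
    W = unpack(z)
    return W * (np.abs(W) > w_thresh)  # thresholding step
```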
Both the Newton conjugate gradient optimizer and the pruning technique are also applied to NOTEARS, which not only accelerates the optimization but also improves its performance by obtaining a much lower SHD, particularly for large and dense graphs. See Appendix D.3 for an empirical comparison. C.2.2 GRAN-DAG WITH LOW RANK ASSUMPTION We next consider a low rank version of GraN-DAG. The optimization problem can be written as
min_θ −(1/n) ∑_{l=1}^{n} ∑_{i=1}^{d} log p(X_i^{(l)} | pa(Xi, W(θ))^{(l)}; θ) + λ‖W(θ)‖∗, subject to trace(e^{W(θ)}) − d = 0, (6)
where X_i^{(l)} is the l-th sample of variable Xi and pa(Xi, W(θ))^{(l)} means the l-th sample of Xi's parents indicated by the adjacency matrix W(θ). Here, θ denotes the parameters of the neural networks, and W(θ), with non-negative entries, is obtained from the neural network path products. Problem (6) can be solved similarly using the augmented Lagrangian method. The procedure is similar to Algorithm 2 and is the same as that used by GraN-DAG, with slight modifications: (1) the subproblem in Step 2 is approximately solved using first-order methods; (2) the thresholding at Step 9 is replaced by a variable selection method proposed by Bühlmann et al. (2014). The same variable selection or pruning method is adopted by two other benchmark methods, CAM and NOTEARS-MLP, in our experiment. Please refer to Lachapelle et al. (2020) and Bühlmann et al. (2014) for further details. C.3 EXPERIMENT SETUP In our experiments, we consider three data models: linear Gaussian SEMs, linear non-Gaussian SEMs (linear exponential SEMs), and non-linear SEMs (Gaussian processes). Given a randomly generated DAG G, the associated SEM is generated as follows: Linear Gaussian A linear Gaussian SEM is given by
Xi = ∑_{Xj ∈ pa(Xi,G)} W(j, i) Xj + εi, i = 1, 2, . . . , d, (7)
where pa(Xi,G) denotes Xi's parents in G and the εi's are jointly independent standard Gaussian noises. In our experiments, the weights W(i, j) are uniformly sampled from [−2,−0.5] ∪ [0.5, 2]. Linear Exponential A linear exponential SEM is also generated according to Equation (7), where the εi's are replaced by jointly independent Exp(1) random variables. The weights W(i, j) are sampled from [−2,−0.5] ∪ [0.5, 2] uniformly, too. Gaussian Processes We consider the following additive noise model:
Xi = fi(pa(Xi,G)) + εi, i = 1, 2, . . . , d, (8)
where the εi's are jointly independent standard Gaussian noises and the fi's are functions sampled from Gaussian processes with RBF kernel of bandwidth one. We sample 3,000 observations according to the SEM. The reported results of each setting are summarized over 10 repetitions with different seeds. The experiments are run on a Linux workstation with a 16-core Intel Xeon 3.20GHz CPU and 128GB RAM. C.4 BENCHMARK METHODS Existing causal structure learning methods used in our experiments all have available implementations, as listed below:
• GES and PC: an implementation of both methods is available through the py-causal package at https://github.com/bd2kccd/py-causal. We note that the implementation of the py-causal package is based on the CMU TETRAD project, in which the version of GES is indeed the fast GES algorithm proposed by Ramsey et al. (2017).
• MMHC (Tsamardinos et al., 2006): an implementation is available in the bnlearn package at https://CRAN.R-project.org/package=bnlearn.
• CAM (Peters et al., 2014): its code is available through the CRAN R package repository at https://cran.r-project.org/web/packages/CAM.
• NOTEARS (Zheng et al., 2018) and NOTEARS-MLP (Zheng et al., 2020): code is available at the first author's github repository https://github.com/xunzheng/notears.
• GraN-DAG (Lachapelle et al., 2020): an implementation is available at the first author's github repository https://github.com/kurowasan/GraN-DAG. Note that for graphs of 50 nodes or more, GraN-DAG performs a preliminary neighborhood selection step to avoid overfitting.
• DAG-GNN (Yu et al., 2019): the code is available at the first author's github repository https://github.com/fishmoon1234/DAG-GNN.
• ICA-LiNGAM (Shimizu et al., 2006): an implementation is available at https://sites.google.com/site/sshimizu06/lingam.
In the experiments, we mostly use default hyperparameters unless otherwise stated. D ADDITIONAL EXPERIMENTAL RESULTS D.1 LINEAR SEMS WITH HIGHER RANKS This experiment considers graphs of higher ranks. We use rank-specified random graphs with d = 100 nodes and rank r ∈ {30, 35, 40, 45, 50} on linear Gaussian SEMs. The results are shown in Figures 10a and 10b with degrees 2 and 8, respectively. We observe that when the rank of the underlying graph becomes higher, the advantage of NOTEARS-low-rank over NOTEARS decreases. Nonetheless, NOTEARS-low-rank with rank r = 50 is still comparable to NOTEARS, and has a lower average SHD after removing outlier SHDs using the interquartile range rule. D.2 NOTEARS-LOW-RANK WITH DIFFERENT SAMPLE SIZES We next empirically study the consistency of NOTEARS-low-rank. Again, we use rank-specified random graphs (sampled according to Algorithm 1) with d = 100 nodes, degree k = 8, rank r = 10, and linear Gaussian SEMs. We also assume that the true rank is known. We fix the rank parameter r̂ = 10 and use different sample sizes ranging from 200 to 5,000. From Figure 11, NOTEARS-low-rank performs reasonably well when the sample size is small and tends to have a better performance with a larger number of samples. D.3 FURTHER PRUNING We compare the empirical results before and after applying the additional pruning technique described in Appendix C.2. The graphs are rank-specified with d ∈ {100, 300} nodes, rank r = ⌈0.1d⌉, and degree k ∈ {2, 4, 6, 8}. We again use the linear Gaussian data model with equal noise variances to generate the datasets. The average SHDs are reported in Figure 12. We see that applying an additional pruning step indeed improves the final performance of both NOTEARS and NOTEARS-low-rank, especially on relatively large and dense graphs. D.4 AN EMPIRICAL COMPARISON BETWEEN ICA-LINGAM AND DIRECTLINGAM To our best knowledge, there are two Python implementations of ICA-LiNGAM (Shimizu et al., 2006) released by the authors, available at https://sites.google.com/site/sshimizu06/lingam and https://github.com/cdt15/lingam, respectively, where the latter is a Python package containing several LiNGAM-related methods. In the following, we use ICA-LiNGAM-pre and ICA-LiNGAM-cdt to denote these two implementations, respectively. For DirectLiNGAM (Shimizu et al., 2011), we only find a Python implementation available in the previously mentioned Python package containing ICA-LiNGAM-cdt. Here we run DirectLiNGAM, ICA-LiNGAM-cdt, and ICA-LiNGAM-pre on linear exponential data models with 100-node and rank-10 graphs. The mean SHDs are reported below in Table 1.
Based on this experimental result as well as our past experience, DirectLiNGAM usually has a (slightly) better performance than ICA-LiNGAM-cdt, while ICA-LiNGAM-pre has a noticeably (if not much) better performance for relatively dense and large graphs. We are more concerned with relatively large and dense graphs and hence report the results achieved by ICA-LiNGAM-pre in the main paper. D.5 DETAILED EMPIRICAL RESULTS FOR EXPERIMENT 1 WITH LINEAR GAUSSIAN SEMS Table 2 reports detailed results including true positive rates (TPRs), false discovery rates (FDRs), structural Hamming distances (SHDs), and running time on rank-specified graphs with the linear Gaussian data model. Here the true rank is assumed to be known and is used as the rank parameter in NOTEARS-low-rank. We also test (fast) GES, MMHC, and PC. However, PC is too slow since some nodes may have a high in-degree (i.e., hubs) in large, dense, and low rank graphs. For the same reason, the skeleton may not be correctly estimated by MMHC, which has a similar performance to that of GES. Therefore, we only include the results of GES for comparison. We treat GES favorably by regarding undirected edges as true positives if the true graph has a directed edge in place of the undirected ones. D.6 DETAILED RESULTS FOR EXPERIMENT 4 WITH NON-LINEAR SEMS Table 3 reports the detailed SHDs for each method in Section 5.4. We also mark in bold the best results from methods with or without low rank modifications.
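Since the GraN-DAG adaptation evaluated in this experiment relies on the nuclear norm penalty of Problem (6), a minimal PyTorch-style sketch of how such a penalty can be added to a differentiable training objective is given below. This is ours, not the authors' code; the model methods (negative_log_likelihood, adjacency) are hypothetical hooks standing in for whatever the learner exposes.

```python
import torch

def penalized_loss(model, batch, lam):
    """Score plus nuclear norm penalty, in the spirit of Problem (6).

    `model.negative_log_likelihood` and `model.adjacency` are illustrative
    placeholders; autograd handles the nuclear norm subgradient via the SVD.
    """
    nll = model.negative_log_likelihood(batch)          # hypothetical score term
    W = model.adjacency()                                # implied d x d matrix W(theta)
    return nll + lam * torch.linalg.matrix_norm(W, ord='nuc')
```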
1. What is the primary contribution of the paper in Bayesian network structure learning?
2. What are the strengths of the proposed method, particularly in its similarity to NOTEARS?
3. What are the weaknesses of the paper regarding the requirement for prior knowledge or guessing of the rank?
4. How does the reviewer assess the value of the algorithm despite its limitations?
5. Are there any suggestions for future improvements by combining low-rankness with sparsity?
Review
Review
This paper attempts to exploit the low-rankness of the adjacency matrix of the DAG in Bayesian network structure learning. The overall framework is similar to NOTEARS, except that the adjacency matrix W is decomposed into low rank components W = UV'. To justify the approach, the paper also includes lower and upper bounds on the rank of DAGs, albeit mostly theoretical and not applicable to real experiments. The paper is very solid in presenting mathematical facts and detailed algorithms. However, my main concern is that the algorithm requires knowledge of (or a guess at) the rank. In fact, the experiments in Section 5 already use the ground truth rank information in NOTEARS-low-rank. Algorithm 1 is a great resource to be shared in the community; however, in principle it shouldn't be needed to perform the experiments in Section 5. If one can gain an accuracy benefit even without knowing the true rank, paying the extra runtime cost is acceptable (Table 1). There are also many works on combining low-rankness with sparsity, which I suggest the authors consider as future steps. Update: The authors have explained the issue raised in the review. It's not ideal that the algorithm requires knowledge of the rank beforehand, but it's okay if this point is clearly communicated in the paper. I would keep my current score.
ICLR
Title On Low Rank Directed Acyclic Graphs and Causal Structure Learning Abstract Despite several important advances in recent years, learning causal structures represented by directed acyclic graphs (DAGs) remains a challenging task in high dimensional settings when the graphs to be learned are not sparse. In this paper, we propose to exploit a low rank assumption regarding the (weighted) adjacency matrix of a DAG causal model to mitigate this problem. We demonstrate how to adapt existing methods for causal structure learning to take advantage of this assumption and establish several useful results relating interpretable graphical conditions to the low rank assumption. In particular, we show that the maximum rank is highly related to hubs, suggesting that scale-free networks which are frequently encountered in real applications tend to be low rank. We also provide empirical evidence for the utility of our low rank adaptations, especially on relatively large and dense graphs. Not only do they outperform existing algorithms when the low rank condition is satisfied, the performance is also competitive even though the rank of the underlying DAG may not be as low as is assumed. 1 INTRODUCTION An important goal in many sciences is to discover the underlying causal structures in various domains, both for the purpose of explaining and understanding phenomena, and for the purpose of predicting effects of interventions (Pearl, 2009). Due to the relative abundance of passively observed data as opposed to experimental data, how to learn causal structures from purely observational data has been vigorously investigated (Peters et al., 2017; Spirtes et al., 2000). In this context, causal structures are usually represented by directed acyclic graphs (DAGs) over a set of random variables. For this task, existing methods can be roughly categorized into two classes: constraint- and scorebased. The former use statistical tests to extract from data a number of constraints in the form of conditional (in)dependence and seek to identify the class of causal structures compatible with those constraints (Meek, 1995; Spirtes et al., 2000; Zhang, 2008). The latter employ a score function to evaluate candidate causal structures relative to data and seek to locate the causal structure (or a class of causal structures) with the optimal score. Due to the combinatorial nature of the acyclicity constraint (Chickering, 1996; He et al., 2015), most score-based methods rely on local heuristics to perform the search. A particular example is the greedy equivalence search (GES) algorithm (Chickering, 2002) that can find an optimal solution with infinite data and proper model assumptions. Recently, Zheng et al. (2018) introduced a smooth acyclicity constraint w.r.t. graph adjacency matrix, and the task on linear data models was then formulated as a continuous optimization problem with least-squares loss. This change of perspective allows using deep learning techniques to model causal mechanisms and has already given rise to several new algorithms for causal structure learning with non-linear data, e.g., Yu et al. (2019); Ng et al. (2019b;a); Ke et al. (2019); Lachapelle et al. (2020); Zheng et al. (2020), among others. While these new algorithms represent the current state of the art in many settings, their performance generally degrades when the target DAG becomes large and relatively dense, as seen from the empirical results reported in the referred works and also in this paper. 
This issue is of course a challenge to other approaches. Ramsey et al. (2017) proposed fast GES for impressively large problems, but it works reasonably well only when the large structure is very sparse. The max-min hill-climbing (MMHC) (Tsamardinos et al., 2006) relies on local learning methods that often do not perform well when the target node has a large neighborhood. How to improve the performance on relatively large and dense DAGs is therefore an important question. In this work, we study the potential of exploiting a kind of low rank assumption on the DAG structure to help address this problem. The rank of a graph that concerns us is the algebraic rank of its associated weighted adjacency matrix. Similar to the role of a sparsity assumption on graph structures, we treat the low rank assumption as methodological and it is not restricted to a particular DAG learning method. However, unlike sparsity assumption, it is much less apparent when DAGs tend to be low rank and how low rank DAGs behave. Thus, besides demonstrating the utility of exploiting a low rank assumption in causal structure learning, another important goal is to improve our understanding of the low rank assumption by relating the rank of a graph to its graphical structure. Such a result also enables us to characterize the rank of a graph from several structural priors and helps to choose rank related hyperparameters for the learning algorithm. Our contributions are summarized as follows: • We show how to adapt existing causal structure learning methods to take advantage of the low rank assumption, and provide a strategy to select rank related hyperparameters utilizing the lower and upper bounds on the true rank, if they are available. • To improve our understanding of low rank DAGs, we establish some lower bounds on the rank of a DAG in terms of simple graphical conditions, which imply necessary conditions for DAGs to be low rank. • We also show that the maximum possible rank of weighted adjacency matrices associated with a directed graph is highly related to hubs in the graph, which suggests that scale-free networks tend to be low rank. From this result, we derive several graphical conditions to bound the rank of a DAG from above, providing simple sufficient conditions for low rank. • Empirically, we demonstrate that the low rank adaptations are indeed useful. Not only do they outperform the original algorithms when the low rank condition is satisfied, the performance is also very competitive even when the true rank is not as low as is assumed. Related Work The low rank assumption is frequently adopted in graph-based applications (Smith et al., 2012; Zhou et al., 2013; Yao & Kwok, 2016; Frot et al., 2019), matrix completion and factorization (Recht, 2011; Koltchinskii et al., 2011; Cao et al., 2015; Davenport & Romberg, 2016), network sciences (Hsieh et al., 2012; Huang et al., 2013; Zhang et al., 2017) and so on, but to our best knowledge, has not been used on the DAG structures in the context of learning causal DAGs. We notice two works Barik & Honorio (2019); Tichavskỳ & Vomlel (2018) that assume low rank conditional probability tables in learning Bayesian networks, which are different from ours. Also related are existing works that studied the rank of real weighted matrices described by a given simple directed/undirected graph. However, most works only considered the zero-nonzero pattern of off-diagonal entries (see, e.g., Fallat & Hogben (2007); Hogben (2010); Mitchell et al. 
(2010)), whereas we also take into account the diagonal entries. This difference is crucial: if one only considers the off-diagonal entries, then the maximum rank over all possible weighted matrices is trivial and is always equal to the number of vertices. Consequently, many works focus on the minimum rank of a given graph, but characterizing the minimum rank exactly remains open, except for some special graph structures like trees (Hogben, 2010). Apart from these works, Edmonds (1967) studied algebraically the maximum rank for matrices with a common zero-nonzero pattern. In Section 4, we use this result to relate the maximum possible rank to a more interpretable graphical condition, which further implies several structural conditions of DAGs that may be easier to obtain in practice. 2 PRELIMINARIES 2.1 GRAPH TERMINOLOGY A graph G is defined as a pair (V,E), where V = {X1, X2, · · · , Xd} is the vertex set and E ⊂ V^2 denotes the edge set. We are particularly interested in directed (acyclic) graphs in the context of causal structure learning. For any S ⊂ V, we use pa(S,G), ch(S,G), and adj(S,G) to denote the union of all parents, children, and adjacent vertices of the nodes of S in G, respectively. A graph is called weighted if every edge in the graph is associated with a non-zero value. We will work with weighted graphs and treat unweighted graphs as a special case where the edge weights are set to 1. Weighted graphs can be treated algebraically via weighted adjacency matrices. Specifically, the weighted adjacency matrix of a weighted graph G is a matrix W ∈ R^{d×d}, where W(i, j) is the weight of edge Xi → Xj and W(i, j) ≠ 0 if and only if Xi → Xj exists in G. The binary adjacency matrix A ∈ {0, 1}^{d×d} is such that A(i, j) = 1 if Xi → Xj in G and A(i, j) = 0 otherwise. The rank of a weighted graph is defined as the rank of the associated weighted adjacency matrix. 2.2 CAUSAL STRUCTURE LEARNING AND RECENT GRADIENT-BASED METHODS A commonly used model in causal structure learning is the structural equation model (SEM), which describes the data generating procedure. In a slight abuse of notation, we also use the Xi's to denote random variables associated with the nodes in a graph G. Assuming G is a DAG, the SEM is given by Xi = fi(pa(Xi,G), εi), i = 1, 2, . . . , d, where fi is a deterministic function and the εi's are jointly independent noises. The SEM induces a marginal distribution P(X) over X = [X1, X2, · · · , Xd]^T, and G and P(X) are said to form a causal Bayesian network (Pearl, 2009; Spirtes et al., 2000). The problem of causal structure learning is to infer the underlying causal DAG G based on the marginal distribution P(X), or more practically, an empirical version consisting of a number of i.i.d. observations from P(X). We next briefly review recently developed gradient-based methods that rely on a smooth characterization of acyclicity of directed graphs. These methods aim to find a DAG that optimizes a score function and can be categorized into two classes.
The first class of methods explicitly associates the target causal model with a weighted adjacency matrix W and then estimates W by solving optimization problems of the following form:
min_{W,φ} E_{X∼P(X)} S(X, h(X; W, φ)), subject to trace(e^{W◦W}) − d = 0, (1)
where h : R^d → R^d is a model function parameterized by W (and other possible parameters φ) that aims to reconstruct X, S(·, ·) denotes a score function between the true and reconstructed variables, the notation ◦ denotes the element-wise product, and e^M is the matrix exponential of a square matrix M. The constraint was proposed by Zheng et al. (2018); it is smooth and holds if and only if W indicates a DAG. Methods in this class include: NOTEARS (Zheng et al., 2018), which targets linear models, with h(X; W, φ) = W^T X and S(·, ·) being the Frobenius norm or equivalently the least-squares loss; and DAG-GNN (Yu et al., 2019) and the graph autoencoder approach (Ng et al., 2019b), where neural networks are used for the function h, with φ being the weights of the neural networks, and the score function can be chosen as the evidence lower bound (Kingma & Welling, 2013). A sparsity-inducing term may be further added when the causal graph is assumed to be sparse. These objectives are equivalent to, or are variants of, some well studied score functions like the penalized maximum likelihood (Chickering, 2002; Van de Geer et al., 2013; Loh & Bühlmann, 2014). The second class uses certain functions, with parameter θ, to construct a weighted adjacency matrix W(θ) (or a binary one A(θ)) to represent the causal structure. These methods can be summarized as
min_{θ,φ} E_{X∼P(X)} S(X, h(X; W(θ), φ)), subject to trace(e^{W(θ)◦W(θ)}) − d = 0. (2)
For example, GraN-DAG (Lachapelle et al., 2020) and NOTEARS-MLP (Zheng et al., 2020) respectively use neural network path products and partial derivatives between variables to construct W(θ). The binary matrix A(θ) can be obtained by sampling according to some distributions with learnable parameters, as used by Kalainathan et al. (2018); Ke et al. (2019); Ng et al. (2019a); Zhu et al. (2020). Before ending this section, we remark that while the gradient-based methods intend to learn a causal DAG, the learned DAG may not be identical to the underlying one for general SEMs due to Markov equivalence (Spirtes et al., 2000; Peters et al., 2017). For such cases, one may convert the obtained DAG to its corresponding Completed Partially Directed Acyclic Graph (CPDAG) as the estimate. Nevertheless, if the SEM is identifiable and a proper score function is used, then the exact solution to the optimization problem is consistent, i.e., the same as the true graph with probability 1; see, e.g., Shimizu et al. (2006); Peters & Bühlmann (2013); Peters et al. (2014); Zhang & Hyvärinen (2009). For further details and other technical issues like parameter optimization of the gradient-based methods, we refer the reader to the cited works and references therein. 3 EXPLOITING LOW RANK ASSUMPTION IN CAUSAL STRUCTURE LEARNING This section shows how to adapt existing gradient-based methods to take advantage of the low rank assumption, by providing a way for each class to utilize this assumption using techniques from the matrix completion literature. We remark that our adaptations with the low rank assumption are not restricted to a particular learning algorithm; other DAG learning methods may potentially combine one of the proposed modifications for learning low rank causal graphs, too.
Matrix Factorization Since the weighted adjacency matrix W is explicitly optimized in the first class of methods, we can then apply the matrix factorization technique. Specifically, with an estimate r̂ for the graph rank, we can factorize W as W = UV^T with U, V ∈ R^{d×r̂}. Problem (1) then becomes optimizing U and V to minimize the score function under the DAG constraint, and it has the same solution W (obtained from the product UV^T) as the original problem if r̂ is greater than or equal to the true rank. Furthermore, if r̂ ≪ d, we have a much reduced number of parameters to optimize. Nuclear Norm For the second class of methods, the adjacency matrix W(θ) is not an explicit parameter to be optimized. In such a case, we can adopt a commonly used technique to add a nuclear norm term λ‖W(θ)‖∗, with λ > 0 being a tuning parameter, to the objective to induce low-rankness. The optimization procedures in these recent structure learning methods can directly incorporate the two adaptations as they are all gradient-based, though some extra care needs to be taken. Appendix C provides a detailed description of the optimization procedure and our implementation. The second approach is also feasible for the first class of methods, but we find that it does not work as well as the matrix factorization approach, possibly due to the singular value decomposition required to compute the (sub-)gradient w.r.t. W at each optimization step. An acute reader may have noticed that we assumed a proper rank estimate r̂ or a proper penalty parameter λ. Yet knowing exactly the rank of the graph to be learned can be difficult in practice. Similar to the sparsity assumption, one may determine the hyperparameters r̂ and λ assisted by a validation dataset (or by cross-validation if the observed dataset is not sufficiently large). Alternatively, we can try different choices of the hyperparameters and then apply a traditional score-based method where the search space is restricted to the resulting DAGs. However, since we are more concerned with relatively large and dense problems, there may be too many possible ranks to choose from. As such, a lower bound r_l and an upper bound r_u on the graph rank would be beneficial: we need only consider ranks in [r_l, r_u] in the matrix factorization method, while the bounds are still useful by providing qualitative information for the nuclear norm approach: the lower the upper bound, the larger the tuning parameter λ should be chosen. Moreover, a lower bound can also justify the low rank assumption, i.e., if the lower bound is high, then the low rank assumption is likely to fail to hold. 4 GRAPHICAL BOUNDS ON RANKS Obtaining exact algebraic information of a DAG, such as its rank and eigenvalues, may be infeasible in practice, because it may require full knowledge of the graph to be learned. On the other hand, structural information, such as graph connectivity, distributions of in-degrees and out-degrees, and an estimate of the number of hubs, is sometimes more accessible. As such, this section is devoted to relating the rank of a graph to more easily interpretable graphical conditions, both for a better understanding of what kinds of DAGs tend to satisfy the low rank assumption and to obtain lower and upper bounds on the graph rank from certain structural priors. 4.1 PROBLEM SETTING Consider a DAG G = (V,E) with weighted adjacency matrix W and binary adjacency matrix A. We aim to seek upper and lower bounds on rank(W) using only the graphical structure.
Specifically, we focus on the weighted adjacency matrices with the same binary adjacency matrix A, i.e.,WA = {W ∈ Rd×d ; sign(|W |) = A}, where sign(·) and | · | are point-wise sign and absolute value functions, respectively. Notice that there exist trivial upper bound d− 1 and lower bound 0 for any DAG, but they are generally too loose for our purpose. In the following, we investigate the maximum rank max{rank(W );W ∈ WA} and minimum rank min{rank(W );W ∈ WA} to find tighter upper and lower bounds for any W ∈ WA. Before introducing two useful graph concepts, we comment that low rank DAGs are not necessarily sparse and vice versa; see a discussion in Appendix A. Definition 1 (Height). Given a DAG G = (V,E) and a vertex Xi ∈ V, the height of Xi, denoted by l(Xi), is defined as the length of the longest directed path starting from Xi. The height of G, denoted by l(G), is the length of the longest path in G. Definition 2 (Head-tail vertex cover). Let G = (V,E) be a directed graph and H,T be two subsets of V. (H,T) is called a head-tail vertex cover of G if every edge in G has its head vertex in H or its tail vertex in T. The size of a head-tail vertex cover (H,T) is defined as |H|+ |T|. As an example, Figure 1c is a head-tail vertex cover of G in Figure 1a, where H = {X2, X4, X8} (red nodes) and T = {X8, X9, X10} (blue nodes). The size of this vertex cover is 6. 4.2 LOWER BOUNDS We first study lower bounds on the rank of a weighted DAG. Define V−1 = ∅ and Vs = {Xi; l(Xi) = s} for s = 0, 1, . . . , l(G). Denote by Gs,s−1 the induced subgraph of G over Vs∪Vs−1. Let C(Gs,s−1) be the set of non-singleton connected components of Gs,s−1 and |C(Gs,s−1)| the cardinality. We have the following lower bounds. Theorem 1. Let G be a DAG with binary adjacency matrix A. Then min{rank(W ) ; W ∈ WA} ≥ ∑l(G) s=1 |C(Gs,s−1)| ≥ l(G). (3) All the proofs in this paper are provided in Appendix B. Theorem 1 shows that rank(W ) is greater than or equal to the sum of the number of non-singleton connected components in each Gs,s−1. As Gs,s−1 has at least one non-singleton connected component, we obtain the second inequality. In other words, the rank of a weighted DAG is at least as high as the length of the longest directed path. As an example, consider the graph shown in Figure 1. One can verify that min{rank(W );W ∈ WA} = 6, |C(G1,0)| = 2, |C(G2,1)| = 1, |C(G3,2)| = 1, and l(G) = 3. Thus, we have min{rank(W );W ∈ WA} = 6 > 2 + 1 + 1 = 4 > 3. We remark that the bounds in Theorem 1 may be loose in some cases. To characterize the minimum rank exactly is an on-going research problem (Hogben, 2010). 4.3 UPPER BOUNDS We turn to the more important issue for our purpose, regarding upper bounds on rank(W ). The next theorem shows that max{rank(W );W ∈ WA} can be characterized exactly in graphical terms. Theorem 2. Let G be a directed graph with binary adjacency matrix A. Then max{rank(W );W ∈ WA} is equal to the minimum size of the head-tail vertex cover of G, that is, max{rank(W ) ; W ∈ WA} = min{|H|+ |T| ; (H,T) is a head-tail vertex cover of G}. We comment that Theorem 2 holds for all directed graphs (not only DAGs), which may be of independent interest to other applications. A head-tail vertex cover of minimum size is called a minimum head-tail vertex cover, which in general is not unique. For a head-tail vertex cover (H,T), the vertices in H cover all the edges pointing towards these vertices while the vertices in T cover the edges pointing away. 
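The quantity in Theorem 2 is computable from the structure alone: by König's theorem (see the proof in Appendix B.3), the minimum head-tail vertex cover has the same size as a maximum bipartite matching over the nonzero entries of A. A small sketch, assuming A[i, j] = 1 encodes the edge Xi → Xj, is given below:

```python
def max_rank(A):
    """Maximum rank over all weightings with support A (Theorem 2), computed as
    a maximum bipartite matching between rows and columns of A (Kuhn's algorithm)."""
    d = len(A)
    match_col = [-1] * d                      # match_col[j]: row matched to column j

    def augment(i, seen):
        for j in range(d):
            if A[i][j] and not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    return sum(augment(i, [False] * d) for i in range(d))
```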
A head-tail cover of a relatively small size then indicates the presence of hubs, that is, vertices with relatively high in-degrees or out-degrees. Therefore, Theorem 2 suggests that the maximum rank of a weighted DAG is highly related to the presence of hubs: a DAG with many hubs tends to have low rank. Intuitively, a hub of high in-degree (out-degree) is a common effect (cause) of a number of direct causes (effect variables), comprising many V-structures (inverted V-structures). For example, in Figure 1a, X8 is a hub of V-structures and X9 is a hub of inverted V-structures. Such features are fairly common in real graph structures. Appendix A presents a real network, called pathfinder, which describes the causal relations among 109 variables (Heckerman et al., 1992) with the center node being the parent of a large number of other nodes. The famous scale-free (SF) graphs also tend to have hubs. A scale-free graph is one whose distribution of degree k follows a power law: P (k) ∼ k−γ , where γ is the power parameter typically within [2, 3] and P (k) denotes the fraction of nodes with degree k (Nikolova & Aluru, 2012). It is observed that many real-world networks are scale-free, and some of them, such as gene regulatory networks, protein networks, and financial system network, may be viewed as causal networks (Guelzim et al., 2002; Barabasi & Oltvai, 2004; Hartemink, 2005; Eguı́luz et al., 2005; Gao & en Ren, 2013; Ramsey et al., 2017). In particular, Barabasi & Oltvai (2004) claimed that most protein networks, some of which are directed and acyclic due to irreversible reactions, are the results of growth processes and preferential attachments, probably due to the gene duplication. Empirically, the ranks of scale-free graphs are relatively low, especially in comparison to Erdös-Rényi (ER) random graphs (Mihail & Papadimitriou, 2002). Figure 2 provides a simulated example where γ is chosen from {2, 3} and each reported value is over 100 random runs. As graph becomes denser, the graph rank also increases. However, for scale-free graphs with a relatively large γ, the increase of their ranks is much slower than that of Erdös-Rényi graphs; indeed, their ranks tend to stay fairly low even when the graph degree is large. Theorem 2 can also be used to generate a low rank graph, or more precisely, a random DAG with a given rank r and a properly specified graph degree. Here we briefly describe the idea and leave the detailed algorithm to Appendix C.1: first generate a graph with r edges and rank r; a random edge is sampled without replacement and would be added to the graph, if adding this edge does not increase the size of the minimum head-tail vertex cover; repeat the previous step until the pre-specified degree is reached or no edge could be added to the graph; finally, assign the edge weights randomly according to a continuous distribution and the weighted graph will have rank r with high probability. The next two theorems report some looser but simpler upper bounds on rank(W ). Theorem 3. Let G be a DAG with binary adjacency matrix A, and denote the set of vertices with at least one parent by Vch and those with at least one child by Vpa. Then we have max{rank(W ) ; W ∈ WA} ≤ ∑l(G) s=1 min (|Vs|, |ch(Vs)|) ≤ |Vpa|,∑l(G)−1 s=0 min (|Vs|, |pa(Vs)|) ≤ |Vch|, |V| −max{|Vs| ; 0 ≤ s ≤ l(G)}. 
(4) Since Vch and Vpa are the non-root and the non-leaf vertices, respectively, the first two inequalities of (4) indicate that the maximum rank is bounded from above by the number of non-root vertices and also by the number of non-leaf vertices. The last inequality of (4) is a generalization of the first two, which implies that the rank is likely to be low if most vertices have the same height. Theorem 4. Let G be a DAG with binary adjacency matrix A. Denote by skeleton(A) and moral(A) the binary adjacency matrices of the skeleton and moral graph of G, respectively. Then we have max{rank(W ) ; W ∈ WA} ≤max{rank(W ) ; sign(|W |) = skeleton(A)} ≤max{rank(W ) ; sign(|W |) = moral(A)}. The skeleton of a DAG is the undirected graph obtained by removing all the arrowheads, and the moral graph is the undirected graph where two vertices are adjacent if they are adjacent or if they share a common child in the DAG. This result is useful when the skeleton or the moral graph can be accurately estimated and the corresponding rank is low. In practice, we may use all available structural priors to obtain upper bounds on the underlying rank and choose the lowest one as our estimate. 5 EXPERIMENTS This section reports empirical results of the low rank adaptations of existing methods, compared with their original versions. We choose NOTEARS (Zheng et al., 2018) for linear SEMs by adopting the matrix factorization approach, denoted as NOTEARS-low-rank, and use the nuclear norm approach in combination with GraN-DAG (Lachapelle et al., 2020) for a non-linear data model. Again we remark that the two methods are only demonstrations of the utility of low rank assumption, which can be potentially combined with other methods as well. For more information, we also include several benchmark methods: fast GES (Ramsey et al., 2017), PC (Spirtes et al., 2000), MMHC (Tsamardinos et al., 2006), ICA-LiNGAM (Shimizu et al., 2006) specifically designed with non-Gaussian noises, for linear SEMs;1 and DAG-GNN (Yu et al., 2019), NOTEARS-MLP (Zheng et al., 2020), and CAM (Bühlmann et al., 2014) for the non-linear case. Their implementations are described in Appendix C. We consider randomly sampled DAGs with specified ranks (the generating procedure was described in Section 4.3 and is given as Algorithm 1 in Appendix C.1), scale-free graphs, and a real network structure. For linear SEMs, the weights are uniformly sampled from [−2,−0.5] ∪ [0.5, 2] and the noises are either standard Gaussian or standard exponential. For non-linear SEMs, we use additive Gaussian noise model with functions sampled from Gaussian processes with RBF kernel of bandwidth one. These data models are known to be identifiable (Shimizu et al., 2006; Peters & Bühlmann, 2013; Peters et al., 2014). From each SEM, we then generate n = 3, 000 observations. We repeat ten times over different seeds for each experiment setting. Detailed information about the setup can be found in Appendix C.3. Below we mainly report structural Hamming distance (SHD) which takes into account both false positives and false negatives, and a smaller SHD indicates a better estimate. 5.1 LINEAR SEMS WITH RANK-SPECIFIED GRAPHS We first consider linear SEMs on rank-specified graphs, with number of nodes d ∈ {100, 300}, rank r = d0.1de, and average degree k ∈ {2, 4, 6, 8}. The true rank is assumed to be known and is used as the rank parameter r̂ in NOTEARS-low-rank. 
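For reference, the SHD reported throughout this section can be computed as in the sketch below; this follows one common convention in which a reversed edge counts as a single error, and existing implementations may differ slightly:

```python
import numpy as np

def shd(A_true, A_est):
    """Structural Hamming distance between two binary adjacency matrices of DAGs."""
    diff = np.abs(A_true - A_est)
    reversed_pairs = np.logical_and(diff == 1, diff.T == 1)  # edge present but flipped
    return int(diff.sum() - np.triu(reversed_pairs).sum())   # count each reversal once
```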
For a better visualization, Figure 3 only reports the average SHDs, while the true positive rate, false discovery rate, and running time are left to Appendix D. We also show the results after using the interquartile range rule to remove outlier SHDs. We observe that the low rank assumption can greatly improve the performance of NOTEARS, reducing the SHDs by at least a half. For this data model, the fast GES has much higher SHDs (see also Appendix D). PC is too slow (for example, it did not finish in 16 hours for a dataset with 100 nodes and degree 6), because some nodes may have a high in-degree. For the same reason, the skeleton may not be well estimated by MMHC; its performance is slightly worse than the fast GES and is not reported.

For more information regarding the role of sparsity, we include NOTEARS with an ℓ1 penalty, named NOTEARS-L1. Here the ℓ1 penalty weight is chosen from {0.01, 0.02, 0.05, 0.1, 0.2, 0.5}. Instead of relying on an additional validation dataset, we treat NOTEARS-L1 favorably by picking the lowest SHD obtained from different weights for each dataset. As seen from Figure 3a, NOTEARS-L1 is slightly better than NOTEARS when the average degree is 2, but is largely outperformed with relatively dense graphs. This observation was also reported in Zheng et al. (2018). We conjecture that it is because our experiments consider relatively sufficient data and dense graphs. Moreover, the thresholding procedure controls false discoveries and may have a similar effect to the ℓ1 penalty.

Appendix D.1 studies graphs with higher ranks, where it is observed that the advantage of NOTEARS-low-rank over NOTEARS decreases when the rank of the underlying DAG increases. Nevertheless, NOTEARS-low-rank is still competitive when the true rank is ⌈d/2⌉ and the factorized matrix has the same number of parameters as NOTEARS. We also conduct an empirical analysis with different sample sizes in Appendix D.2, which shows that NOTEARS-low-rank performs reasonably well when the sample size is small and tends to have a better performance with a larger number of samples. Due to the space limit, please find further details in the appendix.

(Footnote 1, referenced in the benchmark list above: Here we choose ICA-LiNGAM, rather than alternative LiNGAM methods like DirectLiNGAM (Shimizu et al., 2011), based on our empirical observation. Specifically, an implementation of ICA-LiNGAM has a noticeably better performance than DirectLiNGAM for relatively dense graphs. Please find a detailed discussion and an empirical comparison in Appendix D.4.)

5.2 LINEAR SEMS WITH SCALE-FREE GRAPHS

We next consider scale-free graphs with d = 100 nodes, average degree k = 6, and power γ = 2.5. For this experiment, the minimum, maximum, and mean ranks of the generated graphs are 14, 24, and 18.7, respectively. Here we choose the rank parameter r̂ from {20, 30, 40} for NOTEARS-low-rank. As seen from Figure 4, NOTEARS-low-rank with rank parameter r̂ = 20 performs the best, even though there are graphs with ranks greater than 20.

5.3 SENSITIVITY OF RANK PARAMETERS AND VALIDATION

So far we have assumed that the true rank or an accurate estimate is known. In this experiment, we conduct an empirical analysis with different rank parameters for the linear Gaussian data model on rank-specified graphs with 100 nodes, degree 8, and rank 10. We also include the validation-based approach where 2,000 samples are chosen as the training dataset and the rest as the validation dataset.
We use the derived lower and upper bounds in Theorems 1 and 3 to obtain a range of possible rank parameters, assuming that the corresponding structural priors are available. Within this range, we then select 7 evenly distributed rank parameters used with NOTEARS-low-rank to learn causal graphs. Finally, we evaluate each learned DAG using the validation dataset and choose the DAG with the best score as our estimate. As seen from Figure 5, NOTEARS-low-rank performs the best when the rank parameter is identical to the true rank, while the rank parameter chosen by validation has almost the same performance. Compared with NOTEARS on the same datasets, the low rank version performs well across a range of rank parameters. Although this validation approach increases the total running time that depends on the number of candidate rank parameters, we believe that it is acceptable given the gained accuracy and also the fact that this strategy has been frequently adopted for tuning hyperparameters in practice. 5.4 NON-LINEAR SEMS For non-linear data models, we pick rank-specified graphs with 50 nodes, rank 5, and average degree k ∈ {2, 4, 6, 8}. To our knowledge, the selected benchmark methods CAM, NOTEARS-MLP, and GraN-DAG are state-of-the-art methods on this data model. As a demonstration of the low rank assumption, we apply the nuclear norm approach to GraN-DAG and choose from {0.3, 0.5, 1.0} as penalty weights. For validation, we use the same splitting ratio as in Section 5.3 and consider more penalty weights from {0.1, 0.2, 0.3, 0.5, 1, 2, 5}. Similarly, the learned graph that achieves the best score on the validation dataset is chosen as final estimate. Figure 6 (and Appendix D.6 with a more detailed result) shows that adding a nuclear norm can improve the performance of GraN-DAG across a large range of weights when the graph is relatively dense. For degree 8, the low rank version with validation achieves average SHD 77.4, while the SHDs of CAM, NOTEARS-MLP, and original GraN-DAG are 131.9, 119.4, and 109.4, respectively. 5.5 REAL NETWORK We apply the proposed method to the arth150 gene network, which is a DAG containing 107 genes and 150 edges. Its maximum rank is 40. Since the real dataset has only 22 samples, we instead use simulated data from linear Gaussian SEMs. We pick r̂ from {36, 40, 44} and also use validation to select the rank parameter. We apply NOTEARS-L1 where the `1 penalty weight is chosen from {0.05, 0.1, 0.2}, and similarly treat this method favorably by picking the lowest SHD for each dataset. The mean and median SHDs are shown in Figure 7. Using Student’s t-test, we find that with significance level 0.1, the results obtained with r̂ = 44 and the validation approach are significantly better than NOTEARS. This experiment demonstrates again the utility of the low rank assumption, even when the true rank of the graph is not very low. 6 CONCLUDING REMARKS This paper studies the potential of low rank assumption in causal structure learning. Empirically, we show that the low rank adaptations perform noticeably better than existing algorithms when the low rank condition is satisfied, and also deliver competitive performances when the rank is not as low as is assumed. Theoretically, we provide an improved understanding of what kinds of graphs tend to be low rank and a possibility to obtain bounds on the underlying rank from several structural priors. We treat the present work as our first step to incorporate low-rankness into causal DAG learning. 
A future direction is to approximate a high rank DAG with a low rank one (possibly adding an additional DAG that is sparse). While there is a rich literature on low rank approximations of matrices and combining low-rankness with sparsity, it is non-trivial to us to conclude under what conditions such an approximation is guaranteed to be effective to learn causal DAGs. Another direction is to compare the low rank assumption to other structural or parametric priors affecting model selection through marginal likelihood (Eggeling et al., 2019; Silander et al., 2007). Finally, it is also interesting to investigate if a low rank DAG model implies any useful behavior in the data. Appendix A EXAMPLES AND DISCUSSIONS We provide more examples and discussions in this section. Minimum rank of the graph in Figure 1 We first show that the minimum rank of the DAG structure in Figure 1 is 6. It is clear that the 6-th to 10-th rows of A are always linearly independent, so it suffices to show that the 11-th row is linearly independent of the 6-th to 10-th rows. To see this, notice that if the 11-th row is a linear combination of the 6-th to 10-th rows, then A(11, 1) would be non-zero, which is a contradiction. The pathfinder and arth150 networks Figure 8 visualizes the pathfinder and arth150 networks that are mentioned in Sections 4.3 and 5, respectively. Both networks can be found at http: //www.bnlearn.com/bnrepository. As one can see, these two networks contain hubs: the center note in the pathfinder network has a large number of children, while the arth150 network contains many ‘small’ hubs, each of which has 5 ∼ 10 children. We also notice that nearly all the hubs in the two networks have high out-degrees. Sparse DAGs and low rank DAGs A sparse DAG does not necessarily indicate a low rank DAG, and vice versa. For example, a directed linear graph with d vertices has only d − 1 edges, i.e. X1 → X2 → · · · → Xd, while the rank of its binary adjacency matrix is d − 1. According to Theorems 1 and 2, the maximum and minimum ranks of a directed linear graph are equal to its number of edges. Thus, directed linear graphs are sparse but have high ranks. On the other hand, for some non-sparse graphs, we can assign the edge weights so that the resulting graphs have low ranks. A simple example would be a fully connected directed balanced bipartite graph, as shown in Figure 9. The definition of bipartite graphs can be found in Appendix B.1. A bipartite graph is called balanced if its two parts contain the same number of vertices. The rank of a fully connected balanced bipartite graph with d vertices is 1 if all the edge weights are the same (e.g., the binary adjacency matrix), but the number of edges is d2/4. We also notice that there exist some connections between the maximum rank and the graph degree, or more precisely, the total number of edges in the graph, according to Theorem 2. Intuitively, if the graph is dense, then we need more vertices to cover all the edges. Thus, the size of the minimum head-tail vertex cover should be large. Explicitly providing a formula to characterize these two graph parameters is an interesting problem, which will be explored in the future. B PROOFS In this section, we present proofs for the theorems given in the main content. B.1 PRELIMINARIES A bipartite graph is a graph whose vertex set V can be partitioned into two disjoint subsets V0 and V1, such that the vertices within each subset are not adjacent to one another. V0 and V1 are called the parts of the graph. 
A matching of a graph is a subset of its edges where no two of them share a common endpoint. A vertex cover of a graph is a subset of the vertex set where every edge in the graph has at least one endpoint in the subset. The size of a matching (vertex cover) is the number of edges (vertices) in the matching (vertex cover). A maximum matching of a graph is a matching of the largest possible size and a minimum vertex cover is a vertex cover of the smallest possible size. An important result about bipartite graphs is König’s theorem (Dénes, 1931), which states that the size of a minimum vertex cover is equal to the size of a maximum matching in a bipartite graph. Based on the heights of vertices in V, we can define a weak ordering among the vertices: Xi Xj if and only if l(Xi) > l(Xj), and Xi ∼ Xj if and only if l(Xi) = l(Xj). Given this weak ordering, we can group the vertices by their heights, and the resulting graph shows a hierarchical structure; see Figure 1 in the main text for an example. This hierarchical representation has some simple and nice properties. Let Vs = {Xi; l(Xi) = s}, s = 0, 1, . . . , l(G), and let V−1 = ∅. We have: (1) for any given s ∈ {0, 1, . . . , l(G)} and two distinct vertices X1, X2 ∈ Vs, X1 and X2 are not adjacent, and (2) for any given s ∈ {1, 2, . . . , l(G)} and Xi ∈ Vs, there is at least one vertex in Vs−1 which is a child of Xi. If we denote the induced subgraph of G over Vs ∪Vs−1 by Gs,s−1, then Gs,s−1 is a bipartite graph with Vs and Vs−1 as parts, and singletons in Gs,s−1 (i.e., vertices that are not endpoints of any edge) only appear in Vs−1. For ease of presentation, we occasionally use index i to represent variable Xi in the following sections. B.2 PROOF OF THEOREM 1 Proof. Let G = (V,E). Consider an equivalence relation, denoted by ∼, among vertices in V defined as follows: for any Xi, Xj ∈ V, Xi ∼ Xj if and only if l(Xi) = l(Xj) and Xi and Xj are connected. Here, connected means that there is a path between Xi and Xj . Below we use C(Xi) to denote the equivalence class containing Xi. Next, we define a weak ordering π on V/ ∼, i.e., the equivalence classes induced by ∼, by letting C(Xi) π C(Xj) if and only if l(Xi) ≥ l(Xj). Then, we extend π to a total ordering ρ on V/ ∼. The ordering ρ also induces a weak ordering (denoted by ρ̄) on V: Xi ρ̄ Xj if and only if C(Xi) ρ C(Xj). Finally, we extend ρ̄ to a total ordering γ on V. It can be verified that γ is a topological ordering of G, that is, if we relabel the vertices according to γ, then Xi ∈ pa(Xj ,G) if and only if i > j and Xi and Xj are adjacent, and the adjacency matrix of G becomes lower triangular. Assume that the vertices of G are relabeled according to γ and we will consider the binary adjacency matrix A of the resulting graph throughout the rest of this proof. Note that relabelling is equivalent to applying a permutation onto the adjacency matrix, which does not change the rank. Let V0 = {1, 2, . . . , k1 − 1} for some k1 ≥ 2. Then the k1-th row of A, denoted by A(k1, ·), is the first non-zero row vector of A. Letting S = {A(k1, ·)}, then S contains a subset of linearly independent vector(s) of the first k1 rows of A. Suppose that we have visited the first m rows of A and S = {A(k1, ·), A(k2, ·), . . . , A(kt, ·)} contains a subset of linearly independent vector(s) of the first m rows ofA, where k1 ≤ m < d. IfXm+1 Xkt , then we addA(m+1, ·) to S; otherwise, we keep S unchanged. We claim that the vectors in S are still linearly independent after the above step. 
Clearly, if we do not add any new vector, then S contains only linearly independent vectors. To show the other case, note that if l(Xm+1) > l(Xkt) ≥ · · · ≥ l(Xk1), then there is an index i ∈ Vl(Xm+1)−1 such that A(m + 1, i) 6= 0, by the definition of height. Since l(Xm+1) > l(Xkt), we have l(Xkt) ≤ l(Xm+1)− 1 and thus A(kj , i) = 0 for all j = 1, 2, . . . , t. Therefore, A(m+ 1, ·) cannot be linearly represented by {A(kj , ·); j = 1, 2, . . . , t} and the vectors in S are linearly independent. On the other hand, if l(Xm+1) = l(Xkt), then the definition of the equivalence relation ∼ implies that Xm+1 and Xkt are disconnected, which means that Xm+1 and Xkt do not share a common child in Vl(Xm+1)−1. Consequently, there is an index i ∈ Vl(Xm+1)−1 such that A(m + 1, i) 6= 0 but A(kt, i) = 0. Similarly, we can show that A(kj , i) = 0 for all j = 1, 2, . . . , t. Thus, the vectors in S are still linearly independent. After visiting all the rows in A, the number of vectors in S is equal to ∑l(G) s=1 |C(Gs,s−1)| based on the definition of ∼. The second inequality can be shown by noting that C(Gs,s−1) has at least one elements. The proof is complete. B.3 PROOF OF THEOREM 2 Proof. Denote the directed graph by G = (V,E). Edmonds (1967, Theorem 1) showed that max{rank(W );W ∈ WA} is equal to the maximum number of nonzero entries of A, no two of which lie in a common row or column. Therefore, it suffices to show that the latter quantity is equal to the size of the minimum head-tail vertex cover. Let V ′ = V′0 ∪V′1, where V′0 = V × {0} = {(Xi, 0);Xi ∈ V} and V′1 = V × {1} = {(Xi, 1);Xi ∈ V}. Now define a bipartite graph B = (V′ ,E′) where E′ = {(Xi, 0) → (Xj , 1); (Xi, Xj) ∈ E}. Denote byM a set of nonzero entries of A so that no two entries lie in the same row or column. Notice thatM can be viewed as an edge set and no two edges inM share a common endpoint. Thus,M is a matching of B. Conversely, it can be shown by similar arguments that any matching of B corresponds to a set of nonzero entries of A, no two of which lie in a common row or column. Therefore, max{rank(W ),W ∈ WA} equals the size of the maximum matching of B, and further the size of the minimum vertex cover of B according to König’s theorem. Note that any vertex cover of B can be equivalently transformed to a head-tail vertex cover of G, by letting H and T be the subsets of the vertex cover containing all variables in V′0 and of the vertex cover containing all variables in V ′ 1, respectively. Thus, max{rank(W ),W ∈ WA} is equal to the size of the minimum head-tail vertex cover. B.4 PROOF OF THEOREM 3 Proof. We start with the first inequality in Equation (4). Let h1, . . . , hp denote the heights where |Vs| < |ch(Vs)|, and t1, . . . , tq the height where |Vs| > |ch(Vs)|. Let H = ∪pi=1Vhi and T = ∪qi=1Vti . It is straightforward to see that (H,T) is a head-tail vertex cover. Thus, Equation (4) holds according to Theorem 2. The second inequality can be shown similarly and its proof is omitted. For the third inequality, let m = argmax{|Vs| : 0 ≤ s ≤ l(G)}, and define H = ∪i>mVi and T = ∪i<mVi. Then (H,T) is also a head-tail vertex cover and the third inequality follows from Theorem 2, too. B.5 PROOF OF THEOREM 4 Proof. Notice that Theorem 2 holds for all directed graphs. This theorem then follows by treating the skeleton and the moral graph as directed graphs with loops, i.e., an undirected edge Xi −Xj is treated as two directed edges Xi → Xj and Xj → Xi. 
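As a complement to these proofs, the graphical quantities used in Theorems 1 and 3 are easy to compute directly. The following is a small sketch (assuming A[i, j] = 1 encodes Xi → Xj) of the vertex heights from Definition 1, whose maximum l(G) is the weaker lower bound in Theorem 1:

```python
import numpy as np

def heights(A):
    """Length of the longest directed path starting from each vertex of a DAG."""
    d = A.shape[0]
    h = np.zeros(d, dtype=int)
    for _ in range(d):                       # simple fixed-point iteration; converges
        for i in range(d):                   # because heights are at most d - 1
            children = np.flatnonzero(A[i])
            if children.size:
                h[i] = 1 + h[children].max()
    return h

# heights(A).max() equals l(G), a lower bound on rank(W) for every W in W_A (Theorem 1)
```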
C IMPLEMENTATION DETAILS In this section, we present an algorithm to generate a random DAG with a given rank, a low rank version of NOTEARS and GraN-DAG, and also a description of our experimental settings. C.1 GENERATING RANDOM DAGS In Section 4.3, we briefly discuss the idea of generating a random DAG with a given rank. We now describe the detailed procedure in Algorithm 1. In particular, we aim to generate a random DAG with d nodes, average degree k, and rank r. The first part of Algorithm 1 after initialization is to sample a number N , representing the total number of edges, from a binomial distribution B(d(d − 1)/2, p) Algorithm 1 Generating random DAGs Require: Number of nodes d, average degree k, and rank r. Ensure: A randomly sampled DAG with the number of nodes d, average degree k, and rank r. 1: Set M = empty graph, Mp = ∅, and R = {(i, j); i < j, i, j = 1, 2, ..., d}. 2: Set p = k/(d− 1). 3: Sample a numberN ∼ B(d(d−1)/2, p), whereB(n, p) is a binomial distribution with parameters n and p. 4: if N < r then 5: return FAIL 6: end if 7: Sample r indices from 1, . . . , d− 1 and store them in Mp in descending order. 8: for each i in Mp do 9: Sample an index j from i+ 1 to d. 10: Add edge (i, j) to M and remove (i, j) from R. 11: end for 12: while R 6= ∅ and |M | < N do 13: Sample an edge (i, j) from R and remove it from R. 14: if adding (i, j) to M does not change the size of the minimum head-tail vertex cover of M then 15: Add (i, j) to M . 16: end if 17: end while 18: if |M | < N then 19: return FAIL 20: end if 21: return M where p = k/(d− 1). If N < r, Algorithm 1 would return FAIL since a graph with N < r edges could never have rank r. Otherwise, Algorithm 1 samples an initial graph with r edges and rank r, by choosing r edges such that no two of them share the same head points or the same tail points, i.e., each row and each column of the corresponding adjacency matrix have at most one non-zero entry. Then, Algorithm 1 sequentially samples an edge from R containing all possible edges and checks whether adding this edge to the graph changes the size of the minimum head-tail vertex cover. If not, the edge will be added to the graph; otherwise, it will be removed from R. This is because if a graph G is a super-graph of another graphH, then the size of the minimum head-tail cover of G is no less than that ofH. We repeat the above sampling procedure until there is no edge in R or the number of edges in the resulting graph reaches N . If the latter happens, the algorithm will return the generated graph; otherwise, it will return FAIL. The theoretic basis of Algorithm 1 is Theorem 2. Note that the algorithm may not return a valid graph if the desired number N of edges cannot be reached. This could happen if the input rank is too low while the input average degree is too high. With our experiment settings, we find it rare for Algorithm 1 to fail to return a desired graph. C.2 OPTIMIZATION For this part, we consider a dataset consisting of n i.i.d. observations from P (X) and consequently the expectations in Problems (1) and (2) are replaced by empirical means. Denote the design matrix by X ∈ Rn×d, where each row of X corresponds to an observation and each column represents a variable. Here we use NOTEARS (Zheng et al., 2018) and Gran-DAG (Lachapelle et al., 2020) from each class of methods as examples and will describe their low rank versions in the following. 
Other gradient-based methods and their optimization procedures can be similarly modified to incorporate the low rank assumption. Algorithm 2 Optimization procedure for NOTEARS-low-rank Require: Design matrix X, starting point (U0, V0, α0), rate c ∈ (0, 1), tolerance > 0, and threshold w > 0. Ensure: Locally optimal parameter W ∗. 1: for t = 1, 2, . . . do 2: (Solve primal) Ut+1, Vt+1 ← arg minU,V Lρ(U, V, αt) with ρ such that g(Ut+1V Tt+1) < cg(UtV T t ). 3: (Dual ascent) αt+1 ← αt + ρg(Ut+1V Tt+1). 4: if g(Ut+1V Tt+1) < then 5: Set U∗ = Ut+1 and V ∗ = Vt+1. 6: break 7: end if 8: end for 9: (Thresholding) Set W ∗ = U∗V ∗T ◦ 1(|U∗V ∗T | > w). 10: return W ∗ C.2.1 NOTEARS WITH LOW RANK ASSUMPTION Following Section 3, the optimization problem in our work can be written as min W 1 2n ∥∥X−XUV T∥∥2 F , subject to trace ( eUV T ◦UV T ) − d = 0, (5) where U, V ∈ Rd×r̂ and ◦ is the point-wise product. The constraint in Problem (5) holds if and only if UV T is a weighted adjacency matrix of a DAG. This problem can then be solved by standard numeric optimization methods such as the augmented Lagrangian method (Bertsekas, 1999). In particular, the augmented Lagrangian is given by Lρ(U, V, α) = 1 2n ∥∥X−XUV T∥∥2 F + αg(UV T ) + ρ 2 |g(UV T )|2, where g(UV T ) := trace ( eUV T ◦UV T ) − d, α is the Lagrange multiplier, and ρ > 0 is the penalty parameter. The optimization procedure is summarized in Algorithm 2, similar to Zheng et al. (2018, Algorithm 1). Notice that here we do not include the `1 penalty term (except for the first and last experiments in Sections 5.1 and 5.5, respectively), for the following reasons: (1) the thresholding procedure can also control false discoveries; (2) we consider relatively sufficient data for the experiments and NOTEARS with thresholding has been shown in Zheng et al. (2018) to perform consistently well even when the graph is sparse; (3) we are more concerned with relatively large and dense graphs, so a sparsity assumption may be harmful, as shown also by Zheng et al. (2018); (4) the `1 penalty term requires a tuning parameter, which itself is not easy to choose. Zheng et al. (2018) used L-BFGS to solve the unconstrained subproblem in Step 2. We alternatively use the Newton conjugate gradient method that is written in C. Empirically, these two optimizers behave similarly in terms of the estimate performance, while the latter can run much faster thanks to its C implementation. The DAG constraint may not be satisfied exactly using iterative numeric methods, so it is a common practice to pick a small tolerance, followed by a thresholding procedure on the estimated entries to obtain exact DAGs. In our implementation, we choose U0 and V0 to be the first r̂ columns of the d× d identity matrices. Other parameter choices are: α0 = 0, c = 0.25, = 10−6, and w = 0.3, similar to those used in related methods on the same datasets (e.g., Zheng et al. (2018); Yu et al. (2019); Zhu et al. (2020)). The chosen threshold w = 0.3 works well in our experiments and in the experiments of related works that use the same data model. In case the thresholded matrix is not a DAG, one may further increase the threshold until the resulting matrix corresponds to a DAG. After obtaining W ∗, we add an additional pruning step: we use linear regression to refit the dataset based on the structure indicated by W ∗ and then apply another thresholding (with w = 0.3) to the refitted weighted adjacency matrix. 
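For concreteness, the quantity minimized in Step 2 of Algorithm 2 can be sketched as below. This is only an illustrative evaluation of the objective; a real implementation would also provide gradients with respect to U and V to the optimizer:

```python
import numpy as np
from scipy.linalg import expm

def aug_lagrangian(U, V, X, alpha, rho):
    """Augmented Lagrangian of Problem (5) with W = U V^T (NOTEARS-low-rank)."""
    n, d = X.shape
    W = U @ V.T
    g = np.trace(expm(W * W)) - d                            # DAG constraint violation
    fit = np.linalg.norm(X - X @ W, 'fro') ** 2 / (2 * n)    # least-squares score
    return fit + alpha * g + 0.5 * rho * g ** 2
```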
Both the Newton conjugate gradient optimizer and the pruning technique are also applied to NOTEARS, which not only accelerate the optimization but also improve its performance by obtaining a much lower SHD, particularly for large and dense graphs. See Appendix D.3 for an empirical comparison. C.2.2 GRAN-DAG WITH LOW RANK ASSUMPTION We next consider a low rank version of GraN-DAG. The optimization problem can be written as min θ − 1 n n∑ l=1 d∑ i=1 log p ( X (l) i | pa(Xi,W (θ)) (l); θ ) + λ‖W (θ)‖∗ subject to trace ( eW (θ) ) − d = 0, (6) where X(l)i is the l-th sample of variable Xi and pa(Xi,W (θ)) (l) means the l-th sample of Xi’s parents indicated by the adjacency matrix W (θ). Here, θ denotes the parameters of neural networks and W (θ) with non-negative entries is obtained from the neural network path products. Problem (6) can be solved similarly using augmented Lagrangian. The procedure is similar to Algorithm 2 and is the same to that used by GraN-DAG, with slight modifications: (1) the subproblem in Step 2 is approximately solved using first-order methods; (2) the thresholding at Step 9 is replaced by a variable selection method proposed by Bühlmann et al. (2014). The same variable selection or pruning method is adopted by two other benchmark methods CAM and NOTEARS-MLP in our experiment. Please refer to Lachapelle et al. (2020) and Bühlmann et al. (2014) for further details. C.3 EXPERIMENT SETUP In our experiments, we consider three data models: linear Gaussian SEMs, linear non-Gaussian SEMs (linear exponential SEMs), and non-linear SEMs (Gaussian processes). Given a randomly generated DAG G, the associated SEM is generated as follows: Linear Gaussian A linear Gaussian SEM is given by Xi = ∑ Xj∈pa(Xi,G) W (j, i)Xj + i, i = 1, 2, . . . , d, (7) where pa(Xi,G) denotes Xi’s parents in G and i’s are jointly independent standard Gaussian noises. In our experiments, the weights W (i, j)’s are uniformly sampled from [−2,−0.5] ∪ [0.5, 2]. Linear Exponential A linear exponential SEM is also generated according to Equation (7), where i’s are replaced by jointly independent Exp(1) random variables. The weightsW (i, j)’s are sampled from [−2,−0.5] ∪ [0.5, 2] uniformly, too. Gaussian Processes We consider the following additive noise model: Xi = fi(pa(Xi,G)) + i, i = 1, 2, . . . , d, (8) where i’s are jointly independent standard Gaussian noises and fi’s are functions sampled from Gaussian processes with RBF kernel of bandwidth one. We sample 3, 000 observations according the SEM. The reported results of each setting are summarized over 10 repetitions with different seeds. The experiments are run on a Linux workstation with 16-core Intel Xeon 3.20GHz CPU and 128GB RAM. C.4 BENCHMARK METHODS Existing causal structure learning methods used in our experiments all have available implementations, as listed below: • GES and PC: an implementation of both methods is available through the py-causal package at https://github.com/bd2kccd/py-causal. We note that, the implementation of py-causal package is based on the CMU TETRAD project, in which the version of GES is indeed the fast GES algorithm proposed by Ramsey et al. (2017). • MMHC (Tsamardinos et al., 2006): an implementation is available in the bnlearn package at https://CRAN.R-project.org/package=bnlearn. • CAM (Peters et al., 2014): its codes are available through the CRAN R package repository at https://cran.r-project.org/web/packages/CAM. 
• NOTEARS (Zheng et al., 2018) and NOTEARS-MLP (Zheng et al., 2020): codes are available at the first author’s github repository https://github.com/xunzheng/ notears. • GraN-DAG (Lachapelle et al., 2020): an implementation is available at the first author’s github repository https://github.com/kurowasan/GraN-DAG. Note that for graphs of 50 nodes or more, GraN-DAG performs a preliminary neighborhood selection step to avoid overfitting. • DAG-GNN (Yu et al., 2019): the codes are available at the first author’s github repository https://github.com/fishmoon1234/DAG-GNN. • ICA-LiNGAM (Shimizu et al., 2006): an implementation is available at https://sites. google.com/site/sshimizu06/lingam. In the experiments, we mostly use default hyperparameters unless otherwise stated. D ADDITIONAL EXPERIMENTAL RESULTS D.1 LINEAR SEMS WITH HIGHER RANKS This experiment considers graphs of higher ranks. We use rank-specified random graphs with d = 100 nodes and rank r ∈ {30, 35, 40, 45, 50} on linear Gaussian SEMs. The results are shown in Figures 10a and 10b with degrees 2 and 8, respectively. We observe that when the rank of the underlying graph becomes higher, the advantage of NOTEARS-low-rank over NOTEARS decreases. Nonetheless, NOTEARS-low-rank with rank r = 50 is still comparable to NOTEARS, and has a lower average SHD after removing outlier SHDs using the interquartile range rule. D.2 NOTEARS-LOW-RANK WITH DIFFERENT SAMPLE SIZES We next empirically study the consistency of NOTEARSlow-rank. Again, we use rank-specified random graphs (sampled according to Algorithm 1) with d = 100 nodes, degree k = 8, rank r = 10, and linear Gaussian SEMs. We also assume that the true rank is known. We fix the rank parameter r̂ = 10 and use different sample sizes ranging from 200 to 5, 000. From Figure 11, NOTEARSlow-rank performs reasonably well when the sample size is small and tends to have a better performance with a larger number of samples. D.3 FURTHER PRUNING We compare the empirical results before and after applying the additional pruning technique described in Appendix C.2. The graphs are rank-specified with d ∈ {100, 300} nodes, rank r = d0.1de, and degree k ∈ {2, 4, 6, 8}. We again use linear Gaussian data model with equal noise variances to generate the datasets. The average SHDs are reported in Figure 12. We see that applying an additional pruning step indeed improves the final performance of both NOTEARS and NOTEARS-low-rank, especially on relatively large and dense graphs. D.4 AN EMPIRICAL COMPARISON BETWEEN ICA-LINGAM AND DIRECTLINGAM To our best knowledge, there are two Python implementations of ICA-LiNGAM (Shimizu et al., 2006) released by the authors, available at https://sites.google.com/site/sshimizu06/ lingam and https://github.com/cdt15/lingam, respectively, where the latter is a Python package containing several LiNGAM related methods. In the following, we use ICALiNGAM-pre and ICA-LiNGAM-cdt to denote these two implementations, respectively. For DirectLiNGAM (Shimizu et al., 2011), we only find a Python implementation available at the previously mentioned Python package containing ICA-LiNGAM-cdt. Here we run DirectLiNGAM, ICA-LiNGAM-cdt, and ICA-LiNGAM-pre on linear exponential data models with 100-node and rank-10 graphs. The mean SHDs are reported below in Table 1. 
Based on this experimental result as well as our past experience, DirectLiNGAM usually has a (slightly) better performance than ICA-LiNGAM-cdt, while ICA-LiNGAM-pre has a noticeably (if not much) better performance for relatively dense and large graphs. We are more concerned with relatively large and dense graphs and hence report the results achieved by ICA-LiNGAM-pre in the main paper. D.5 DETAILED EMPIRICAL RESULTS FOR EXPERIMENT 1 WITH LINEAR GAUSSIAN SEMS Table 2 reports detailed results including true positive rates (TPRs), false discovery rates (FDRs), structural Hamming distances (SHDs), and running time on rank-specified graphs with linear Gaussian data model. Here the true rank is assumed to be known and is used as the rank parameter in NOTEARSlow-rank. We also test (fast) GES, MMHC, and PC. However, PC is too slow since some nodes may have a high in-degree (i.e., hubs) in large, dense, and low rank graphs. For the same reason, the skeleton may not be correctly estimated by MMHC, which has a similar performance to that of GES. Therefore, we only include the results of GES for comparison. We treat GES favorably by regarding undirected edges as true positives if the true graph has a directed edge in place of the undirected ones. D.6 DETAILED RESULTS FOR EXPERIMENT 4 WITH NON-LINEAR SEMS Table 3 reports the detailed SHDs for each method in Section 5.4. We also mark in bold the best results from methods with or without low rank modifications.
1. What is the focus of the paper regarding low-rank DAG models?
2. What are the strengths of the proposed approach, particularly in exploiting the property of low-rank?
3. What are the weaknesses of the paper, especially regarding the simulation settings?
4. How does the reviewer assess the clarity and novelty of the paper's content?
5. What are the limitations of the proposed method in representing a complete partial DAG?
Review
##########################################################################
Summary: The paper provides a new approach for learning (possibly densely connected) low-rank DAG models in high-dimensional settings. In particular, the paper shows how to exploit the low-rank property for recovering an underlying causal structure. It further characterizes under what circumstances the low-rank assumption holds and confirms this heuristically through simulations. Lastly, the proposed approach is compared against state-of-the-art DAG learning algorithms that require the assumption of a sparse graph.
##########################################################################
Reasons for score: Overall, I vote for accepting. This paper is well written and delivers its main contribution really well. Furthermore, it summarizes the prior work on learning a causal graph well. In addition, the main idea of recovering a graph under a low-rank assumption is novel. However, my major concern is about the simulations of the paper, although I acknowledge that most relevant papers use similar settings. It would be better to emphasize that the proposed algorithm attempts to learn a complete partial DAG (CPDAG), not a DAG. Although some related papers falsely assert that their approaches recover a DAG using conditional independence relationships or a score function, I hope this paper clarifies this point.
##########################################################################
Pros: The paper solves a very important problem of causal inference. It seems to be practical and novel. The paper is really clear and convincing.
##########################################################################
Cons: One important comment from my side is that the way in which you simulate your models is severely biased. What I always do when I simulate models (and I think others should do something similar) is rescale the edge weights for each node such that, if all parents have values with a standard-normal distribution, then the value of the node itself will also have a standard-normal distribution (assuming Gaussian additive noise). In this way one avoids the variance of the variables blowing up (or converging to 0) as one adds more and more nodes to the graph. Therefore, assuming a standard-normal error distribution (or a large error variance) is impractical. Furthermore, in the densely connected graph setting, one must be really careful in determining the range of edge weights; otherwise, the variance of the variables again blows up. Hence, in some respects, the targeted graph is unrealistic in large-scale settings (d is large). Nevertheless, as an emerging field of learning DAG models in polynomial time with a complete search, it should be accepted. However, for a better presentation and a fair comparison, it would be better to change the simulation settings. Lastly, this paper does not explain the complete partial DAG that the proposed method actually finds. In principle, there might be plenty of solutions to the considered optimization problem. Hence, this paper would be clearer for new researchers in DAG model learning if it emphasized CPDAGs or PDAGs.

Update: Although the authors responded that the simulation setting used in the paper does not cause the samples or marginal variances to blow up, that is in general impossible, or else the setting assumes a case so sparse that the considered graphs are almost empty. In addition, as the authors mentioned, I also acknowledge that it is a widely used setting; however, many papers have been rejected because of an unfair simulation setting. I like the main idea of the paper a lot, and hence I hope the authors set up the simulation setting more carefully. Furthermore, it is a really frustrating answer that the authors consider only the case where the graph is uniquely identifiable from purely observational data. As you know, that is really rare when the number of nodes is large (p > 50).
Title Making Stochastic Neural Networks from Deterministic Ones Abstract It has been believed that stochastic feedforward neural networks (SFNN) have several advantages beyond deterministic deep neural networks (DNN): they have more expressive power allowing multi-modal mappings and regularize better due to their stochastic nature. However, training SFNN is notoriously harder. In this paper, we aim at developing efficient training methods for large-scale SFNN, in particular using known architectures and pre-trained parameters of DNN. To this end, we propose a new intermediate stochastic model, called Simplified-SFNN, which can be built upon any baseline DNN and approximates certain SFNN by simplifying its upper latent units above stochastic ones. The main novelty of our approach is in establishing the connection between three models, i.e., DNN → Simplified-SFNN → SFNN, which naturally leads to an efficient training procedure of the stochastic models utilizing pre-trained parameters of DNN. Using several popular DNNs, we show how they can be effectively transferred to the corresponding stochastic models for both multi-modal and classification tasks on MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, our stochastic model built from the wide residual network has 28 layers and 36 million parameters, where the former consistently outperforms the latter for the classification tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect. 1 INTRODUCTION Recently, deterministic deep neural networks (DNN) have demonstrated state-of-the-art performance on many supervised tasks, e.g., speech recognition (Hinton et al., 2012a) and object recognition (Krizhevsky et al., 2012). One of the main components underlying these successes is on the efficient training methods for deeper and wider DNNs, which include backpropagation (Rumelhart et al., 1988), stochastic gradient descent (Robbins & Monro, 1951), dropout/dropconnect (Hinton et al., 2012b; Wan et al., 2013), batch/weight normalization (Ioffe & Szegedy, 2015; Salimans & Kingma, 2016), and various activation functions (Nair & Hinton, 2010; Gulcehre et al., 2016). On the other hand, stochastic feedforward neural networks (SFNN) (Neal, 1990) having random latent units are often necessary in order to model complex stochastic natures in many real-world tasks, e.g., structured prediction (Tang & Salakhutdinov, 2013), image generation (Goodfellow et al., 2014) and memory networks (Zaremba & Sutskever, 2015). Furthermore, it has been believed that SFNN has several advantages beyond DNN (Raiko et al., 2014): it has more expressive power for multi-modal learning and regularizes better for large-scale learning. Training large-scale SFNN is notoriously hard since backpropagation is not directly applicable. Certain stochastic neural networks using continuous random units are known to be trainable efficiently using backpropagation under the variational techniques and the reparameterization tricks (Kingma & Welling, 2013). On the other hand, training SFNN having discrete, i.e., binary or multi-modal, random units is more difficult since intractable probabilistic inference is involved requiring too many random samples. There have been several efforts developing efficient training methods for SFNN having binary random latent units (Neal, 1990; Saul et al., 1996; Tang & Salakhutdinov, 2013; Bengio et al., 2013; Raiko et al., 2014; Gu et al., 2015) (see Section 2.1 for more details). 
However, training SFNN is still significantly slower than doing DNN of the same architecture, e.g., most prior works on this line have considered a small number (at most 5 or so) of layers in SFNN. We aim for the same goal, but our direction is orthogonal to them. Instead of training SFNN directly, we study whether pre-trained parameters of DNN (or easier models) can be transferred to it, possibly with further fine-tuning of light cost. This approach can be attractive since one can utilize recent advances in DNN on its design and training. For example, one can design the network structure of SFNN following known specialized ones of DNN and use their pre-trained parameters. To this end, we first try transferring pre-trained parameters of DNN using sigmoid activation functions to those of the corresponding SFNN directly. In our experiments, the heuristic reasonably works well. For multi-modal learning, SFNN under such a simple transformation outperforms DNN. Even for the MNIST classification, the former performs similarly as the latter (see Section 2 for more details). However, it is questionable whether a similar strategy works in general, particularly for other unbounded activation functions like ReLU (Nair & Hinton, 2010) since SFNN has binary, i.e., bounded, random latent units. Moreover, it lost the regularization benefit of SFNN: it is rather believed that transferring parameters of stochastic models to DNN helps its regularization, but the opposite direction is unlikely possible. To address the issues, we propose a special form of stochastic neural networks, named SimplifiedSFNN, which intermediates between SFNN and DNN, having the following properties. First, Simplified-SFNN can be built upon any baseline DNN, possibly having unbounded activation functions. The most significant part of our approach lies in providing rigorous network knowledge transferring (Chen et al., 2015) between Simplified-SFNN and DNN. In particular, we prove that parameters of DNN can be transformed to those of the corresponding Simplified-SFNN while preserving the performance, i.e., both represent the same mapping and features. Second, Simplified-SFNN approximates certain SFNN, better than DNN, by simplifying its upper latent units above stochastic ones using two different non-linear activation functions. Simplified-SFNN is much easier to train than SFNN while utilizing its stochastic nature for regularization. The above connection DNN→ Simplified-SFNN→ SFNN naturally suggests the following training procedure for both SFNN and Simplified-SFNN: train a baseline DNN first and then fine-tune its corresponding Simplified-SFNN initialized by the transformed DNN parameters. The pre-training stage accelerates the training task since DNN is faster to train than Simplified-SFNN. In addition, one can also utilize known DNN training techniques such as dropout and batch normalization for fine-tuning Simplified-SFNN. In our experiments, we train SFNN and Simplified-SFNN under the proposed strategy. They consistently outperform the corresponding DNN for both multi-modal and classification tasks, where the former and the latter are for measuring the model expressive power and the regularization effect, respectively. To the best of our knowledge, we are the first to confirm that SFNN indeed regularizes better than DNN. We also construct the stochastic models following the same network structure of popular DNNs including Lenet-5 (LeCun et al., 1998), NIN (Lin et al., 2014) and WRN (Zagoruyko & Komodakis, 2016). 
In particular, WRN (wide residual network) of 28 layers and 36 million parameters has shown the state-of-art performances on CIFAR-10 and CIFAR-100 classification datasets, and our stochastic models built upon WRN outperform the deterministic WRN on the datasets. Organization. In Section 2, we focus on DNNs having sigmoid and ReLU activation functions and study simple transformations of their parameters to those of SFNN. In Section 3, we consider DNNs having general activation functions and describe more advanced transformations via introducing a new model, named Simplified-SFNN. 2 SIMPLE TRANSFORMATION FROM DNN TO SFNN 2.1 PRELIMINARIES FOR SFNN Stochastic feedforward neural network (SFNN) is a hybrid model, which has both stochastic binary and deterministic hidden units. We first introduce SFNN with one stochastic hidden layer (and without deterministic hidden layers) for simplicity. Throughout this paper, we commonly denote the bias for unit i and the weight matrix of the `-th hidden layer by b`i and W `, respectively. Then, the stochastic hidden layer in SFNN is defined as a binary random vector with N1 units, i.e., h1 ∈ {0, 1}N1 , drawn under the following distribution: P ( h1 | x ) = N1∏ i=1 P ( h1i | x ) , where P ( h1i = 1 | x ) = σ ( W1ix+ b 1 i ) . (1) In the above, x is the input vector and σ (x) = 1/ (1 + e−x) is the sigmoid function. Our conditional distribution of the output y is defined as follows: P (y | x) = EP (h1|x) [ P ( y | h1 )] = EP (h1|x) [ N ( y |W2h1 + b2, σ2y )] , where N (·) denotes the normal distribution with mean W2h1 + b2 and (fixed) variance σ2y . Therefore, P (y | x) can express a very complex, multi-modal distribution since it is a mixture of exponentially many normal distributions. The multi-layer extension is straightforward via a combination of stochastic and deterministic hidden layers, e.g., see Tang & Salakhutdinov (2013), Raiko et al. (2014). Furthermore, one can use any other output distributions as like DNN, e.g., softmax for classification tasks. There are two computational issues for training SFNN: computing expectations with respect to stochastic units in forward pass and computing gradients in backward pass. One can notice that both are computationally intractable since they require summations over exponentially many configurations of all stochastic units. First, in order to handle the issue in forward pass, one can use the follow- ing Monte Carlo approximation for estimating the expectation: P (y | x) w 1M M∑ m=1 P (y | h(m)), where h(m) ∼ P ( h1 | x ) and M is the number of samples. This random estimator is unbiased and has relatively low variance (Tang & Salakhutdinov, 2013) since its accuracy does not depend on the dimensionality of h1 and one can draw samples from the exact distribution. Next, in order to handle the issue in backward pass, Neal (1990) proposed a Gibbs sampling, but it is known that it often mixes poorly. Saul et al. (1996) proposed a variational learning based on the mean-field approximation, but it has additional parameters making the variational lower bound looser. More recently, several other techniques have been proposed including unbiased estimators of the variational bound using importance sampling (Tang & Salakhutdinov, 2013; Raiko et al., 2014) and biased/unbiased estimators of the gradient for approximating backpropagation (Bengio et al., 2013; Raiko et al., 2014; Gu et al., 2015). 
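To make the forward-pass approximation concrete, the sketch below draws M samples of the stochastic layer defined in (1) and averages the resulting output means. It estimates the predictive mean of y given x only; keeping the per-sample Gaussian components instead of averaging their means would give the Monte Carlo estimate of the full mixture density P(y | x). This is a minimal illustration, not the authors' implementation:

```python
import numpy as np

def sfnn_predictive_mean(x, W1, b1, W2, b2, n_samples=500, rng=None):
    """Monte Carlo forward pass for an SFNN with one stochastic binary hidden layer."""
    rng = np.random.default_rng(0) if rng is None else rng
    p = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))                  # P(h_i = 1 | x), Eq. (1)
    h = (rng.random((n_samples, p.size)) < p).astype(float)   # M samples of h^1
    return (h @ W2.T + b2).mean(axis=0)                       # average of W2 h^1 + b2
```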
2.2 SIMPLE TRANSFORMATION FROM SIGMOID-DNN AND RELU-DNN TO SFNN Despite the recent advances, training SFNN is still very slow compared to DNN due to the sampling procedures: in particular, it is notoriously hard to train SFNN when the network structure is deeper and wider. In order to handle these issues, we consider the following approximation: P (y | x) = EP (h1|x) [ N ( y |W2h1 + b2, σ2y )] w N ( y | EP (h1|x) [ W2h1 ] + b2, σ2y ) = N ( y |W2σ ( W1x+ b1 ) + b2, σ2y ) . (2) Note that the above approximation corresponds to replacing stochastic units by deterministic ones such that their hidden activation values are same as marginal distributions of stochastic units, i.e., SFNN can be approximated by DNN using sigmoid activation functions, say sigmoid-DNN. When there exist more latent layers above the stochastic one, one has to apply similar approximations to all of them, i.e., exchanging the orders of expectations and non-linear functions, for making DNN and SFNN are equivalent. Therefore, instead of training SFNN directly, one can try transferring pretrained parameters of sigmoid-DNN to those of the corresponding SFNN directly: train sigmoidDNN instead of SFNN, and replace deterministic units by stochastic ones for the inference purpose. Although such a strategy looks somewhat ‘rude’, it was often observed in the literature that it reasonably works well for SFNN (Raiko et al., 2014) and we also evaluate it as reported in Table 1. We also note that similar approximations appear in the context of dropout: it trains a stochastic model averaging exponentially many DNNs sharing parameters, but also approximates a single DNN well. Now we investigate a similar transformation in the case when DNN uses the unbounded ReLU activation function, say ReLU-DNN. Many recent deep networks are of ReLU-DNN type due to the gradient vanishing problem, and their pre-trained parameters are often available. Although it is straightforward to build SFNN from sigmoid-DNN, it is less clear in this case since ReLU is unbounded. To handle this issue, we redefine the stochastic latent units of SFNN: P ( h1 | x ) = N1∏ i=1 P ( h1i | x ) , where P ( h1i = 1 | x ) = min { αf ( W1ix+ b 1 i ) , 1 } . (3) In the above, f(x) = max{x, 0} is the ReLU activation function and α is some hyper-parameter. A simple transformation can be defined similarly as the case of sigmoid-DNN via replacing deterministic units by stochastic ones. However, to preserve the parameter information of ReLU-DNN, one has to choose α such that αf ( W1ix+ b 1 i ) ≤ 1 and rescale upper parameters W2 as follows: α−1 ← max i,x ∣∣∣f (Ŵ1ix+ b̂1i)∣∣∣ , (W1, b1)← (Ŵ1, b̂1) , (W2, b2)← ( Ŵ2/α, b̂2) . (4) Then, applying similar approximations as in (2), i.e., exchanging the orders of expectations and non-linear functions, one can observe that ReLU-DNN and SFNN are equivalent. We evaluate the performance of the simple transformations from DNN to SFNN on the MNIST dataset (LeCun et al., 1998) and the synthetic dataset (Bishop, 1994), where the former and the latter are popular datasets used for a classification task and a multi-modal (i.e., one-to-many mappings) prediction learning, respectively. In all experiments reported in this paper, we commonly use the softmax and Gaussian with standard deviation of σy = 0.05 are used for the output probability on classification and regression tasks, respectively. The only first hidden layer of DNN is replaced by stochastic one, and we use 500 samples for estimating the expectations in the SFNN inference. 
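A minimal NumPy sketch of the simple transformation (3)-(4) from a pre-trained ReLU-DNN to SFNN is given below; the dataset matrix X used to compute the maximum activation, the weight shapes, and the function names are assumptions for illustration.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_dnn_to_sfnn(W1_hat, b1_hat, W2_hat, b2_hat, X):
    # Pick alpha so that alpha * f(W1_i x + b1_i) <= 1 for all x in X, keep the
    # first-layer parameters, and rescale the second layer by 1/alpha as in (4).
    alpha = 1.0 / np.abs(relu(X @ W1_hat.T + b1_hat)).max()
    return alpha, (W1_hat, b1_hat), (W2_hat / alpha, b2_hat)

def sfnn_hidden_prob(x, alpha, W1, b1):
    # P(h1_i = 1 | x) = min{alpha * f(W1_i x + b1_i), 1} as in (3).
    return np.minimum(alpha * relu(W1 @ x + b1), 1.0)

With this choice, E[W2 h1 | x] = (W2_hat / alpha) * alpha f(W1_hat x + b1_hat) = W2_hat f(W1_hat x + b1_hat) on the data used to set alpha, so the marginal-matching approximation in (2) reproduces the ReLU-DNN computation.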
As reported in Table 1, we observe that the simple transformation often works well for both tasks: the SFNN and sigmoid-DNN inferences (using the same parameters trained by sigmoid-DNN) perform similarly for the classification task, and the former significantly outperforms the latter for the multi-modal task (also see Figure 1). This suggests that the expensive SFNN training might not be necessary, depending on the targeted learning quality. However, in the case of ReLU, SFNN performs much worse than ReLU-DNN for the MNIST classification task under the parameter transformation.

3 TRANSFORMATION FROM DNN TO SFNN VIA SIMPLIFIED-SFNN

In this section, we propose an advanced method to utilize the pre-trained parameters of DNN for training SFNN. As shown in the previous section, simple parameter transformations from DNN to SFNN are not guaranteed to work in general, in particular for activation functions other than sigmoid. Moreover, training DNN does not utilize the stochastic regularizing effect, which is an important benefit of SFNN. To address these issues, we design an intermediate model, called Simplified-SFNN. The proposed model is a special form of stochastic neural network, which approximates certain SFNN by simplifying its upper latent units above stochastic ones. Then, we establish more rigorous connections between three models: DNN → Simplified-SFNN → SFNN, which leads to an efficient training procedure of the stochastic models utilizing pre-trained parameters of DNN. In our experiments, we evaluate the strategy for various tasks and popular DNN architectures.

3.1 SIMPLIFIED-SFNN OF TWO HIDDEN LAYERS AND NON-NEGATIVE ACTIVATION FUNCTIONS

For clarity of presentation, we first introduce Simplified-SFNN with two hidden layers and non-negative activation functions; its extensions to multiple layers and general activation functions are presented in Appendix B. We also remark that we primarily describe fully-connected Simplified-SFNNs, but their convolutional versions can also be naturally defined. In the Simplified-SFNN of two hidden layers, we assume that the first and second hidden layers consist of stochastic binary hidden units and deterministic ones, respectively. As in (3), the first layer is defined as a binary random vector with $N_1$ units, i.e., $h^1 \in \{0,1\}^{N_1}$, drawn under the following distribution:
$$P\left(h^1 \mid x\right) = \prod_{i=1}^{N_1} P\left(h^1_i \mid x\right), \quad \text{where } P\left(h^1_i = 1 \mid x\right) = \min\left\{\alpha_1 f\left(W^1_i x + b^1_i\right),\, 1\right\}, \qquad (5)$$
where $x$ is the input vector, $\alpha_1 > 0$ is a hyper-parameter for the first layer, and $f : \mathbb{R} \to \mathbb{R}^+$ is some non-negative non-linear activation function with $|f'(x)| \le 1$ for all $x \in \mathbb{R}$, e.g., ReLU and sigmoid activation functions. Now the second layer is defined as the following deterministic vector with $N_2$ units, i.e., $h^2(x) \in \mathbb{R}^{N_2}$:
$$h^2(x) = \left[\, f\left(\alpha_2\left(\mathbb{E}_{P(h^1 \mid x)}\left[s\left(W^2_j h^1 + b^2_j\right)\right] - s(0)\right)\right) : \forall j \,\right], \qquad (6)$$
where $\alpha_2 > 0$ is a hyper-parameter for the second layer and $s : \mathbb{R} \to \mathbb{R}$ is a differentiable function with $|s''(x)| \le 1$ for all $x \in \mathbb{R}$, e.g., sigmoid and tanh functions. In our experiments, we use the sigmoid function for $s(x)$. Here, one can note that the proposed model has the same computational issues as SFNN in the forward and backward passes due to the complex expectation. One can train Simplified-SFNN similarly to SFNN: we use the Monte Carlo approximation for estimating the expectation and the (biased) estimator of the gradient for approximating backpropagation inspired by Raiko et al. (2014) (a more detailed explanation is presented in Appendix A).
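The following NumPy sketch spells out the forward pass of this two-hidden-layer Simplified-SFNN with f = ReLU and s = sigmoid, estimating the expectation in (6) with M Monte Carlo samples as just described; the shapes, sample count, and helper names are illustrative assumptions.

import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def relu(z):    return np.maximum(z, 0.0)

def simplified_sfnn_forward(x, alpha1, W1, b1, alpha2, W2, b2, M=20, rng=None):
    # First hidden layer: stochastic binary units with marginals given by (5).
    rng = np.random.default_rng(0) if rng is None else rng
    p = np.minimum(alpha1 * relu(W1 @ x + b1), 1.0)      # P(h1_i = 1 | x)
    h1 = (rng.random((M, p.size)) < p).astype(float)     # M samples of h1
    # Second hidden layer (6): f(alpha2 * (E[s(W2_j h1 + b2_j)] - s(0))),
    # with the expectation replaced by a Monte Carlo average over the samples.
    inner = sigmoid(h1 @ W2.T + b2).mean(axis=0)
    return relu(alpha2 * (inner - sigmoid(0.0)))

In the experiments reported below, M = 20 samples are used during fine-tuning and 500 samples at test time.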
We are interested in transferring parameters of DNN to Simplified-SFNN to utilize the training benefits of DNN since the former is much faster to train than the latter. To this end, we consider the following DNN of which `-th hidden layer is deterministic and defined as follows: ĥ` (x) = [ ĥ`i (x) = f ( Ŵ`i ĥ `−1 (x) + b̂`i ) : i ∈ N ` ] , (7) where ĥ0(x) = x. As stated in the following theorem, we establish a rigorous way how to initialize parameters of Simplified-SFNN in order to transfer the knowledge stored in DNN. Theorem 1 Assume that both DNN and Simplified-SFNN with two hidden layers have same network structure with non-negative activation function f . Given parameters {Ŵ`, b̂` : ` = 1, 2} of DNN and input dataset D, choose those of Simplified-SFNN as follows:( α1,W 1, b1 ) ← ( 1 γ1 , Ŵ1, b̂1 ) , ( α2,W 2, b2 ) ← ( γ2γ1 s′ (0) , 1 γ2 Ŵ2, 1 γ1γ2 b̂2 ) , (8) where γ1 = max i,x∈D ∣∣∣f (Ŵ1ix+ b̂1i)∣∣∣ and γ2 > 0 is any positive constant. Then, it follows that ∣∣∣h2j (x)− ĥ2j (x)∣∣∣ ≤ γ1 (∑ i ∣∣∣Ŵ 2ij∣∣∣+ b̂2jγ−11 )2 2s′ (0) γ2 , ∀j,x ∈ D. The proof of the above theorem is presented in Appendix D.1. Our proof is built upon the first-order Taylor expansion of non-linear function s(x). Theorem 1 implies that one can make Simplified-SFNN represent the function values of DNN with bounded errors using a linear transformation. Furthermore, the errors can be made arbitrarily small by choosing large γ2, i.e., lim γ2→∞ ∣∣∣h2j (x)− ĥ2j (x)∣∣∣ = 0, ∀j,x ∈ D. Figure 2(c) shows that knowledge transferring loss decreases as γ2 increases on MNIST classification. Based on this, we choose γ2 = 50 commonly for all experiments. 3.2 WHY SIMPLIFIED-SFNN ? Given a Simplified-SFNN model, the corresponding SFNN can be naturally defined by taking out the expectation in (6). As illustrated in Figure 2(a), the main difference between SFNN and SimplifiedSFNN is that the randomness of the stochastic layer propagates only to its upper layer in the latter, i.e., the randomness of h1 is averaged out at its upper units h2 and does not propagate to h3 or output y. Hence, Simplified-SFNN is no longer a Bayesian network. This makes training Simplified-SFNN much easier than SFNN since random samples are not required at some layers1 and consequently the quality of gradient estimations can also be improved, in particular for unbounded activation functions. Furthermore, one can use the same approximation procedure (2) to see that SimplifiedSFNN approximates SFNN. However, since Simplified-SFNN still maintains binary random units, it uses approximation steps later, in comparison with DNN. In summary, Simplified-SFNN is an intermediate model between DNN and SFNN, i.e., DNN→ Simplified-SFNN→ SFNN. The above connection naturally suggests the following training procedure for both SFNN and Simplified-SFNN: train a baseline DNN first and then fine-tune its corresponding Simplified-SFNN initialized by the transformed DNN parameters. Finally, the fine-tuned parameters can be used for SFNN as well. We evaluate the strategy for the MNIST classification, which is reported in Table 2 (see Appendix C for more detailed experiment setups). We found that SFNN under the two-stage training always performs better than SFNN under a simple transformation (4) from ReLU-DNN. 
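To make the Theorem 1 initialization (8), the first stage of the two-stage procedure just described, concrete, the following NumPy sketch computes the transformed parameters assuming f = ReLU and s = sigmoid (so s'(0) = 1/4). The dataset matrix X used to compute gamma1 and the default gamma2 = 50 follow the setup above, while the function name and shapes are assumptions.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def dnn_to_simplified_sfnn(W1_hat, b1_hat, W2_hat, b2_hat, X, gamma2=50.0):
    # gamma1 = max_{i, x in D} |f(W1_i x + b1_i)|, computed over the input dataset X.
    gamma1 = np.abs(relu(X @ W1_hat.T + b1_hat)).max()
    s_prime_0 = 0.25                                   # s'(0) for s = sigmoid
    alpha1, W1, b1 = 1.0 / gamma1, W1_hat, b1_hat      # (alpha1, W1, b1) <- (1/gamma1, W1_hat, b1_hat)
    alpha2 = gamma1 * gamma2 / s_prime_0               # alpha2 <- gamma1 * gamma2 / s'(0)
    W2, b2 = W2_hat / gamma2, b2_hat / (gamma1 * gamma2)
    return (alpha1, W1, b1), (alpha2, W2, b2)

Fine-tuning of Simplified-SFNN then starts from these transformed parameters.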
1 For example, if one replaces the first feature maps in the fifth residual unit of Pre-ResNet having 164 layers (He et al., 2016) by stochastic ones, then the corresponding DNN, Simplified-SFNN and SFNN took 1 min 35 sec, 2 min 52 sec and 16 min 26 sec per training epoch, respectively, on our machine with one Intel CPU (Core i7-5820K 6-Core@3.3GHz) and one NVIDIA GPU (GTX Titan X, 3072 CUDA cores). Here, we trained both stochastic models using the biased estimator (Raiko et al., 2014) with 10 random samples on the CIFAR-10 dataset.

More interestingly, Simplified-SFNN consistently outperforms its baseline DNN due to the stochastic regularizing effect, even when we train both models using dropout (Hinton et al., 2012b) and batch normalization (Ioffe & Szegedy, 2015). In order to confirm the regularization effects, one can again approximate a trained Simplified-SFNN by a new deterministic DNN, which we call DNN∗; it is different from its baseline DNN and is obtained by the following approximation at the upper latent units above binary random units:
$$\mathbb{E}_{P(h^\ell \mid x)}\left[s\left(W^{\ell+1}_j h^\ell\right)\right] \approx s\left(\mathbb{E}_{P(h^\ell \mid x)}\left[W^{\ell+1}_j h^\ell\right]\right) = s\left(\sum_i W^{\ell+1}_{ij} P\left(h^\ell_i = 1 \mid x\right)\right). \qquad (9)$$
We found that DNN∗ using the fine-tuned parameters of Simplified-SFNN also outperforms the baseline DNN, as shown in Table 2 and Figure 2(b).

3.3 EXPERIMENTAL RESULTS ON MULTI-MODAL LEARNING AND CONVOLUTIONAL NETWORKS

We present several experimental results for both multi-modal and classification tasks on MNIST (LeCun et al., 1998), Toronto Face Database (TFD) (Susskind et al., 2010), CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009) and SVHN (Netzer et al., 2011). Here, we present some key results due to the space constraints; more detailed explanations of our experiment setups are presented in Appendix C. We first verify that it is possible to learn one-to-many mappings via Simplified-SFNN on the TFD and MNIST datasets, where the former and the latter are used to predict multiple facial expressions from the mean of face images per individual and the lower half of the MNIST digit given the upper half, respectively. We remark that both tasks are commonly performed in other recent works to test multi-modal learning using SFNN (Raiko et al., 2014; Gu et al., 2015). In all experiments, we first train a baseline DNN, and the trained parameters of DNN are used for further fine-tuning those of Simplified-SFNN. As shown in Table 3 and Figure 3, stochastic models outperform their baseline DNN, and generate different digits for the case of ambiguous inputs (while DNN does not). We also evaluate the regularization effect of Simplified-SFNN for the classification tasks on CIFAR-10, CIFAR-100 and SVHN. Table 4 reports the classification error rates using convolutional neural networks such as Lenet-5 (LeCun et al., 1998), NIN (Lin et al., 2014) and WRN (Zagoruyko & Komodakis, 2016). Due to the regularization effects, Simplified-SFNNs consistently outperform their baseline DNNs. For example, WRN∗ outperforms WRN by 0.08% on CIFAR-10 and 0.58% on CIFAR-100. We expect that introducing more stochastic layers would decrease the error further (see Figure 4), but it increases the fine-tuning time-complexity of Simplified-SFNN.
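To illustrate the approximation (9) above, the sketch below evaluates the deterministic DNN∗ obtained from a fine-tuned two-hidden-layer Simplified-SFNN: the expectation over the binary units is pushed inside s, so no samples are needed at test time. The bias term from (6) is kept here, and, as elsewhere, the shapes and names are assumptions rather than the paper's code.

import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def relu(z):    return np.maximum(z, 0.0)

def dnn_star_hidden(x, alpha1, W1, b1, alpha2, W2, b2):
    # Marginals of the stochastic layer, as in (5).
    p = np.minimum(alpha1 * relu(W1 @ x + b1), 1.0)
    # Approximation (9): E[s(W2_j h1 + b2_j)] ~ s(sum_i W2_ij P(h1_i = 1 | x) + b2_j).
    inner = sigmoid(W2 @ p + b2)
    return relu(alpha2 * (inner - sigmoid(0.0)))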
4 CONCLUSION

In order to develop an efficient training method for large-scale SFNN, this paper proposes a new intermediate stochastic model, called Simplified-SFNN. We establish the connection between three models, i.e., DNN → Simplified-SFNN → SFNN, which naturally leads to an efficient training procedure for the stochastic models utilizing pre-trained parameters and architectures of DNN. We believe that our work brings a new important direction for training stochastic neural networks, which should be of broader interest in many related applications.

A TRAINING SIMPLIFIED-SFNN

The parameters of Simplified-SFNN can be learned using a variant of the backpropagation algorithm (Rumelhart et al., 1988) in a similar manner to DNN. However, in contrast to DNN, there are two computational issues for Simplified-SFNN: computing expectations with respect to stochastic units in the forward pass and computing gradients in the backward pass. One can notice that both are intractable since they require summations over all possible configurations of all stochastic units. First, in order to handle the issue in the forward pass, we use the following Monte Carlo approximation for estimating the expectation:
$$\mathbb{E}_{P(h^1 \mid x)}\left[s\left(W^2_j h^1 + b^2_j\right)\right] \approx \frac{1}{M} \sum_{m=1}^{M} s\left(W^2_j h^{(m)} + b^2_j\right), \qquad h^{(m)} \sim P\left(h^1 \mid x\right),$$
where $M$ is the number of samples. This random estimator is unbiased and has relatively low variance (Tang & Salakhutdinov, 2013) since its accuracy does not depend on the dimensionality of $h^1$ and one can draw samples from the exact distribution. Next, in order to handle the issue in the backward pass, we use the following approximation inspired by Raiko et al. (2014):
$$\frac{\partial}{\partial W^2_j}\, \mathbb{E}_{P(h^1 \mid x)}\left[s\left(W^2_j h^1 + b^2_j\right)\right] \approx \frac{1}{M} \sum_{m} \frac{\partial}{\partial W^2_j}\, s\left(W^2_j h^{(m)} + b^2_j\right),$$
$$\frac{\partial}{\partial W^1_i}\, \mathbb{E}_{P(h^1 \mid x)}\left[s\left(W^2_j h^1 + b^2_j\right)\right] \approx \frac{W^2_{ij}}{M} \sum_{m} s'\left(W^2_j h^{(m)} + b^2_j\right) \frac{\partial}{\partial W^1_i}\, P\left(h^1_i = 1 \mid x\right),$$
where $h^{(m)} \sim P\left(h^1 \mid x\right)$ and $M$ is the number of samples. In our experiments, we commonly choose $M = 20$.

B EXTENSIONS OF SIMPLIFIED-SFNN

In this section, we describe how the network knowledge transferring between Simplified-SFNN and DNN, i.e., Theorem 1, generalizes to multiple layers and general activation functions.

B.1 EXTENSION TO MULTIPLE LAYERS

A deeper Simplified-SFNN with $L$ hidden layers can be defined similarly to the case of $L = 2$. We also establish network knowledge transferring between Simplified-SFNN and DNN with $L$ hidden layers as stated in the following theorem. Here, we assume that stochastic layers are not consecutive for simpler presentation, but the theorem is generalizable to consecutive stochastic layers.

Theorem 2 Assume that both DNN and Simplified-SFNN with $L$ hidden layers have the same network structure with non-negative activation function $f$. Given parameters $\{\widehat{W}^\ell, \widehat{b}^\ell : \ell = 1, \dots, L\}$ of DNN and input dataset $D$, choose the same ones for Simplified-SFNN initially and modify them for each $\ell$-th stochastic layer and its upper layer as follows:
$$\alpha_\ell \leftarrow \frac{1}{\gamma_\ell}, \qquad \left(\alpha_{\ell+1}, W^{\ell+1}, b^{\ell+1}\right) \leftarrow \left(\frac{\gamma_\ell \gamma_{\ell+1}}{s'(0)},\ \frac{\widehat{W}^{\ell+1}}{\gamma_{\ell+1}},\ \frac{\widehat{b}^{\ell+1}}{\gamma_\ell \gamma_{\ell+1}}\right),$$
where $\gamma_\ell = \max_{i,\, x \in D} \left| f\left(\widehat{W}^\ell_i h^{\ell-1}(x) + \widehat{b}^\ell_i\right) \right|$ and $\gamma_{\ell+1}$ is any positive constant. Then, it follows that
$$\lim_{\substack{\gamma_{\ell+1} \to \infty \\ \forall\, \text{stochastic hidden layer } \ell}} \left| h^L_j(x) - \widehat{h}^L_j(x) \right| = 0, \qquad \forall j,\ x \in D.$$

The above theorem again implies that it is possible to transfer knowledge from DNN to Simplified-SFNN by choosing large $\gamma_{\ell+1}$. The proof of Theorem 2 is similar to that of Theorem 1 and is given in Appendix D.2.

B.2 EXTENSION TO GENERAL ACTIVATION FUNCTIONS

In this section, we describe an extended version of Simplified-SFNN which can utilize any activation function.
To this end, we modify the definitions of stochastic layers and their upper layers by introducing certain additional terms. If the `-th hidden layer is stochastic, then we slightly modify the original definition (5) as follows: P ( h` | x ) = N`∏ i=1 P ( h`i | x ) with P ( h`i = 1 | x ) = min { α`f ( W1ix+ b 1 i + 1 2 ) , 1 } , where f : R → R is a non-linear (possibly, negative) activation function with |f ′(x)| ≤ 1 for all x ∈ R. In addition, we re-define its upper layer as follows: h`+1 (x) = [ f ( α`+1 ( EP (h`|x) [ s ( W`+1j h ` + b`+1j )] − s (0)−s ′ (0) 2 ∑ i W `+1ij )) : ∀j ] , where h0(x) = x and s : R→ R is a differentiable function with |s′′(x)| ≤ 1 for all x ∈ R. Under this general Simplified-SFNN model, we also show that transferring network knowledge from DNN to Simplified-SFNN is possible as stated in the following theorem. Here, we again assume that stochastic layers are not consecutive for simpler presentation. Theorem 3 Assume that both DNN and Simplified-SFNN with L hidden layers have same network structure with non-linear activation function f . Given parameters {Ŵ`, b̂` : ` = 1, . . . , L} of DNN and input dataset D, choose the same ones for Simplified-SFNN initially and modify them for each `-th stochastic layer and its upper layer as follows: α` ← 1 2γ` , ( α`+1,W `+1, b`+1 ) ← ( 2γ`γ`+1 s′(0) , Ŵ`+1 γ`+1 , b̂`+1 2γ`γ`+1 ) , where γ` = max i,x∈D ∣∣∣f (Ŵ`ih`−1(x) + b̂`i)∣∣∣, and γ`+1 is any positive constant. Then, it follows that lim γ`+1→∞ ∀ stochastic hidden layer ` ∣∣∣hLj (x)− ĥLj (x)∣∣∣ = 0, ∀j,x ∈ D. We omit the proof of the above theorem since it is somewhat direct adaptation of that of Theorem 2. C EXPERIMENTAL SETUPS In this section, we describe detailed explanation about all the experiments described in Section 3. In all experiments, the softmax and Gaussian with the standard deviation of 0.05 are used as the output probability for the classification task and the multi-modal prediction, respectively. The loss was minimized using ADAM learning rule (Kingma & Ba, 2014) with a mini-batch size of 128. We used an exponentially decaying learning rate. C.1 CLASSIFICATION ON MNIST The MNIST dataset consists of 28 × 28 pixel greyscale images, each containing a digit 0 to 9 with 60,000 training and 10,000 test images. For this experiment, we do not use any data augmentation or pre-processing. Hyper-parameters are tuned on the validation set consisting of the last 10,000 training images. All Simplified-SFNNs are constructed by replacing the first hidden layer of a baseline DNN with stochastic hidden layer. As described in Section 3.2, we train Simplified-SFNNs under the two-stage procedure: first train a baseline DNN for first 200 epochs, and the trained parameters of DNN are used for initializing those of Simplified-SFNN. For 50 epochs, we train simplified-SFNN. We choose the hyper-parameter γ2 = 50 in the parameter transformation. All Simplified-SFNNs are trained with M = 20 samples at each epoch, and in the test, we use 500 samples. C.2 MULTI-MODAL REGRESSION ON TFD AND MNIST The Toronto Face Database (TFD) (Susskind et al., 2010) dataset consists of 48×48 pixel greyscale images, each containing a face image of 900 individuals with 7 different expressions. Similar to (Raiko et al., 2014), we use 124 individuals with at least 10 facial expressions as data. We randomly choose 100 individuals with 1403 images for training and the remaining 24 individuals with 326 images for the test. 
We take the mean of face images per individual as the input and set the output as the different expressions of the same individual. The MNIST dataset consists of 28 × 28 pixel greyscale images, each containing a digit 0 to 9 with 60,000 training and 10,000 test images. For this experiments, each pixel of every digit images is binarized using its grey-scale value. We take the upper half of the MNIST digit as the input and set the output as the lower half of it. All SimplifiedSFNNs are constructed by replacing the first hidden layer of a baseline DNN with stochastic hidden layer. We train Simplified-SFNNs with M = 20 samples at each epoch, and in the test, we use 500 samples. We use 200 hidden units for each layer of neural networks in two experiments. Learning rate is chosen from {0.005 , 0.002, 0.001, ... , 0.0001} , and the best result is reported for both tasks. C.3 CLASSIFICATION ON CIFAR-10, CIFAR-100 AND SVHN The CIFAR-10 and CIFAR-100 datasets consist of 50,000 training and 10,000 test images. The SVHN dataset consists of 73,257 training and 26,032 test images.2 We pre-process the data using global contrast normalization and ZCA whitening. For these datasets, we design a convolutional version of Simplified-SFNN. In a similar manner to the case of fully-connected networks, one can define a stochastic convolution layer, which considers the input feature map as a binary random matrix and generates the output feature map as defined in (6). All Simplified-SFNNs are constructed by replacing a hidden feature map of a baseline models, i.e., Lenet-5, NIN and WRN, with stochastic one as shown in Figure 5(d). We use WRN with 16 and 28 layers for SVHN and CIFAR datasets, respectively, since they showed state-of-the-art performance as reported by Zagoruyko & Komodakis (2016). In case of WRN, we introduce up to two stochastic convolution layers.For 100 epochs, we first train baseline models, i.e., Lenet-5, NIN and WRN, and trained parameters are used for initializing those of Simplified-SFNNs. All Simplified-SFNNs are trained with M = 5 samples and the test error is only measured by the approximation (9). The test errors of baseline models are measured after training them for 200 epochs similar to Zagoruyko & Komodakis (2016). D PROOFS OF THEOREMS D.1 PROOF OF THEOREM 1 First consider the first hidden layer, i.e., stochastic layer. Let γ1 = max i,x∈D f ( Ŵ1ix+ b̂ 1 i ) be the maximum value of hidden units in DNN. If we initialize the parameters ( α1,W 1, b1 ) ←( 1 γ1 , Ŵ1, b̂1 ) , then the marginal distribution of each hidden unit i becomes P ( h1i = 1 | x,W1,b1 ) = min { α1f ( Ŵ1ix+ b̂ 1 i ) , 1 } = 1 γ1 f ( Ŵ1ix+ b̂ 1 i ) , ∀i,x ∈ D. (10) 2We do not use the extra SVHN dataset for training. Next consider the second hidden layer. From Taylor’s theorem, there exists a value z between 0 and x such that s(x) = s(0) + s′(0)x + R(x), where R(x) = s ′′(z)x2 2! . Since we consider a binary random vector, i.e., h1 ∈ {0, 1}N1 , one can write EP (h1|x) [ s ( βj ( h1 ))] = ∑ h1 ( s (0) + s′ (0)βj ( h1 ) +R ( βj ( h1 ))) P ( h1 | x ) = s (0) + s′ (0) (∑ i W 2ijP (h 1 i = 1 | x) + b2j ) + EP (h1|x) [ R(βj(h 1)) ] , where βj ( h1 ) := W2jh 1 + b2j is the incoming signal. From (6) and (10), for every hidden unit j, it follows that h2j ( x;W2,b2 ) = f ( α2 ( s′(0) ( 1 γ1 ∑ i W 2ij ĥ 1 i (x) + b 2 j ) + EP (h1|x) [ R ( βj ( h1 ))])) . 
Since we assume that |f ′(x)| ≤ 1, the following inequality holds:∣∣∣∣∣h2j (x;W2,b2)− f ( α2s ′(0) ( 1 γ1 ∑ i W 2ij ĥ 1 i (x) + b 2 j ))∣∣∣∣∣ ≤ ∣∣α2EP (h1|x) [R(βj(h1))]∣∣ ≤ α2 2 EP (h1|x) [( W2jh 1 + b2j )2] , where we use |s′′(z)| < 1 for the last inequality. Therefore, it follows that ∣∣∣h2j (x;W2,b2)− ĥ2j (x;Ŵ2, b̂2)∣∣∣ ≤ γ1 (∑ i ∣∣∣Ŵ 2ij∣∣∣+ b̂2jγ−11 )2 2s′(0)γ2 , ∀j, since we set ( α2,W 2, b2 ) ← ( γ2γ1 s′(0) , Ŵ2 γ2 , γ−11 γ2 b̂2 ) . This completes the proof of Theorem 1. D.2 PROOF OF THEOREM 2 For the proof of Theorem 2, we first state the two key lemmas on error propagation in SimplifiedSFNN. Lemma 4 Assume that there exists some positive constant B such that∣∣∣h`−1i (x)− ĥ`−1i (x)∣∣∣ ≤ B, ∀i,x ∈ D, and the `-th hidden layer of NCSFNN is standard deterministic layer as defined in (7). Given parameters {Ŵ`, b̂`} of DNN, choose same ones for NCSFNN. Then, the following inequality holds:∣∣∣h`j (x)− ĥ`j (x)∣∣∣ ≤ BN `−1Ŵ `max, ∀j,x ∈ D. where Ŵ `max = max ij ∣∣∣Ŵ `ij∣∣∣. Proof. See Appendix D.3. Lemma 5 Assume that there exists some positive constant B such that∣∣∣h`−1i (x)− ĥ`−1i (x)∣∣∣ ≤ B, ∀i,x ∈ D, and the `-th hidden layer of simplified-SFNN is stochastic layer. Given parameters {Ŵ`,Ŵ`+1, b̂`, b̂`+1} of DNN, choose those of Simplified-SFNN as follows: α` ← 1 γ` , ( α`+1,W `+1, b`+1 ) ← ( γ`γ`+1 s′ (0) , Ŵ`+1 γ`+1 , b̂`+1 γ`γ`+1 ) , where γ` = max j,x∈D ∣∣∣f (Ŵ`jh`−1(x) + b̂`j)∣∣∣ and γ`+1 is any positive constant. Then, it follows that ∣∣∣h`+1k (x)− ĥ`+1k (x)∣∣∣ ≤ BN `−1N `Ŵ `maxŴ `+1max + ∣∣∣∣∣∣∣ γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′(0)γ`+1 ∣∣∣∣∣∣∣ , ∀k,x ∈ D, where b̂`max = max j ∣∣∣̂b`j∣∣∣ and Ŵ `max = max ij ∣∣∣Ŵ `ij∣∣∣. Proof. See Appendix D.4. Assume that `-th layer is first stochastic hidden layer in Simplified-SFNN. Then, from Theorem 1, we have ∣∣∣h`+1j (x)− ĥ`+1j (x)∣∣∣ ≤ ∣∣∣∣∣∣∣ γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′(0)γ`+1 ∣∣∣∣∣∣∣ , ∀j,x ∈ D. (11) According to Lemma 4 and 5, the final error generated by the right hand side of (11) is bounded by τ`γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′ (0) γ`+1 , (12) where τ` = L∏ `′=l+2 ( N ` ′−1Ŵ ` ′ max ) . One can note that every error generated by each stochastic layer is bounded by (12). Therefore, it follows that ∣∣∣hLj (x)− ĥLj (x)∣∣∣ ≤ ∑ `:stochastic hidden layer τ`γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′ (0) γ`+1 , ∀j,x ∈ D. From above inequality, we can conclude that lim γ`+1→∞ ∀ stochastic hidden layer ` ∣∣∣hLj (x)− ĥLj (x)∣∣∣ = 0, ∀j,x ∈ D. This completes the proof of Theorem 2. D.3 PROOF OF LEMMA 4 From assumption, there exists some constant i such that | i| < B and h`−1i (x) = ĥ `−1 i (x) + i, ∀i,x. By definition of standard deterministic layer, it follows that h`j (x) = f (∑ i Ŵ `ijh `−1 i (x) + b̂ `−1 j ) = f (∑ i Ŵ `ij ĥ `−1 i (x) + ∑ i Ŵ `ij i + b̂ ` j ) . Since we assume that |f ′(x)| ≤ 1, one can conclude that∣∣∣∣∣h`j (x)− f (∑ i Ŵ `ij ĥ `−1 i (x) + b̂ ` j )∣∣∣∣∣ ≤ ∣∣∣∣∣∑ i Ŵ `ij i ∣∣∣∣∣ ≤ B ∣∣∣∣∣∑ i Ŵ `ij ∣∣∣∣∣ ≤ BN `−1Ŵ `max. This completes the proof of Lemma 4. D.4 PROOF OF LEMMA 5 From assumption, there exists some constant `−1i such that ∣∣ `−1i ∣∣ < B and h`−1i (x) = ĥ `−1 i (x) + `−1 i , ∀i,x. (13) Let γ` = max j,x∈D ∣∣∣f (Ŵ`jh`−1(x) + b̂`j)∣∣∣ be the maximum value of hidden units. If we initialize the parameters ( α`,W `, b` ) ← ( 1 γ` , Ŵ`, b̂` ) , then the marginal distribution becomes P ( h`j = 1 | x,W`,b` ) = min { α`f ( Ŵ`jh `−1 (x) + b̂`j ) , 1 } = 1 γ` f ( Ŵ`jh `−1 (x) + b̂`j ) , ∀j,x. 
From (13), it follows that P ( h`j = 1 | x,W`,b` ) = 1 γ` f ( Ŵ`jĥ `−1 (x) + ∑ i Ŵ `ij `−1 i + b̂ ` j ) , ∀j,x. Similar to Lemma 4, there exists some constant `j such that ∣∣ `j∣∣ < BN `−1Ŵ `max and P ( h`j = 1 | x,W`,b` ) = 1 γ` ( ĥ`j (x) + ` j ) , ∀j,x. (14) Next, consider the upper hidden layer of stochastic layer. From Taylor’s theorem, there exists a value z between 0 and t such that s(x) = s(0) + s′(0)x+ R(x), where R(x) = s ′′(z)x2 2! . Since we consider a binary random vector, i.e., h` ∈ {0, 1}N` , one can write EP (h`|x)[s(βk(h`))] = ∑ h` ( s(0) + s′(0)βk(h `) +R ( βk(h `) )) P (h` | x) = s(0) + s′(0) ∑ j W `+1jk P (h ` j = 1 | x) + b`+1k +∑ h` R(βk(h `))P (h` | x), where βk(h`) = W`+1k h ` + b`+1k is the incoming signal. From (14) and above equation, for every hidden unit k, we have h`+1k (x;W `+1,b`+1) = f α`+1 s′(0) 1 γ` ∑ j W `+1jk ĥ ` j(x) + ∑ j W `+1jk ` j + b`+1k + EP (h`|x) [R(βk(h`))] . Since we assume that |f ′(x)| < 1, the following inequality holds:∣∣∣∣∣∣h`+1k (x;W`+1,b`+1)− f α`+1s′(0) 1 γ` ∑ j W `+1ij ĥ ` j(x) + b `+1 j ∣∣∣∣∣∣ ≤ ∣∣∣∣∣∣α`+1s ′(0) γ` ∑ j W `+1jk ` j + α`+1EP (h`|x) [ R(βk(h `)) ]∣∣∣∣∣∣ ≤ ∣∣∣∣∣∣α`+1s ′(0) γ` ∑ j W `+1jk ` j ∣∣∣∣∣∣+ ∣∣∣α`+1 2 EP (h`|x) [( W`+1k h ` + b`+1k )2]∣∣∣ , (15) where we use |s′′(z)| < 1 for the last inequality. Therefore, it follows that ∣∣∣h`+1k (x)− ĥ`+1k (x)∣∣∣ ≤ BN `−1N `Ŵ `maxŴ `+1max + ∣∣∣∣∣∣∣ γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′(0)γ`+1 ∣∣∣∣∣∣∣ , since we set ( α`+1,W `+1, b`+1 ) ← ( γ`+1γ` s′(0) , Ŵ`+1 γ`+1 , γ−1` b̂ `+1 γ`+1 ) . This completes the proof of Lemma 5.
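As a numerical complement to the proofs above, the following self-contained NumPy snippet checks the Theorem 1 transformation on a toy network whose stochastic layer is small enough (N1 = 12) that the expectation in (6) can be computed exactly by enumerating all 2^{N1} configurations; the maximal gap between the Simplified-SFNN and DNN hidden units should shrink as gamma2 grows. The toy dimensions and random parameters are assumptions made purely for this sanity check.

import numpy as np
from itertools import product

def relu(z):    return np.maximum(z, 0.0)
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
N0, N1, N2 = 6, 12, 4                                   # small sizes so the expectation is exact
X = rng.normal(size=(50, N0))                           # toy input dataset D
W1_hat, b1_hat = 0.4 * rng.normal(size=(N1, N0)), np.zeros(N1)
W2_hat, b2_hat = 0.4 * rng.normal(size=(N2, N1)), np.zeros(N2)
H = np.array(list(product([0.0, 1.0], repeat=N1)))      # all 2^N1 configurations of h1

def dnn_h2(x):                                          # baseline DNN, Eq. (7)
    return relu(W2_hat @ relu(W1_hat @ x + b1_hat) + b2_hat)

def simplified_sfnn_h2(x, gamma2):                      # Eqs. (5)-(6) with the Theorem 1 init
    gamma1 = np.abs(relu(X @ W1_hat.T + b1_hat)).max()
    alpha1, alpha2 = 1.0 / gamma1, gamma1 * gamma2 / 0.25
    W2, b2 = W2_hat / gamma2, b2_hat / (gamma1 * gamma2)
    p = np.minimum(alpha1 * relu(W1_hat @ x + b1_hat), 1.0)
    probs = np.prod(np.where(H == 1.0, p, 1.0 - p), axis=1)   # P(h1 | x), factorized
    inner = probs @ sigmoid(H @ W2.T + b2)              # exact E[s(W2_j h1 + b2_j)]
    return relu(alpha2 * (inner - 0.5))

x = X[0]
for gamma2 in [1.0, 10.0, 100.0, 1000.0]:
    err = np.abs(simplified_sfnn_h2(x, gamma2) - dnn_h2(x)).max()
    print(f"gamma2 = {gamma2:7.1f}   max_j |h2_j - h2_hat_j| = {err:.5f}")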
1. What is the focus of the paper, and how does it contribute to the field of deep learning?
2. What are the strengths of the proposed training method, particularly in its simplicity and experimental demonstration?
3. What are the weaknesses of the paper regarding its limitations in exploring scalability and uncertainty representation?
4. How does the reviewer suggest improving the paper by discussing connections with other related works, such as deep Bayes networks, deep generative models, and variational autoencoders?
Review
Review
Strengths
- Interesting to explore the connection between ReLU DNN and Simplified-SFNN.
- A small task (MNIST) is used to demonstrate the usefulness of the proposed training methods experimentally.
- The proposed multi-stage training methods are simple to implement (despite lacking theoretical rigor).
Weaknesses
- No results are reported on real tasks with large training sets.
- No clear exploration of the scalability of the learning methods when the training data becomes larger.
- When the hidden layers become stochastic, the model shares uncertainty representation with deep Bayes networks or deep generative models (Deep Discriminative and Generative Models for Pattern Recognition, book chapter in "Pattern Recognition and Computer Vision", November 2015). Such connections should be discussed, especially with respect to the use of uncertainty representation to benefit pattern recognition (i.e., supervised learning via Bayes rule) and to benefit the use of domain knowledge such as "explaining away".
- Would like to see connections with variational autoencoder models and training, which are also stochastic with hidden layers.
Since we assume that |f ′(x)| ≤ 1, the following inequality holds:∣∣∣∣∣h2j (x;W2,b2)− f ( α2s ′(0) ( 1 γ1 ∑ i W 2ij ĥ 1 i (x) + b 2 j ))∣∣∣∣∣ ≤ ∣∣α2EP (h1|x) [R(βj(h1))]∣∣ ≤ α2 2 EP (h1|x) [( W2jh 1 + b2j )2] , where we use |s′′(z)| < 1 for the last inequality. Therefore, it follows that ∣∣∣h2j (x;W2,b2)− ĥ2j (x;Ŵ2, b̂2)∣∣∣ ≤ γ1 (∑ i ∣∣∣Ŵ 2ij∣∣∣+ b̂2jγ−11 )2 2s′(0)γ2 , ∀j, since we set ( α2,W 2, b2 ) ← ( γ2γ1 s′(0) , Ŵ2 γ2 , γ−11 γ2 b̂2 ) . This completes the proof of Theorem 1. D.2 PROOF OF THEOREM 2 For the proof of Theorem 2, we first state the two key lemmas on error propagation in SimplifiedSFNN. Lemma 4 Assume that there exists some positive constant B such that∣∣∣h`−1i (x)− ĥ`−1i (x)∣∣∣ ≤ B, ∀i,x ∈ D, and the `-th hidden layer of NCSFNN is standard deterministic layer as defined in (7). Given parameters {Ŵ`, b̂`} of DNN, choose same ones for NCSFNN. Then, the following inequality holds:∣∣∣h`j (x)− ĥ`j (x)∣∣∣ ≤ BN `−1Ŵ `max, ∀j,x ∈ D. where Ŵ `max = max ij ∣∣∣Ŵ `ij∣∣∣. Proof. See Appendix D.3. Lemma 5 Assume that there exists some positive constant B such that∣∣∣h`−1i (x)− ĥ`−1i (x)∣∣∣ ≤ B, ∀i,x ∈ D, and the `-th hidden layer of simplified-SFNN is stochastic layer. Given parameters {Ŵ`,Ŵ`+1, b̂`, b̂`+1} of DNN, choose those of Simplified-SFNN as follows: α` ← 1 γ` , ( α`+1,W `+1, b`+1 ) ← ( γ`γ`+1 s′ (0) , Ŵ`+1 γ`+1 , b̂`+1 γ`γ`+1 ) , where γ` = max j,x∈D ∣∣∣f (Ŵ`jh`−1(x) + b̂`j)∣∣∣ and γ`+1 is any positive constant. Then, it follows that ∣∣∣h`+1k (x)− ĥ`+1k (x)∣∣∣ ≤ BN `−1N `Ŵ `maxŴ `+1max + ∣∣∣∣∣∣∣ γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′(0)γ`+1 ∣∣∣∣∣∣∣ , ∀k,x ∈ D, where b̂`max = max j ∣∣∣̂b`j∣∣∣ and Ŵ `max = max ij ∣∣∣Ŵ `ij∣∣∣. Proof. See Appendix D.4. Assume that `-th layer is first stochastic hidden layer in Simplified-SFNN. Then, from Theorem 1, we have ∣∣∣h`+1j (x)− ĥ`+1j (x)∣∣∣ ≤ ∣∣∣∣∣∣∣ γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′(0)γ`+1 ∣∣∣∣∣∣∣ , ∀j,x ∈ D. (11) According to Lemma 4 and 5, the final error generated by the right hand side of (11) is bounded by τ`γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′ (0) γ`+1 , (12) where τ` = L∏ `′=l+2 ( N ` ′−1Ŵ ` ′ max ) . One can note that every error generated by each stochastic layer is bounded by (12). Therefore, it follows that ∣∣∣hLj (x)− ĥLj (x)∣∣∣ ≤ ∑ `:stochastic hidden layer τ`γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′ (0) γ`+1 , ∀j,x ∈ D. From above inequality, we can conclude that lim γ`+1→∞ ∀ stochastic hidden layer ` ∣∣∣hLj (x)− ĥLj (x)∣∣∣ = 0, ∀j,x ∈ D. This completes the proof of Theorem 2. D.3 PROOF OF LEMMA 4 From assumption, there exists some constant i such that | i| < B and h`−1i (x) = ĥ `−1 i (x) + i, ∀i,x. By definition of standard deterministic layer, it follows that h`j (x) = f (∑ i Ŵ `ijh `−1 i (x) + b̂ `−1 j ) = f (∑ i Ŵ `ij ĥ `−1 i (x) + ∑ i Ŵ `ij i + b̂ ` j ) . Since we assume that |f ′(x)| ≤ 1, one can conclude that∣∣∣∣∣h`j (x)− f (∑ i Ŵ `ij ĥ `−1 i (x) + b̂ ` j )∣∣∣∣∣ ≤ ∣∣∣∣∣∑ i Ŵ `ij i ∣∣∣∣∣ ≤ B ∣∣∣∣∣∑ i Ŵ `ij ∣∣∣∣∣ ≤ BN `−1Ŵ `max. This completes the proof of Lemma 4. D.4 PROOF OF LEMMA 5 From assumption, there exists some constant `−1i such that ∣∣ `−1i ∣∣ < B and h`−1i (x) = ĥ `−1 i (x) + `−1 i , ∀i,x. (13) Let γ` = max j,x∈D ∣∣∣f (Ŵ`jh`−1(x) + b̂`j)∣∣∣ be the maximum value of hidden units. If we initialize the parameters ( α`,W `, b` ) ← ( 1 γ` , Ŵ`, b̂` ) , then the marginal distribution becomes P ( h`j = 1 | x,W`,b` ) = min { α`f ( Ŵ`jh `−1 (x) + b̂`j ) , 1 } = 1 γ` f ( Ŵ`jh `−1 (x) + b̂`j ) , ∀j,x. 
From (13), it follows that P ( h`j = 1 | x,W`,b` ) = 1 γ` f ( Ŵ`jĥ `−1 (x) + ∑ i Ŵ `ij `−1 i + b̂ ` j ) , ∀j,x. Similar to Lemma 4, there exists some constant `j such that ∣∣ `j∣∣ < BN `−1Ŵ `max and P ( h`j = 1 | x,W`,b` ) = 1 γ` ( ĥ`j (x) + ` j ) , ∀j,x. (14) Next, consider the upper hidden layer of stochastic layer. From Taylor’s theorem, there exists a value z between 0 and t such that s(x) = s(0) + s′(0)x+ R(x), where R(x) = s ′′(z)x2 2! . Since we consider a binary random vector, i.e., h` ∈ {0, 1}N` , one can write EP (h`|x)[s(βk(h`))] = ∑ h` ( s(0) + s′(0)βk(h `) +R ( βk(h `) )) P (h` | x) = s(0) + s′(0) ∑ j W `+1jk P (h ` j = 1 | x) + b`+1k +∑ h` R(βk(h `))P (h` | x), where βk(h`) = W`+1k h ` + b`+1k is the incoming signal. From (14) and above equation, for every hidden unit k, we have h`+1k (x;W `+1,b`+1) = f α`+1 s′(0) 1 γ` ∑ j W `+1jk ĥ ` j(x) + ∑ j W `+1jk ` j + b`+1k + EP (h`|x) [R(βk(h`))] . Since we assume that |f ′(x)| < 1, the following inequality holds:∣∣∣∣∣∣h`+1k (x;W`+1,b`+1)− f α`+1s′(0) 1 γ` ∑ j W `+1ij ĥ ` j(x) + b `+1 j ∣∣∣∣∣∣ ≤ ∣∣∣∣∣∣α`+1s ′(0) γ` ∑ j W `+1jk ` j + α`+1EP (h`|x) [ R(βk(h `)) ]∣∣∣∣∣∣ ≤ ∣∣∣∣∣∣α`+1s ′(0) γ` ∑ j W `+1jk ` j ∣∣∣∣∣∣+ ∣∣∣α`+1 2 EP (h`|x) [( W`+1k h ` + b`+1k )2]∣∣∣ , (15) where we use |s′′(z)| < 1 for the last inequality. Therefore, it follows that ∣∣∣h`+1k (x)− ĥ`+1k (x)∣∣∣ ≤ BN `−1N `Ŵ `maxŴ `+1max + ∣∣∣∣∣∣∣ γ` ( N `Ŵ `+1max + b̂ `+1 maxγ −1 ` )2 2s′(0)γ`+1 ∣∣∣∣∣∣∣ , since we set ( α`+1,W `+1, b`+1 ) ← ( γ`+1γ` s′(0) , Ŵ`+1 γ`+1 , γ−1` b̂ `+1 γ`+1 ) . This completes the proof of Lemma 5.
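To make the training recipe of Appendix A and the deterministic DNN∗ approximation in (9) concrete, the following is a minimal NumPy sketch of a single stochastic layer of Simplified-SFNN. The layer sizes, the random parameters, and the choice of a sigmoid for s and ReLU for the non-negative activation f are illustrative assumptions, not the exact configuration used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):        # non-negative activation f
    return np.maximum(x, 0.0)

def sigmoid(x):     # bounded, differentiable s for the upper (deterministic) units
    return 1.0 / (1.0 + np.exp(-x))

# Made-up sizes and parameters (illustrative only).
d_in, n1, n2 = 8, 16, 4
x = rng.normal(size=d_in)
W1, b1 = rng.normal(size=(n1, d_in)) * 0.1, np.zeros(n1)
W2, b2 = rng.normal(size=(n2, n1)) * 0.1, np.zeros(n2)
alpha1, alpha2 = 1.0, 1.0               # scaling parameters of Simplified-SFNN

# Stochastic first layer: firing probabilities, clipped to [0, 1].
p = np.minimum(alpha1 * relu(W1 @ x + b1), 1.0)          # P(h^1_i = 1 | x)

# Forward pass, Monte Carlo estimate with M samples (Appendix A).
M = 20
h_samples = (rng.random(size=(M, n1)) < p).astype(float)   # h^(m) ~ P(h^1 | x)
expect_mc = np.mean(sigmoid(h_samples @ W2.T + b2), axis=0)  # ~ E[s(W^2_j h^1 + b^2_j)]
h2_mc = relu(alpha2 * expect_mc)

# Deterministic DNN* approximation (Eq. (9)): move the expectation inside s,
# i.e. s(E[W^2_j h^1 + b^2_j]) = s(sum_i W^2_ij P(h^1_i = 1 | x) + b^2_j).
h2_dnn_star = relu(alpha2 * sigmoid(W2 @ p + b2))

print("MC estimate  :", np.round(h2_mc, 3))
print("DNN* approx. :", np.round(h2_dnn_star, 3))
```

With a trained Simplified-SFNN, the second expression is what the paper calls DNN∗: the stochastic layer is collapsed to its firing probabilities, so inference needs no sampling.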
1. What is the focus of the paper, and what connections does it establish between different models? 2. What are the strengths of the proposed approach, particularly in terms of its ability to handle small tasks? 3. What are the weaknesses of the paper, specifically regarding its applicability to real tasks with large training sets?
Review
Review This paper builds connections between DNNs, simplified stochastic feedforward neural networks (Simplified-SFNNs), and SFNNs, and proposes to use a DNN as the initialization model for Simplified-SFNN. The authors evaluated their model on several small tasks with positive results. The connection between the different models is interesting. I think the connection between sigmoid DNNs and Simplified-SFNN is the same as the mean-field approximation that has been known for decades. However, the connection between ReLU DNNs and Simplified-SFNN is novel. My main concern is whether the proposed approach is useful when attacking real tasks with large training sets. For tasks with small training sets, I can see that stochastic units would help generalize well.
ICLR
Title Towards Discovering Neural Architectures from Scratch Abstract The discovery of neural architectures from scratch is the long-standing goal of Neural Architecture Search (NAS). Searching over a wide spectrum of neural architectures can facilitate the discovery of previously unconsidered but wellperforming architectures. In this work, we take a large step towards discovering neural architectures from scratch by expressing architectures algebraically. This algebraic view leads to a more general method for designing search spaces, which allows us to compactly represent search spaces that are 100s of orders of magnitude larger than common spaces from the literature. Further, we propose a Bayesian Optimization strategy to efficiently search over such huge spaces, and demonstrate empirically that both our search space design and our search strategy can be superior to existing baselines. We open source our algebraic NAS approach and provide APIs for PyTorch and TensorFlow. 1 INTRODUCTION Neural Architecture Search (NAS), a field with over 1 000 papers in the last two years (Deng & Lindauer, 2022), is widely touted to automatically discover novel, well-performing architectural patterns. However, while state-of-the-art performance has already been demonstrated in hundreds of NAS papers (prominently, e.g., (Tan & Le, 2019; 2021; Liu et al., 2019a)), success in automatically finding truly novel architectural patterns has been very scarce (Ramachandran et al., 2017; Liu et al., 2020). For example, novel architectures, such as transformers (Vaswani et al., 2017; Dosovitskiy et al., 2021) have been crafted manually and were not found by NAS. There is an accumulating amount of evidence that over-engineered, restrictive search spaces (e.g., cell-based ones) are major impediments for NAS to discover truly novel architectures. Yang et al. (2020b) showed that in the DARTS search space (Liu et al., 2019b) the manually-defined macro architecture is more important than the searched cells, while Xie et al. (2019) and Ru et al. (2020) achieved competitive performance with randomly wired neural architectures that do not adhere to common search space limitations. As a result, there are increasing efforts to break these impediments, and the discovery of novel neural architectures has been referred to as the holy grail of NAS. Hierarchical search spaces are a promising step towards this holy grail. In an initial work, Liu et al. (2018) proposed a hierarchical cell, which is shared across a fixed macro architecture, imitating the compositional neural architecture design pattern widely used by human experts. However, subsequent works showed the importance of both layer diversity (Tan & Le, 2019) and macro architecture (Xie et al., 2019; Ru et al., 2020). In this work, we introduce a general formalism for the representation of hierarchical search spaces, allowing both for layer diversity and a flexible macro architecture. The key observation is that any neural architecture can be represented algebraically; e.g., two residual blocks followed by a fullyconnected layer in a linear macro topology can be represented as the algebraic term ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) . (1) We build upon this observation and employ Context-Free Grammars (CFGs) to construct large spaces of such algebraic architecture terms. 
Although a particular search space is of course limited in its overall expressiveness, with this approach, we could effectively represent any neural architecture, facilitating the discovery of truly novel ones. Due to the hierarchical structure of algebraic terms, the number of candidate neural architectures scales exponentially with the number of hierarchical levels, leading to search spaces 100s of orders of magnitudes larger than commonly used ones. To search in these huge spaces, we propose an efficient search strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), which leverages hierarchical information, capturing the topological patterns across the hierarchical levels, in its tailored kernel design. Our contributions are as follows: • We present a novel technique to construct hierarchical NAS spaces based on an algebraic notion views neural architectures as algebraic architecture terms and CFGs to create algebraic search spaces (Section 2). • We propose BANAT, a Bayesian Optimization (BO) strategy that uses a tailored modeling strategy to efficiently and effectively search over our huge search spaces (Section 3). • After surveying related work (Section 4), we empirically show that search spaces of algebraic architecture terms perform on par or better than common cell-based spaces on different datasets, show the superiority of BANAT over common baselines, demonstrate the importance of incorporating hierarchical information in the modeling, and show that we can find novel architectural parts from basic mathematical operations (Section 5). We open source our code and provide APIs for PyTorch (Paszke et al., 2019) and TensorFlow (Abadi et al., 2015) at https://anonymous.4open.science/r/iclr23_tdnafs. 2 ALGEBRAIC NEURAL ARCHITECTURE SEARCH SPACE CONSTRUCTION In this section we present an algebraic view on Neural Architecture Search (NAS) (Section 2.1) and propose a construction mechanism based on Context-Free Grammars (CFGs) (Section 2.2 and 2.3). 2.1 ALGEBRAIC ARCHITECTURE TERMS FOR NEURAL ARCHITECTURE SEARCH We introduce algebraic architecture terms as a string representation for neural architectures from a (term) algebra. Formally, an algebra (A,F) consists of a non-empty set A (universe) and a set of operators f : An → A ∈ F of different arities n ≥ 0 (Birkhoff, 1935). In our case, A corresponds to the set of all (sub-)architectures and we distinguish between two types of operators: (i) nullary operators representing primitive computations (e.g., conv() or fc()) and (ii) k-ary operators with k > 0 representing topological operators (e.g., Linear(·, ·, ·) or Residual(·, ·, ·)). For sake of notational simplicity, we omit parenthesis for nullary operators (i.e., we write conv). Term algebras (Baader & Nipkow, 1999) are a special type of algebra mapping an algebraic expression to its string representation. E.g., we can represent a neural architecture as the algebraic architecture term ω as shown in Equation 1. Term algebras also allow for variables xi that are set to terms themselves that can be re-used across a term. In our case, the intermediate variables xi can therefore share patterns across the architecture, e.g., a shared cell. For example, we could define the intermediate variable x1 to map to the residual block in ω from Equation 1 as follows: ω′ = Linear(x1, x1, fc), x1 = Residual(conv, id, conv) . 
(2) Algebraic NAS We formulate our algebraic view on NAS, where we search over algebraic architecture terms ω ∈ Ω representing their associated architectures Φ(ω), as follows: argmin ω∈Ω f(Φ(ω)) , (3) where f(·) is an error measure that we seek to minimize, e.g., final validation error of a fixed training protocol. For example, we can represent the popular cell-based NAS-Bench-201 search space(Dong & Yang, 2020) as algebraic search space Ω. The algebraic search space Ω is characterized by a fixed macro architecture Macro(. . .) that stacks 15 instances of a shared cell Cell(pi,pi,pi,pi,pi,pi), where the cell has six edges, on each of which one of five primitive computations can be placed (i.e., pi for i ∈ {1, 2, 3, 4, 5} corresponding to zero, id, conv1x1, conv3x3, or avg pool, respectively). By leveraging the intermediate variable x1 we can effectively share the cell topology across the architecture. For example, we can express an architecture ωi ∈ Ω from the NAS-Bench-201 search space Ω as: ωi = Macro(x1, x1, ..., x1︸ ︷︷ ︸ 15× ), x1 = Cell(p1,p2,p1,p5,p4,p3) . (4) Algebraic NAS over such algebraic architecture terms then amounts to finding the best-performing primitive computation pi for each edge, as the macro architecture is fixed. In contrast to this simple cell-based algebraic space, the search spaces we consider can be much more expressive and, e.g., allow for layer diversity and a flexible macro architecture over several hierarchical levels (Section 5.1). 2.2 CONSTRUCTING NEURAL ARCHITECTURE TERMS WITH CONTEXT-FREE GRAMMARS We propose to use Context-Free Grammars (CFGs) (Chomsky, 1956) since they can naturally generate (hierarchical) algebraic architecture terms. Compared to other search space designs, CFGs give us a formally grounded way to naturally and compactly define very expressive hierarchical search spaces (e.g., see Section 5.1). We can also unify popular search spaces from the literature with our general search space design in one framework (Appendix E). They give us further a simple mechanism to evolve architectures while staying within the defined search space (Section 3). Formally, a CFG G = ⟨N,Σ, P, S⟩ consists of a finite set of nonterminals N and terminals Σ with N ∩Σ = ∅, a finite set of production rules P = {A→ β|A ∈ N, β ∈ (N ∪Σ)∗}, where the asterisk ∗ denotes the Kleene star operation (Kleene et al., 1956), and a start symbol S ∈ N . To generate an algebraic architecture term, starting from the start symbol S, we recursively replace nonterminals of the current algebraic term with a right-hand side of a production rule consisting of nonterminals and terminals, until the resulting string does not contain any nonterminals. For example, consider the following CFG in extended Backus-Naur form (Backus, 1959) (see Appendix B for background): S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc (5) From this CFG, we can derive the algebraic architecture term ω (with three hierarchical levels) from Equation 1 as follows: S→ Linear(S, S, S) Level 1 → Linear(Residual(S, S, S), Residual(S, S, S), fc) Level 2 (6) → Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) Level 3 Figure 1 makes the above derivation and the connection to the associated architecture explicit. The set of all (potentially infinite) algebraic terms generated by a CFG G is the language L(G), which naturally forms our search space Ω. Thus, the algebraic NAS problem from Equation 3 becomes: argmin ω∈L(G) f(Φ(ω)) . 
(7) 2.3 EXTENSIONS TO THE CONSTRUCTION MECHANISM Constraints In many search space designs, we want to adhere to some constraints, e.g., to limit the number of nodes or to ensure that for all architectures in the search space there exists at least one path from the input to the output. We can simply do so by allowing only the application of production rules which guarantee compliance to such constraints. For example, to ensure that there is at least one path from the input to the output, it is sufficient to ensure that each derivation connects its input to the output due to the recursive nature of CFGs. Note that this makes CFGs context-sensitive w.r.t. those constraints. For more details, please refer to Appendix D. Fostering regularity through substitution To implement intermediate variables xi (Section 2.1) we leverage that context-free languages are closed under substitution: we map terminals, representing the intermediate variables xi, from one language to algebraic terms of other languages, e.g., a shared cell. For example, we can split a CFG G, constructing entire algebraic architecture terms, into the CFGs Gmacro and Gcell for the macro- or cell-level, respectively. Further, we add a single (or multiple) intermediate terminal(s) x1 to Gmacro which maps to an algebraic term ω1 ∈ L(Gcell), e.g., the searchable cell. Thus, we effectively search over the macro-level as well as a single, shared cell. Note that by using a fixed macro architecture (i.e., |L(Gmacro)| = 1), we can represent cell-based search spaces, e.g., NAS-Bench-201 (Dong & Yang, 2020), while also being able to represent more expressive search spaces (e.g., see Section 5.1). More generally, we could extend this by adding further intermediate terminals which map to other languages L(Gj), or by adding intermediate terminals to G2 which map to languages L(Gj ̸=1). In this way, we can effectively foster regularity. Representing common architecture patterns for object recognition Neural architectures for object recognition commonly build a hierarchy of features that are gradually downsampled, e.g., by pooling operations. However, previous works in NAS were either limited to a fixed macro architecture (Zoph et al., 2018), only allowed for linear macro architectures (Liu et al., 2019a), or required post-sampling testing for resolution mismatches (Stanley & Miikkulainen, 2002; Ru et al., 2020). While this produced impressive performance on popular benchmarks (Tan & Le, 2019; 2021; Liu et al., 2019a), it is an open research question whether a different type of macro architecture (e.g., one with multiple branches) could yield even better performance. To accommodate flexible macro architectures, we propose to overload the nonterminals. In particular, the nonterminals indicate how often we apply downsampling operations in the subsequent derivations of the nonterminal. Consider the production rule D2 → Residual(D1, D2, D1), where Di with i ∈ {1, 2} are a nonterminals which indicate that i downsampling operations have to be applied in their subsequent derivations. That is, in both paths of the residual the input features will be downsampled twice and, consequently, the merging paths will have the same spatial resolution. Thereby, this mechanism distributes the downsampling operations recursively across the architecture. For the channels, we adopted the common design to double the number of channels whenever we halve the spatial resolution in our experiments. 
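As a concrete illustration of Sections 2.2 and 2.3, the sketch below encodes the example grammar of Equation 5 as a plain Python dictionary and derives algebraic architecture terms by recursively expanding nonterminals. The dictionary encoding, the depth limit standing in for the constraint mechanism, and the uniform choice over productions are illustrative assumptions rather than the implementation released with the paper.

```python
import random

# Illustrative encoding of the CFG from Eq. (5); right-hand sides are either
# terminals (strings) or (topological operator, [successor nonterminals]) tuples.
GRAMMAR = {
    "S": [
        ("Linear",   ["S", "S", "S"]),
        ("Residual", ["S", "S", "S"]),
        "conv", "id", "fc",
    ],
}

def sample_term(symbol="S", depth=0, max_depth=3):
    """Recursively derive an algebraic architecture term from the grammar.

    max_depth is an illustrative stand-in for the constraints of Section 2.3:
    beyond it, only productions that immediately yield terminals are allowed."""
    productions = GRAMMAR[symbol]
    if depth >= max_depth:                      # constraint handling: filter production choices
        productions = [p for p in productions if isinstance(p, str)]
    choice = random.choice(productions)
    if isinstance(choice, str):                 # nullary operator (primitive computation)
        return choice
    op, successors = choice                     # k-ary topological operator
    return f"{op}({', '.join(sample_term(s, depth + 1, max_depth) for s in successors)})"

# Prints one randomly derived term, e.g. of the form Linear(Residual(...), conv, ...).
print(sample_term())
```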
Note that we could also handle a varying number of channels by using, e.g., depthwise concatenation as merge operation. 3 BAYESIAN OPTIMIZATION FOR ALGEBRAIC NEURAL ARCHITECTURE SEARCH We propose a BO strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), to efficiently search in the huge search spaces spanned by our algebraic architecture terms: we introduce a novel surrogate model which combines a Gaussian Process (GP) surrogate with a tailored kernel that leverages the hierarchical structure of algebraic neural architecture terms (see below), and adopt expected improvement as the acquisition function (Mockus et al., 1978). Given the discrete nature of architectures, we adopt ideas from grammar-guided genetic programming (McKay et al., 2010; Moss et al., 2020) for acquisition function optimization. Furthermore, to reduce wallclock time by leveraging parallel computing resources, we adapt the Kriging Believer (Ginsbourger et al., 2010) to select architectures at every search iteration so that we can train and evaluate them in parallel. Specifically, Kriging Believer assigns hallucinated values (i.e., posterior mean) of pending evaluations at each iteration to avoid redundant evaluations. For a more detailed explanation of BANAT, please refer to Appendix F. Hierarchical Weisfeiler-Lehman kernel (hWL) Inspired by the state-of-the-art BO approach for NAS (Ru et al., 2021), we adopt the WL graph kernel (Shervashidze et al., 2011) in a GP surrogate, modeling performance of the algebraic architecture terms ωi with the associated architectures Φ(ωi). However, modeling solely based on the final architecture ignores the useful hierarchical information inherent in our algebraic representation. Moreover, the large size of the architectures also makes it difficult to use a single WL kernel to capture the more global topological patterns. Since our hierarchical construction can be viewed as a series of gradually unfolding architectures, with the final architecture containing only primitive computations, we propose a novel hierarchical kernel design assigning a WL kernel to each hierarchy and combine them in a weighted sum. To this end, we introduce fold operators Fl, that removes algebraic terms beyond the l-th hierarchical level. For example, the fold operators F1, F2 and F3 yield for the algebraic term ω (Equation 1) F3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc), (8) F2(ω) = Linear(Residual, Residual, fc) , F1(ω) = Linear . Note the similarity to the derivations in Figure 1. Furthermore note that, in practice, we also add the corresponding nonterminals to integrate information from our hierarchical construction process. We define our hierarchical WL kernel (hWL) for two architectures Φ(ωi) and Φ(ωj) with algebraic architecture terms ωi or ωj , respectively, constructed over a hierarchy of L levels, as follows: khWL(ωi, ωj) = L∑ l=2 λl · kWL(Φ(Fl(ωi)),Φ(Fl(ωj))) , (9) where the weights λl govern the importance of the learned graph information at different hierarchical levels (granularities of the architecture) and can be tuned (along with other hyperparameters of the GP) by maximizing the marginal likelihood. We omit l = 1 in the additive kernel as F1(ω) does not contain any edge features which are required for our WL kernel kWL. For more details on our novel hierarchical kernel design, please refer to Appendix F.2. 
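The fold operators and the weighted sum in Equation 9 can be sketched compactly if terms are stored as nested tuples. In the sketch below, the true Weisfeiler-Lehman graph kernel kWL on the assembled architectures is replaced, purely for brevity, by an inner product of operator-label counts (i.e., a 0-iteration WL kernel applied to the term itself), and the level weights λl are fixed by hand instead of being tuned by maximizing the marginal likelihood.

```python
from collections import Counter

# Terms as nested tuples: ("op", child, child, ...); primitives are plain strings.
OMEGA = ("Linear",
         ("Residual", "conv", "id", "conv"),
         ("Residual", "conv", "id", "conv"),
         "fc")

def fold(term, level):
    """Fold operator F_l: keep the term only down to hierarchical level `level`."""
    if isinstance(term, str):
        return term
    if level == 1:
        return term[0]                       # only the top-level operator remains
    return (term[0],) + tuple(fold(child, level - 1) for child in term[1:])

def labels(term):
    """Multiset of operator / primitive labels occurring in a (folded) term."""
    if isinstance(term, str):
        return Counter([term])
    counts = Counter([term[0]])
    for child in term[1:]:
        counts += labels(child)
    return counts

def base_kernel(t1, t2):
    """Stand-in for k_WL: inner product of label-count features."""
    c1, c2 = labels(t1), labels(t2)
    return sum(c1[k] * c2[k] for k in c1.keys() & c2.keys())

def k_hwl(t1, t2, weights):
    """Hierarchical kernel (Eq. (9)): weighted sum of per-level kernels on folded terms."""
    return sum(lam * base_kernel(fold(t1, l), fold(t2, l)) for l, lam in weights.items())

print(k_hwl(OMEGA, OMEGA, weights={2: 0.5, 3: 1.0}))
```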
Our proposed kernel efficiently captures the information in all algebraic term construction levels, which substantially improves its search and surrogate regression performance on our search space as demonstrated in Section 5. Acquisition function optimization To optimize the acquisition function, we adopt ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). For mutation, we randomly replace a sub-architecture term with a new randomly generated term, using the same nonterminal as start symbol. For crossover, we randomly swap two sub-architecture terms with the same corresponding nonterminal. We consider two crossover operators: a novel self-crossover operation swaps two sub-terms of a single architecture term, and the common crossover operation swaps subterms of two different architecture terms. Importantly, all evolutionary operations by design only result in valid terms. We provide examples for the evolutionary operations in Appendix F. 4 RELATED WORK We discuss related works in NAS below and discuss works beyond NAS in Appendix G. Neural Architecture Search Neural Architecture Search (NAS) aims to automatically discover architectural patterns (or even entire architectures) (Elsken et al., 2019). Previous approaches, e.g., used reinforcement learning (Zoph & Le, 2017; Pham et al., 2018), evolution (Real et al., 2017), gradient descent (Liu et al., 2019b), or Bayesian Optimization (BO) (Kandasamy et al., 2018; White et al., 2021; Ru et al., 2021). To enable the effective use of BO on graph-like inputs for NAS, previous works have proposed to use a GP with specialized kernels (Kandasamy et al., 2018; Ru et al., 2021), encoding schemes (Ying et al., 2019; White et al., 2021), or graph neural networks as surrogate model (Ma et al., 2019; Shi et al., 2020; Zhang et al., 2019). Different to prior works, we explicitly leverage the hierarchical construction of architectures for modeling. Searching for novel architectural patterns Previous works mostly focused on finding a shared cell (Zoph et al., 2018) with a fixed macro architecture while only few works considered more expressive hierarchical search spaces (Liu et al., 2018; 2019a; Tan et al., 2019). The latter works considered hierarchical assembly (Liu et al., 2018), combination of a cell- and network-level search space (Liu et al., 2019a; Zhang et al., 2020), evolution of network topologies (Miikkulainen et al., 2019), factorization of the search space (Tan et al., 2019), parameterization of a hierarchy of random graph generators (Ru et al., 2020), a formal language over computational graphs (Negrinho et al., 2019), or a hierarchical construction of TensorFlow programs (So et al., 2021). Similarly, our formalism allows to design search spaces covering a general set of architecture design choices, but also permits the search for macro architectures with spatial resolution changes and multiple branches. We also handle spatial resolution changes without requiring post-hoc testing or resizing of the feature maps unlike prior works (Stanley & Miikkulainen, 2002; Miikkulainen et al., 2019; Stanley et al., 2019). 
Other works proposed approaches based on string rewriting systems (Kitano, 1990; Boers et al., 1993), cellular (or tree-structured) encoding schemes (Gruau, 1994; Luke & Spector, 1996; De Jong & Pollack, 2001; Cai et al., 2018), hyperedge replacement graph grammars Luerssen & Powers (2003); Luerssen (2005), attribute grammars (Mouret & Doncieux, 2008), CFGs (Jacob & Rehder, 1993; Couchet et al., 2007; Ahmadizar et al., 2015; Ahmad et al., 2019; Assunção et al., 2017; 2019; Lima et al., 2019; de la Fuente Castillo et al., 2020), or And-Or-grammars (Li et al., 2019). Different to these prior works, we construct entire architectures with spatial resolution changes across multiple branches, and propose techniques to incorporate constraints and foster regularity. Orthogonal to the aforementioned approaches, Roberts et al. (2021) searched over neural (XD-)operations, which is orthogonal to our approach, i.e., our predefined primitive computations could be replaced by their proposed XD-operations. 5 EXPERIMENTS In this section, we investigate potential benefits of hierarchical search spaces and our search strategy BANAT. More specifically, we address the following questions: Q1 Can hierarchical search spaces yield on par or superior architectures compared to cell-based search spaces with a limited number of evaluations? Q2 Can our search strategy BANAT improve performance over common baselines? Q3 Does leveraging the hierarchical information improve performance? Q4 Do zero-cost proxies work in vast hierarchical search spaces? Q5 Can we discover novel architectural patterns (e.g., activation functions)? To answer questions Q1-Q4, we introduce a hierarchical search space based on the popular NASBench-201 search space (Dong & Yang, 2020) in Section 5.1. To answer question Q5, we search for activation functions (Ramachandran et al., 2017) and defer the search space definition to Appendix J.1. We provide complementary results and analyses in Appendix I.2 and J.3. 5.1 HIERARCHICAL NAS-BENCH-201 We propose a hierarchical variant of the popular cell-based NAS-Bench-201 search space (Dong & Yang, 2020) by adding a hierarchical macro space (i.e., spatial resolution flow and wiring at the macro-level) and parameterizable convolutional blocks (i.e., choice of convolutions, activations, and normalizations). We express the hierarchical NAS-Bench-201 search space with CFG Gh as follows: D2 ::= Linear3(D1, D1, D0) | Linear3(D0, D1, D1) | Linear4(D1, D1, D0, D0) D1 ::= Linear3(C, C, D) | Linear4(C, C, C, D) | Residual3(C, C, D, D) D0 ::= Linear3(C, C, CL) | Linear4(C, C, C, CL) | Residual3(C, C, CL, CL) D ::= Linear2(CL, down) | Linear3(CL, CL, down) | Residual2(C, down, down) C ::= Linear2(CL, CL) | Linear3(CL, CL) | Residual2(CL, CL, CL) CL ::= Cell(OP, OP, OP, OP, OP, OP) OP ::= zero | id | BLOCK | avg pool BLOCK ::= Linear3(ACT, CONV, NORM) ACT ::= relu | hardswish | mish CONV ::= conv1x1 | conv3x3 | dconv3x3 NORM ::= batch | instance | layer . (10) See Appendix A for the terminal vocabulary of topological operators and primitive computations. The productions with the nonterminals {D2, D1, D0, D} define the spatial resolution flow and together with {C} define the macro architecture containing possibly multiple branches. The productions for {CL, OP} construct the NAS-Bench-201 cell and {BLOCK, ACT, CONV, NORM} parameterize the convolutional block. 
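Once a term is sampled from a grammar such as the one above, it still has to be assembled into a network, i.e., the mapping Φ(ω). The PyTorch-flavoured sketch below is only an illustration of that assembly step: the primitive names, the fixed channel count, the omission of the downsampling and channel-doubling logic of Section 2.3, and the reading of Residual(a, b, c) as "a and c on the main branch, b as the skip connection, merged by summation" are all assumptions, not the released API.

```python
import torch
import torch.nn as nn

def make_primitive(name, channels):
    """Hypothetical channel-preserving primitive computations."""
    if name == "conv3x3":
        return nn.Conv2d(channels, channels, 3, padding=1)
    if name == "conv1x1":
        return nn.Conv2d(channels, channels, 1)
    if name == "avg_pool":
        return nn.AvgPool2d(3, stride=1, padding=1)
    if name == "id":
        return nn.Identity()
    raise ValueError(name)

class Residual(nn.Module):
    """Assumed semantics: main path a -> c, skip path b, merged by summation."""
    def __init__(self, a, b, c):
        super().__init__()
        self.a, self.b, self.c = a, b, c
    def forward(self, x):
        return self.c(self.a(x)) + self.b(x)

def assemble(term, channels=16):
    """Recursively turn a nested-tuple term into an nn.Module (Phi in the paper's notation)."""
    if isinstance(term, str):
        return make_primitive(term, channels)
    op, *children = term
    modules = [assemble(child, channels) for child in children]
    if op.startswith("Linear"):
        return nn.Sequential(*modules)
    if op.startswith("Residual"):
        return Residual(*modules)
    raise ValueError(op)

net = assemble(("Linear",
                ("Residual", "conv3x3", "id", "conv3x3"),
                ("Residual", "conv3x3", "id", "conv3x3"),
                "conv1x1"))
out = net(torch.randn(1, 16, 8, 8))
print(out.shape)   # torch.Size([1, 16, 8, 8]) since all primitives preserve the shape
```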
To ensure that we use the same distribution over the primitive computations as in NAS-Bench-201, we reweigh the sampling probabilities of the productions generated by the nonterminal OP, i.e., all production choices have sampling probability of 20%, but BLOCK has 40%. Note that we omit the stem (i.e., 3x3 convolution followed by batch normalization) and classifier (i.e., batch normalization followed by ReLU, global average pooling, and fully-connected layer) for simplicity. We implemented the merge operation as element-wise summation. Different to the cell-based NAS-Bench-201 search space, we exclude degenerated architectures by introducing a constraint that ensures that each subterm maps the input to the output (i.e., in the associated computational graph there is at least one path from source to sink). Our search space consists of ca. 10446 algebraic architecture terms (please refer to Appendix C on how to compute the search space size), which is significantly larger than other popular search spaces from the literature. For comparison, the cell-based NAS-Bench-201 search space is just a minuscule subspace of size 104.18, where we apply only the blue-colored production rules and replace the CL nonterminals with a placeholder terminal x1 that will be substituted by the searched, shared cell. 5.2 EVALUATION DETAILS For all search experiments, we compared the search strategies BANAT, Random Search (RS), Regularized Evolution (RE) (Real et al., 2019; Liu et al., 2018), and BANAT (WL) (Ru et al., 2021). For implementation details of the search strategies, please refer to Appendix H. We ran search for a total of 100 evaluations with a random initial design of 10 on three seeds {777, 888, 999} on the hierarchical NAS-Bench-201 search space or 1000 evaluations with a random initial design of 50 on one seed {777} on the activation function search space using 8 asynchronous workers each with a single NVIDIA RTX 2080 Ti GPU. In each evaluation, we fully trained the architectures and recorded their last validation error. For training details on the hierarchical NAS-Bench-201 search space and activation function search space, please refer to Appendix I.1 or Appendix J.2, respectively. To assess the modeling performance of our surrogate, we compared regression performance of GPs with different kernels, i.e., our hierarchical WL kernel (hWL), (standard) WL kernel (Ru et al., 2021), and NASBOT’s kernel (Kandasamy et al., 2018). We also tried the GCN encoding (Shi et al., 2020) but it could not capture the mapping from the complex graph space to performance, resulting in constant performance predictions. Further, note that the adjacency encoding (Ying et al., 2019) and path encoding (White et al., 2021) cannot be used in our hierarchical search spaces since the former requires the same amount of nodes across graphs and the latter scales exponentially in the number of nodes. We ran 20 trials over the seeds {0, 1, ..., 19} and re-used the data from the search runs. In every trial, we sampled a training and test set of 700 or 500 architecture and validation error pairs, respectively. We fitted the surrogates with a varying number of training samples by randomly choosing samples from the training set without replacement, and recorded Kendall’s τ rank correlation between the predicted and true validation error. To assess zero-cost proxies, we re-used the data from the search runs and recorded Kendall’s τ rank correlation. 5.3 RESULTS In the following we answer all of the questions Q1-Q5. 
Figure 2 compares the results of the cell-based and hierarchical search space design using our search strategy BANAT. Results with BANAT are on par on CIFAR-10/100, superior on ImageNet-16-120, and clearly superior on CIFARTile and AddNIST (answering Q1). We emphasize that the NAS community has engineered the cell-based search space to achieve strong performance on those popular image classification datasets for over a decade, making it unsurprising that our improvements are much larger for the novel datasets. Yet, our best found architecture on ImageNet-16-120 from the hierarchical search space also achieves an excellent test error of 52.78% with only 0.626MB parameters (Appendix I.2); this is superior to the architecture found by the state-of-the-art method Shapley-NAS (i.e., 53.15%) (Xiao et al., 2022) and on par with the optimal architecture of the cell-based NAS-Bench-201 search space (i.e., 52.69% with 0.866MB).

[Figure 3: Comparison of search strategies on the hierarchical search space. We plot mean and ±1 standard error of the validation error [%] over the number of evaluations on the hierarchical NAS-Bench-201 search space (one panel each for CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST) for our search strategy BANAT (solid blue), RS (dashed orange), RE (dotted green), and BANAT (WL) (dash-dotted red). We report test errors, best architectures, and conduct further analyses in Appendix I.2.]

Figure 3 shows that our search strategy BANAT is also superior to common baselines (answering Q2) and that leveraging hierarchical information clearly improves performance (answering Q3). Further, the evaluation of surrogate performance in Figure 4 shows that incorporating hierarchical information with our hierarchical WL kernel (hWL) improves modeling, especially on smaller amounts of training data (further answering Q3). Table 1 shows that the baseline zero-cost proxies flops and l2-norm yield competitive (or often superior) results to more sophisticated zero-cost proxies, making hierarchical search spaces an interesting future research direction for them (answering Q4). Finally, Table 2 shows that we can find novel well-performing activation functions from basic mathematical operations with BANAT (answering Q5).
Nevertheless, our search spaces still significantly increase the expressiveness, including the ability to represent common search spaces from the literature (see Appendix E for how we can represent the search spaces of DARTS, Auto-Deeplab, the hierarchical cell search space of Liu et al. (2018), the Mobile-net search space, and the hierarchical random graph generator search space), as well as allowing search for entire neural architectures based around the popular NAS-Bench-201 search space CIFAR-10 CIFAR-100 ImageNet-16-120 CIFARTile AddNIST (Section 5). Thus, our search space design can facilitate the discovery of novel well-performing neural architectures in those huge search spaces of algebraic architecture terms. However, there is an inherent trade-off between the expressiveness and the difficulty of search. The much greater expressiveness facilitates search in a richer set of architectures that may include better architectures than in more restrictive search spaces, which however need not exist. Besides that, the (potential) existence of such a well-performing architecture does not result in a search strategy discovering it, even with large amounts of computing power available. Note that the tradeoff manifests itself also in the acquisition function optimization of our search strategy BANAT. In addition, a well-performing neural architecture may not work with current training protocols and hyperparameters due to interaction effects, i.e., training protocols and hyperparameters may be overoptimized for specific types of neural architectures. To overcome this limitation, one could consider a joint optimization of neural architectures, training protocols, and hyperparameters. However, this further fuels the trade-off between expressiveness and the difficulty of search. 7 CONCLUSION We introduced very expressive search spaces of algebraic architecture terms constructed with CFGs. To efficiently search over the huge search spaces, we proposed BANAT, an efficient BO strategy with a tailored kernel leveraging the available hierarchical information. Our experiments indicate that both our search space design and our search strategy can yield strong performance over existing baselines. Our results motivate further steps towards the discovery of neural architectures based on even more atomic primitive computations. Furthermore, future works could (simultaneously) learn the search space (i.e., learn the grammar) or improve search efficiency by means of multi-fidelity optimization or gradient-based search strategies. REPRODUCIBILITY STATEMENT To ensure reproducibility, we address all points of the best practices checklist for NAS research (Lindauer & Hutter, 2020) in Appendix K. ETHICS STATEMENT NAS has immense potential to facilitate systematic, automated discovery of high-performing (novel) architecture designs. However, the restrictive cell-based search spaces most commonly used in NAS render it impossible to discover truly novel neural architectures. With our general formalism based on algebraic terms, we hope to provide fertile foundation towards discovering high-performing and efficient architectures; potentially from scratch. However, search in such huge search spaces is expensive, particularly in the context of the ongoing detrimental climate crisis. 
While on the one hand, the discovered neural architectures, like other AI technologies, could potentially be exploited to have a negative societal impact; on the other hand, our work could also lead to advances across scientific disciplines like healthcare and chemistry. A FROM TERMINALS TO PRIMITIVE COMPUTATIONS AND TOPOLOGICAL OPERATORS Table 3 and Figure 5 describe the primitive computations and topological operators used throughout our experiments in Section 5 and Appendix I, respectively. Note that by adding more primitive computations and/or topological operators we could construct even more expressive search spaces. B EXTENDED BACKUS-NAUR FORM The (extended) Backus-Naur form (Backus, 1959) is a meta-language to describe the syntax of CFGs. We use meta-rules of the form S ::= α where S ∈ N is a nonterminal and α ∈ (N ∪ Σ)∗ is a string of nonterminals and/or terminals. We denote nonterminals in UPPER CASE, terminals corresponding to topological operators in Initial upper case/teletype, and terminals corresponding to primitive computations in lower case/teletype, e.g., S ::= Residual(S, S, id). To compactly express production rules with the same left-hand side nonterminal, we use the vertical bar | to indicate a choice of production rules with the same left-hand side, e.g., S ::= Linear(S, S, S) | Residual(S, S, id) | conv. C SEARCH SPACE SIZE In this section, we show how to efficiently compute the size of our search spaces constructed by CFGs. There are two cases to consider: (i) a CFG contains cycles (i.e., part of the derivation can be repeated infinitely many times) , yielding an open-ended, infinite search space; and (ii) a CFG contains no cycles, yielding in a finite search space whose size we can compute. Consider a production A → Residual(B, B, B) where Residual is a terminal, and A and B are nonterminals with B → conv | id. Consequently, there are 23 = 8 possible instances of the residual block. If we add another production choice for the nonterminal A, e.g., A → Linear(B, B, B), we would have 23 + 23 = 16 possible instances. Further, adding a production C → Linear(A, A, A) would yield a search space size of (23 + 23)3 = 4096. More generally, we introduce the function PA that returns the set of productions for nonterminal A ∈ N , and the function µ : P → N that returns all the nonterminals for a production p ∈ P . We can then recursively compute the size of the search space as follows: f(A) = ∑ p∈PA { 1 , µ(p) = ∅,∏ A′∈µ(p) f(A′) , otherwise . (11) When a CFG contains some constraint, we ensure to only account for valid architectures (i.e., compliant with the constraints) by ignoring productions which would lead to invalid architectures. D MORE DETAILS ON SEARCH SPACE CONSTRAINTS During the design of the search space, we may want to comply with some constraints, e.g., only consider valid neural architectures or impose structural constraints on architectures. We can guarantee compliance with constraints by modifying sampling (and evolution): we only allow the application of production rules, which guarantee compliance with the constraint(s). In the following, we show exemplary how this can be implemented for the former constraint mentioned above. Note that other constraints can be implemented in a similar manner To implement the constraint ”only consider valid neural architectures”, we note that our search space design only creates neural architectures where neither the spatial resolution nor the channels can be mismatched; please refer to Section 2.3 for details. 
Thus, the only way a neural architecture can become invalid is through zero operations, which could remove edges from the computational graph and possibly disassociate the input from the output. Since we recursively assemble neural architectures, it is sufficient to ensure that the derived algebraic architecture term (i.e., the associated computational graph) is compliant with the constraint, i.e.,there is at least one path from input to output. Thus, during sampling (and similarly during evolution), we modify the current production rule choices when an application of the zero operation would disassociate the input from the output. E COMMON SEARCH SPACES FROM THE LITERATURE In Section 5.1, we demonstrated how to construct the popular NAS-Bench-201 search space within our algebraic search space design, and below we show how to reconstruct the following popular search spaces: DARTS search space (Liu et al., 2019b), Auto-DeepLab search space (Liu et al., 2019a), hierarchical cell search space (Liu et al., 2018), Mobile-net search space (Tan et al., 2019), and hierarchical random graph generator search space (Ru et al., 2020). For implementation details we refer to the respective works. DARTS SEARCH SPACE The DARTS search space (Liu et al., 2019b) consists of a fixed macro architecture and a cell, i.e., a seven node directed acyclic graph (Darts; see Figure 6 for the topological operator). We omit the fixed macro architecture from our search space design for simplicity. Each cell receives the feature maps from the two preceding cells as input and outputs a single feature map. All intermediate nodes (i.e., Node3, Node4, Node5, and Node6) is computed based on all of its predecessors. Thus, we can define the DARTS search space as follows: DARTS ::= Darts(NODE3, NODE4, NODE5, NODE6) NODE3 ::= Node3(OP, OP) NODE4 ::= Node4(OP, OP, OP) NODE5 ::= Node5(OP, OP, OP, OP) NODE6 ::= Node6(OP, OP, OP, OP, OP) OP ::= sep conv 3x3 | sep conv 5x5 | dil conv 3x3 | dil conv 5x5 | max pool | avg pool | id | zero , (12) where the topological operator Node3 receives two inputs, applies the operations separately on them, and sums them up. Similarly, Node4, Node5, and Node6 apply their operations separately to the given inputs and sum them up. The topological operator Darts feeds the corresponding feature maps into each of those topological operators and finally concatenates all intermediate feature maps. AUTO-DEEPLAB SEARCH SPACE Auto-DeepLab (Liu et al., 2019a) combines a cell-level with a network-level search space to search for segmentation networks, where the cell is shared across the searched macro architecture, i.e., a twelve step (linear) path across different spatial resolutions. The cell-level design is adopted from Liu et al. (2019b) and, thus, we can re-use the CFG from Equation 12. For the network-level, we introduce a constraint that ensures that the path is of length twelve, i.e., we ensure exactly twelve derivations in our CFG. Further, we overload the nonterminals so that they correspond to the respective spatial resolution level, e.g., D4 indicates that the original input is downsampled by a factor of four; please refer to Section 2.3 for details on overloading nonterminals. 
For the sake of simplicity, we omit the first two layers and atrous spatial pyramid poolings as they are fixed, and hence define the network-level search space as follows: D4 ::= Same(CELL, D4) | Down(CELL, D8) D8 ::= Up(CELL, D4) | Same(CELL, D8) | Down(CELL, D16) D16 ::= Up(CELL, D8) | Same(CELL, D16) | Down(CELL, D32) D32 ::= Up(CELL, D16) | Same(CELL, D32) , (13) where the topological operators Up, Same, and Down upsample/halve, do not change/do not change, or downsample/double the spatial resolution/channels, respectively. The placeholder variable CELL maps to the shared DARTS cell from the language generated by the CFG from Equation 12. HIERARCHICAL CELL SEARCH SPACE The hierarchical cell search space (Liu et al., 2018) consists of a fixed (linear) macro architecture and a hierarchically assembled cell with three levels which is shared across the macro architecture. Thus, we can omit the fixed macro architecture from our search space design for simplicity. Their first, second, and third hierarchical levels correspond to the primitive computations (i.e., id, max pool, avg pool, sep conv, depth conv, conv, zero), six densely connected four node directed acyclic graphs (DAG4), and a densely connected five node directed acyclic graph (DAG5), respectively. The zero operation could lead to directed acyclic graphs which have fewer nodes. Therefore, we introduce a constraint enforcing that there are always four (level 2) or five (level 3) nodes for every directed acyclic graph. Further, since a densely connected five node directed acyclic graph graph has ten edges, we need to introduce placeholder variables (i.e., M1, ..., M6) to enforce that only six (possibly) different four node directed acyclic graphs are used, and consequently define a CFG for the third level LEVEL3 ::= DAG5(LEVEL2, ..., LEVEL2︸ ︷︷ ︸ ×10 ) LEVEL2 ::= M1 | M2 | M3 | M4 | M5 | M6 | zero , (14) mapping the placeholder variables M1, ..., M6 to the six lower-level motifs constructed by the first and second hierarchical level LEVEL2 ::= DAG4(LEVEL1, ..., LEVEL1)︸ ︷︷ ︸ ×6 LEVEL1 ::= id | max pool | avg pool | sep conv | depth conv | conv | zero . (15) MOBILE-NET SEARCH SPACE Factorized hierarchical search spaces, e.g., the Mobile-net search space (Tan et al., 2019), allow for layer diversity. They factorize a (fixed) macro architecture – often based on an already wellperforming reference architecture – into separate blocks (e.g., cells). For the sake of simplicity, we assume here a three sequential blocks (Block) architecture (Linear). In each of those blocks, we search for the convolution operations (CONV), kernel sizes (KSIZE), squeeze-and-excitation ratio (SERATIO) (Hu et al., 2018), skip connections (SKIP), number of output channels (FSIZE), and number of layers per block (#LAYERS), where the latter two are discretized using a reference architecture, e.g., MobileNetV2 (Sandler et al., 2018). Consequently, we can express this search space as follows: MACRO ::= Linear(BLOCK, BLOCK, BLOCK) BLOCK ::= Block(CONV, KSIZE, SERATIO, SKIP, FSIZE, #LAYERS) CONV ::= conv | dconv | mbconv KSIZE ::= 3 | 5 SERATIO ::= 0 | 0.25 SKIP ::= pooling | id residual | no skip FSIZE ::= 0.75 | 1.0 | 1.25 #LAYERS ::= -1 | 0 | 1 , (16) where conv, donv and mbconv correspond to convolution, depthwise convolution, and mobile inverted bottleneck convolution (Sandler et al., 2018), respectively. 
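Several of the spaces above (the shared CELL of Auto-DeepLab and the reusable motifs M1, ..., M6 of the hierarchical cell search space) rely on the substitution mechanism from Section 2.3, where placeholder terminals of one language are mapped to terms of another. A minimal sketch of that resolution step, again using an illustrative nested-tuple encoding of terms:

```python
# Macro-level term containing the placeholder terminal "x1" (e.g., a shared cell),
# mirroring Equation 4; the operator and primitive names are illustrative only.
macro = ("Macro", "x1", "x1", "x1")
cell = ("Cell", "id", "conv3x3", "zero", "avg_pool", "conv1x1", "id")

def substitute(term, mapping):
    """Replace placeholder terminals by terms from another language (shared motifs)."""
    if isinstance(term, str):
        return mapping.get(term, term)
    return (term[0],) + tuple(substitute(child, mapping) for child in term[1:])

full_term = substitute(macro, {"x1": cell})
print(full_term)   # the same cell term is reused at every occurrence of x1
```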
HIERARCHICAL RANDOM GRAPH GENERATOR SEARCH SPACE

The hierarchical random graph generator search space (Ru et al., 2020) consists of three hierarchical levels of random graph generators (i.e., Watts-Strogatz (Watts & Strogatz, 1998) and Erdős-Rényi (Erdős et al., 1960)). We denote with Watts-Strogatz i the random graph generated by the Watts-Strogatz model with i nodes. Thus, we can represent the search space as follows:

TOP ::= Watts-Strogatz 3(K, Pt)(MID, MID, MID) | ... | Watts-Strogatz 10(K, Pt)(MID, ..., MID)     [with ten MID arguments]
MID ::= Erdős-Rényi 1(Pm)(BOT) | ... | Erdős-Rényi 10(Pm)(BOT, ..., BOT)     [with ten BOT arguments]
BOT ::= Watts-Strogatz 3(K, Pb)(NODE, NODE, NODE) | ... | Watts-Strogatz 10(K, Pb)(NODE, ..., NODE)     [with ten NODE arguments]
K ::= 2 | 3 | 4 | 5 ,     (17)

where each terminal Pt, Pm, and Pb maps to a continuous number in [0.1, 0.9] (strictly speaking, this is not possible with CFGs; however, we can extend the notion of substitution by substituting a string representation of a Python (float) variable for the placeholder variables Pt, Pm, and Pb), and the placeholder variable NODE maps to a primitive computation, e.g., separable convolution. Note that we omit other hyperparameters, such as the stage ratio, channel ratio, etc., for simplicity.

F MORE DETAILS ON THE SEARCH STRATEGY

In this section, we provide more details and examples for our search strategy Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT) presented in Section 3.

F.1 BAYESIAN OPTIMIZATION

Bayesian Optimization (BO) is a powerful family of search techniques for finding the global optimum of a black-box objective problem. It is particularly useful when the objective is expensive to evaluate and thus sample efficiency is highly important (Brochu et al., 2010). To minimize a black-box objective with BO, we first need to build a probabilistic surrogate to model the objective based on the data observed so far. Based on the surrogate model, we design an acquisition function to evaluate the utility of potential candidate points by trading off exploitation (where the posterior mean of the surrogate model is low) and exploration (where the posterior variance of the surrogate model is high). The next candidate point to evaluate is then selected by maximizing the acquisition function (Shahriari et al., 2015). The general procedure of BO is summarized in Algorithm 1.

Algorithm 1 Bayesian Optimization algorithm (Brochu et al., 2010).
Input: Initial observed data D_t, a black-box objective function f, total number of BO iterations T
Output: The best recommendation about the global optimizer x*
for t = 1, ..., T do
    Select the next x_{t+1} by maximizing the acquisition function α(x|D_t)
    Evaluate the objective function at f_{t+1} = f(x_{t+1})
    D_{t+1} ← D_t ∪ {(x_{t+1}, f_{t+1})}
    Update the surrogate model with D_{t+1}
end for

We adopted the widely used acquisition function, expected improvement (EI) (Mockus et al., 1978), in our BO strategy. EI evaluates the expected amount of improvement of a candidate point x over the minimal value f' observed so far. Specifically, denoting the improvement function as I(x) = max(0, f' − f(x)), the EI acquisition function has the form

α_EI(x|D_t) = E[I(x)|D_t] = ∫_{−∞}^{f'} (f' − f) N(f; µ(x|D_t), σ²(x|D_t)) df
            = (f' − µ(x|D_t)) Φ(f'; µ(x|D_t), σ²(x|D_t)) + σ²(x|D_t) φ(f'; µ(x|D_t), σ²(x|D_t)) ,

where µ(x|D_t) and σ²(x|D_t) are the mean and variance of the predictive posterior distribution at a candidate point x, and φ(·; µ, σ²) and Φ(·; µ, σ²) denote the PDF and CDF of the normal distribution with mean µ and variance σ², respectively.
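For concreteness, the closed form above can be evaluated in a few lines of Python. The sketch below is illustrative (function and variable names are assumptions); it uses the equivalent standard-normal parameterization, in which σ²·N(f'; µ, σ²) becomes σ·φ((f'−µ)/σ), and assumes the GP posterior mean and standard deviation at the candidate points are already available:

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, eps=1e-12):
    # mu, sigma: posterior mean and standard deviation at the candidate point(s);
    # f_best: lowest observed objective value so far (we are minimizing).
    mu = np.asarray(mu, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), eps)  # avoid division by zero
    z = (f_best - mu) / sigma
    # (f' - mu) * Phi(z) + sigma * phi(z), i.e., the closed-form EI from above.
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: EI of two candidates under a GP posterior with incumbent f' = 0.10.
# expected_improvement(mu=[0.12, 0.08], sigma=[0.05, 0.02], f_best=0.10)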
To make use of ample distributed computing resources, we adopted Kriging Believer (Ginsbourger et al., 2010), which uses the predictive posterior of the surrogate model to assign hallucinated function values {f̃_p}_{p ∈ {1, ..., P}} to the P candidate points with pending evaluations {x̃_p}_{p ∈ {1, ..., P}}, and performs the next BO recommendation in the batch by pseudo-augmenting the observation data with D̃_p = {(x̃_p, f̃_p)}_{p ∈ {1, ..., P}}, namely D̃_t = D_t ∪ D̃_p. The algorithm of Kriging Believer for selecting one batch of recommended candidate points at one BO iteration is summarized in Algorithm 2.

Algorithm 2 Kriging Believer algorithm to select one batch of points.
Input: Observation data D_t, batch size b
Output: The batch points B_{t+1} = {x^(1)_{t+1}, ..., x^(b)_{t+1}}
Initialize D̃_t = D_t ∪ D̃_p
for j = 1, ..., b do
    Select the next x^(j)_{t+1} by maximizing the acquisition function α(x|D̃_t)
    Compute the predictive posterior mean µ(x^(j)_{t+1}|D̃_t)
    D̃_t ← D̃_t ∪ {(x^(j)_{t+1}, µ(x^(j)_{t+1}|D̃_t))}
end for

F.2 HIERARCHICAL WEISFEILER-LEHMAN KERNEL

Inspired by Ru et al. (2021), we adopted the Weisfeiler-Lehman (WL) graph kernel (Shervashidze et al., 2011) in the GP surrogate model to handle the graph nature of neural architectures. The basic idea of the WL kernel is to first compare node labels, and then iteratively aggregate labels of neighboring nodes, compress them into a new label, and compare them. Algorithm 3 summarizes the WL kernel procedure.

Algorithm 3 Weisfeiler-Lehman subtree kernel computation (Shervashidze et al., 2011).
Input: Graphs G1, G2, maximum iterations H
Output: Kernel function value between the graphs
Initialize the feature vectors φ(G1) = φ_0(G1), φ(G2) = φ_0(G2) with the respective counts of the original node labels (i.e., the h = 0 WL features)
for h = 1, ..., H do
    Assign a multiset M_h(v) = {l_{h−1}(u) | u ∈ N(v)} to each node v ∈ G, where l_{h−1} is the node label function of the (h−1)-th WL iteration and N is the node neighbor function
    Sort the elements in the multiset M_h(v) and concatenate them to a string s_h(v)
    Compress each string s_h(v) using the hash function f s.t. f(s_h(v)) = f(s_h(u)) ⟺ s_h(v) = s_h(u)
    Add l_{h−1}(v) as a prefix to s_h(v)
    Concatenate the WL features φ_h(G1), φ_h(G2) with the respective counts of the new labels: φ(G1) = [φ(G1), φ_h(G1)], φ(G2) = [φ(G2), φ_h(G2)]
    Set l_h(v) := f(s_h(v)) ∀v ∈ G
end for
Compute the inner product k = ⟨φ(G1), φ(G2)⟩ between the WL features φ(G1) and φ(G2) in the RKHS H

Ru et al. (2021) identified three reasons for using the WL kernel: (1) it is able to compare labeled and directed graphs of different sizes, (2) it is expressive, and (3) it is relatively efficient and scalable. Our search space design can afford a diverse spectrum of neural architectures with very heterogeneous topological structure. Therefore, reason (1) is a very important property of the WL kernel to account for the diversity of neural architectures. Moreover, if we allow many hierarchical levels, we can construct very large neural architectures. Therefore, reasons (2) and (3) are essential for accurate and fast modeling. However, neural architectures in our search spaces may be significantly larger, which makes it difficult for a single WL kernel to capture the more global topological patterns. Moreover, modeling solely based on the final neural architecture ignores the useful macro-level information from earlier hierarchical levels.
In our experiments (Sections 5 and I), we have found stronger neural architectures by incorporating the hierarchical information in the kernel design, which provides experimental support for the above arguments. However, modeling solely based on the (standard) WL graph kernel neglects the useful hierarchical information from our assembly process. Moreover, the large size of the neural architectures makes it still challenging to capture the more global topological patterns. We therefore propose to use hierarchical information through a hierarchy of WL graph kernels that take into account the different granularities of the architectures and combine them in a weighted sum. To obtain the different granularities, we use the fold operators F_l that remove algebraic terms beyond the l-th hierarchical level. Thereby, we obtain the folds

F_3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) ,     (18)
F_2(ω) = Linear(Residual, Residual, fc) ,
F_1(ω) = Linear ,

for the algebraic architecture term ω. Note that we ignore the first fold since it does not represent a labeled DAG. Figure 7 visualizes the labeled graphs Φ(F_2) and Φ(F_3) of the folds F_2 and F_3, respectively. These graphs can be fed into (standard) WL graph kernels. Therefore, we can construct a hierarchy of WL graph kernels k_WL as follows:

k_hWL(ω_i, ω_j) = Σ_{l=2}^{L} λ_l · k_WL(Φ(F_l(ω_i)), Φ(F_l(ω_j))) ,     (19)

where ω_i and ω_j are two algebraic architecture terms. Note that the λ_l govern the importance of the learned graph information across the hierarchical levels and can be optimized through the marginal likelihood.

F.3 EXAMPLES FOR THE EVOLUTIONARY OPERATIONS

For the evolutionary operations, we adopted ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). In the following, we show how these evolutionary operations manipulate algebraic terms, e.g.,

Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) ,     (20)

from the search space

S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc ,     (21)

to generate evolved algebraic terms. Figure 1 shows how we can derive the algebraic term in Equation 20 from the search space in Equation 21. For mutation operations, we first randomly pick a subterm of the algebraic term, e.g., Residual(conv, id, conv). Then, we randomly sample a new subterm with the same nonterminal symbol S as start symbol, e.g., Linear(conv, id, fc), and replace the previous subterm, yielding

Linear(Linear(conv, id, fc), Residual(conv, id, conv), fc) .     (22)

For (self-)crossover operations, we swap two subterms with the same nonterminal S as start symbol, e.g., Residual(conv, id, conv) and Residual(conv, id, conv), yielding

Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) .     (23)

Note that unlike the commonly used crossover operation, which uses two parents, self-crossover has only one parent. In future work, we could also add a self-copy operation that copies a subterm to another part of the algebraic term, explicitly regularizing diversity and thus potentially speeding up the search.

G RELATED WORK BEYOND NEURAL ARCHITECTURE SEARCH

While our work focuses exclusively on NAS, we discuss below how it relates to the areas of optimizer search (as well as from-scratch automated machine learning) and neural-symbolic programming. Optimizer search is a closely related field to NAS, where we automatically search for an optimizer (i.e., an update function for the weights) instead of an architecture. Initial works used learnable parametric or non-parametric optimizers.
While the former approaches (Andrychowicz et al., 2016; Li & Malik, 2017; Chen et al., 2017; 2022a) have poor scalability and generality, the latter works overcome those limitations. Bello et al. (2017) searched for an instantiation of hand-crafted patterns via reinforcement learning, while Wang et al. (2022) proposed a tree-structured search space (which can equivalently be described with a CFG with a constraint on the maximum depth of the syntax trees) and searched for optimizers via a modified Monte Carlo sampling approach. AutoML-Zero (Real et al., 2020) took an even more general approach by searching over entire machine learning algorithms, including optimizers, from a generic search space built from basic mathematical operations with an evolutionary algorithm. Chen et al. (2022b) used RE to discover optimizers from a generic search space (inspired by AutoML-Zero) for training vision transformers (Dosovitskiy et al., 2021).

Complementary to the above, there is recent interest in automatically synthesizing programs from domain-specific languages. Gaunt et al. (2017) proposed a hand-crafted program template and simultaneously optimized the parameters of the differentiable program with gradient descent. The HOUDINI framework (Valkov et al., 2018) proposed type-directed (top-down) enumeration and evolution approaches over differentiable functional programs. Shah et al. (2020) hierarchically assembled differentiable programs and used neural networks for the approximation of missing expressions in partial programs. Cui & Zhu (2021) treated CFGs stochastically with trainable production rule sampling weights, which were optimized with a gradient-based approach (Liu et al., 2019b). However, naïvely applying gradient-based approaches does not work in our search spaces due to the exponential explosion of supernet weights, but it remains an interesting direction for future work. Compared to these lines of work, we extended CFGs to handle changes in spatial resolution, promote regularity, and (compared to most of them) incorporate constraints, the latter two of which could also be applied in those domains. We also proposed a BO search strategy to search efficiently with a tailored kernel design to handle the hierarchical nature of the search space (i.e., the architectures).

H IMPLEMENTATION DETAILS OF THE SEARCH STRATEGIES

BANAT & BANAT (WL) The only difference between BANAT and BANAT (WL) is that the former uses our proposed hierarchy of WL kernels (hWL), whereas the latter only uses a single WL kernel (WL) for the entire architecture (c.f., Ru et al. (2021)). We ran BANAT asynchronously in parallel throughout our experiments with a batch size of B = 1, i.e., at each BO iteration a single architecture is proposed for evaluation. For the acquisition function optimization, we used a pool size of P = 200, where the initial population consisted of the current ten best-performing architectures and the remainder were randomly sampled architectures to encourage exploration in the huge search spaces. During evolution, the mutation probability was set to p_mut = 0.5 and the crossover probability was set to p_cross = 0.5. Of the crossovers, half were self-crossovers of one parent and the other half were common crossovers between two parents. The tournament selection probability was set to p_tour = 0.2. We evolved the population for at least ten iterations and a maximum of 50 iterations, using an early stopping criterion based on the fitness value improvements over the last five iterations.
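To make the evolutionary acquisition-function optimization above concrete, the following Python sketch shows grammar-respecting mutation and self-crossover on algebraic terms represented as nested tuples. It is illustrative only (the names and the tuple encoding are assumptions, not our released implementation) and, as in the running example of Appendix F.3, it assumes that every subterm is derived from the single nonterminal S, so any two subterms may be exchanged; the full implementation additionally tracks the nonterminal of each subterm:

import random

# Terms are nested tuples ("Operator", (child, child, ...)) or primitive strings.
PRODUCTIONS = [("Linear", 3), ("Residual", 3)]   # topological operators of the example grammar
PRIMITIVES = ["conv", "id", "fc"]

def random_term(depth=2):
    # Randomly derive a term from S, bounding the recursion depth of the sketch.
    if depth == 0 or random.random() < 0.4:
        return random.choice(PRIMITIVES)
    op, arity = random.choice(PRODUCTIONS)
    return (op, tuple(random_term(depth - 1) for _ in range(arity)))

def positions(term, path=()):
    # Enumerate the paths of all subterms (including the root).
    yield path
    if isinstance(term, tuple):
        for i, child in enumerate(term[1]):
            yield from positions(child, path + (i,))

def get(term, path):
    for i in path:
        term = term[1][i]
    return term

def replace(term, path, new):
    # Return a copy of `term` with the subterm at `path` replaced by `new`.
    if not path:
        return new
    op, children = term
    children = tuple(replace(c, path[1:], new) if j == path[0] else c
                     for j, c in enumerate(children))
    return (op, children)

def mutate(term):
    # Replace a randomly chosen subterm by a freshly sampled term (same nonterminal S).
    path = random.choice(list(positions(term)))
    return replace(term, path, random_term())

def self_crossover(term, max_tries=20):
    # Swap two randomly chosen, non-nested subterms within a single parent term.
    paths = list(positions(term))
    if len(paths) < 2:
        return term
    for _ in range(max_tries):
        p1, p2 = random.sample(paths, 2)
        if p1[:len(p2)] == p2 or p2[:len(p1)] == p1:   # one contains the other -> retry
            continue
        s1, s2 = get(term, p1), get(term, p2)
        return replace(replace(term, p1, s2), p2, s1)
    return term

A two-parent crossover works analogously by swapping subterms between two different terms.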
Regularized Evolution (RE) RE (Real et al., 2019; Liu et al., 2018) iteratively mutates the best architectures out of a sample of the population. We reduced the population size from 50 to 30 to account for fewer evaluations, and used a sample size of 10. We also ran RE asynchronously for better comparability.

I SEARCHING THE HIERARCHICAL NAS-BENCH-201 SEARCH SPACE

In this section, we provide training details (Section I.1) as well as complementary results and extensive analyses (Section I.2).

I.1 TRAINING DETAILS

Training protocol We evaluated all search strategies on CIFAR-10/100 (Krizhevsky et al., 2009), ImageNet-16-120 (Chrabaszcz et al., 2017), CIFARTile, and AddNIST (Geada et al., 2021). Note that CIFARTile and AddNIST are novel datasets and therefore have not yet been optimized by the research community. We provide further dataset details below. For the training of architectures on CIFAR-10/100 and ImageNet-16-120, we followed Dong & Yang (2020): we trained architectures with SGD with a learning rate of 0.1, Nesterov momentum of 0.9, and weight decay of 0.0005 with cosine annealing (Loshchilov & Hutter, 2019), and a batch size of 256 for 200 epochs. The initial channels were set to 16. For both CIFAR-10 and CIFAR-100, we used random flips with probability 0.5 followed by a random crop (32x32 with 4 pixel padding) and normalization. For ImageNet-16-120, we used a 16x16 random crop with 2 pixel padding instead. For the training of architectures on AddNIST and CIFARTile, we followed the training protocol from the CVPR-NAS 2021 competition (Geada et al., 2021): we trained architectures with SGD with a learning rate of 0.01, momentum of 0.9, and weight decay of 0.0003 with cosine annealing, and a batch size of 64 for 64 epochs. We set the initial channels to 16 and did not apply any further data augmentation.

Dataset details In Table 4, we provide the licenses for the datasets used in our experiments. For the training of architectures on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-16-120 (Chrabaszcz et al., 2017), we followed the dataset splits and training protocol of NAS-Bench-201 (Dong & Yang, 2020). For CIFAR-10, we split the original training set into a new training set with 25k images and a validation set with 25k images following Dong & Yang (2020). The test set remained unchanged. For evaluation, we trained architectures on both the training and validation set. For CIFAR-100, the training set remained unchanged, but the test set was partitioned into a validation set and a new test set with 5k images each. For ImageNet-16-120, all splits remained unchanged. For AddNIST and CIFARTile, we used the training, validation, and test splits as defined in the CVPR-NAS 2021 competition (Geada et al., 2021).

I.2 EXTENDED SEARCH RESULTS AND ANALYSES

Supplementary to Figure 2, Figure 8 compares the cell-based vs. hierarchical NAS-Bench-201 search space from Section 5.1 using RS, RE, and BANAT (WL). The cell-based search space design shows on par or stronger performance on all datasets except for CIFARTile for these three search strategies. In contrast, for our proposed search strategy BANAT we find on par (CIFAR-10/100) or superior (ImageNet-16-120, CIFARTile, and AddNIST) performance using the hierarchical search space design.
This clearly shows that the increase in search space size does not necessarily yield the discovery of stronger neural architectures. Further, it exemplifies the importance of a strong search strategy to search effectively and efficiently in huge hierarchical search spaces (Q2), and provides further evidence that the incorporation of hierarchical information is a key contributor to search efficiency (Q3). Based on this, we believe that future work using, e.g., graph neural networks as a surrogate may benefit from the incorporation of hierarchical information. We report the test errors of our best found architectures in Table 5. We observe that our search strategy BANAT finds the strongest-performing architectures across all datasets (Q2, Q3). Also note that we achieve better (validation and) test performance on ImageNet-16-120 on the hierarchical search space than the state-of-the-art search strategy on the cell-based NAS-Bench-201 search space (i.e., +0.37%p compared to Shapley-NAS (Xiao et al., 2022)) (Q1).

[Figure 8: Validation error (mean and ±1 standard error) over the number of evaluations on CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST, comparing the hierarchical and the cell-based search space for (a) Random Search (RS), (b) Regularized Evolution (RE), and (c) BANAT (WL).]

[Table 5: Test errors (and ±1 standard error) of popular baseline architectures (e.g., ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019) variants) and of our best found architectures on the cell-based and hierarchical NAS-Bench-201 search space on CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST. The compared methods are the best ResNet and EfficientNet variants, the NAS-Bench-201 oracle (optimal numbers as reported in Dong & Yang (2020)), RS, NASWOT with N = 10/100/1000/10000 (Mellor et al., 2021), RE (Real et al., 2019; Liu et al., 2018), BANAT (WL) (Ru et al., 2021), and BANAT. Note that we picked the ResNet and EfficientNet variant based on the test error, consequently giving an overestimate of their test performance. Reported is the (best) test error (and ±1 standard error) across the three seeds {777, 888, 999} of the best architecture of the three search runs with the lowest validation error.]
1. What is the focus of the paper regarding context-free grammars?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its flexibility and ability to discover new architectures?
3. Do you have any concerns or questions regarding the paper's claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes using context-free grammars to define a search space. This grammar is very flexible and one can use it to define all kinds of search spaces. The production rules can be formed to impose constraints as well. Their search algorithm, BANAT, is a Bayesian-optimization-based one, where the surrogate model is a hierarchical Weisfeiler-Lehman kernel (the Weisfeiler-Lehman kernel used by NASBOWL, adapted to work for their search space) and the acquisition function is expected improvement. They used a hierarchical NAS-Bench-201 search space.
Strengths And Weaknesses
Strength: This paper defines a language for defining search spaces. Their grammar can be used for all the search spaces. Their search algorithm is able to find better architectures faster than others in the hierarchical NAS-Bench-201 search space (as they are using a surrogate model). In their empirical evaluations, it was highlighted that while the cell-based search space works well for CIFAR-10, CIFAR-100, and ImageNet-16-120, it does not perform well on CIFARTile or AddNIST.
** Comments / Questions **
It is a bit hard to understand why we need a special grammar to design the search space. The authors claim it is more flexible and can discover new architectures. But one still needs to design the primitives and how they are connected together. So how can we really discover completely new architectures?
The hierarchical search space is much larger than the cell-based search space and is used for scenarios where one is interested in finding architectures with low latency, number of FLOPs, etc. So it is not surprising that BANAT was able to discover architectures with fewer parameters. It would have been better if they had actually demonstrated how it fared on multi-objective optimization problems and compared against other hierarchical baselines such as MnasNet.
The authors claim that their search space design is more flexible than other baselines and is especially beneficial for the object detection setting. But they never ran BANAT to find architectures for object detection.
For Figure 4, Kendall's Tau is the most commonly used metric. Please use that.
Please use a diagram to illustrate what the hierarchical search space looks like.
Please specify the details of Bayesian optimization, such as the acquisition function used, in the main paper rather than in the appendix.
The time taken for ImageNet-16-120 is 1.8 × 8 GPU days. Please specify that explicitly.
Clarity, Quality, Novelty And Reproducibility
The paper is written clearly. (I have prior knowledge of CFGs; otherwise it would have been harder to follow.) The novelty is limited.
ICLR
Title Towards Discovering Neural Architectures from Scratch Abstract The discovery of neural architectures from scratch is the long-standing goal of Neural Architecture Search (NAS). Searching over a wide spectrum of neural architectures can facilitate the discovery of previously unconsidered but well-performing architectures. In this work, we take a large step towards discovering neural architectures from scratch by expressing architectures algebraically. This algebraic view leads to a more general method for designing search spaces, which allows us to compactly represent search spaces that are 100s of orders of magnitude larger than common spaces from the literature. Further, we propose a Bayesian Optimization strategy to efficiently search over such huge spaces, and demonstrate empirically that both our search space design and our search strategy can be superior to existing baselines. We open source our algebraic NAS approach and provide APIs for PyTorch and TensorFlow.

1 INTRODUCTION

Neural Architecture Search (NAS), a field with over 1,000 papers in the last two years (Deng & Lindauer, 2022), is widely touted to automatically discover novel, well-performing architectural patterns. However, while state-of-the-art performance has already been demonstrated in hundreds of NAS papers (prominently, e.g., (Tan & Le, 2019; 2021; Liu et al., 2019a)), success in automatically finding truly novel architectural patterns has been very scarce (Ramachandran et al., 2017; Liu et al., 2020). For example, novel architectures, such as transformers (Vaswani et al., 2017; Dosovitskiy et al., 2021), have been crafted manually and were not found by NAS. There is an accumulating amount of evidence that over-engineered, restrictive search spaces (e.g., cell-based ones) are major impediments for NAS to discover truly novel architectures. Yang et al. (2020b) showed that in the DARTS search space (Liu et al., 2019b) the manually-defined macro architecture is more important than the searched cells, while Xie et al. (2019) and Ru et al. (2020) achieved competitive performance with randomly wired neural architectures that do not adhere to common search space limitations. As a result, there are increasing efforts to break these impediments, and the discovery of novel neural architectures has been referred to as the holy grail of NAS. Hierarchical search spaces are a promising step towards this holy grail. In an initial work, Liu et al. (2018) proposed a hierarchical cell, which is shared across a fixed macro architecture, imitating the compositional neural architecture design pattern widely used by human experts. However, subsequent works showed the importance of both layer diversity (Tan & Le, 2019) and macro architecture (Xie et al., 2019; Ru et al., 2020). In this work, we introduce a general formalism for the representation of hierarchical search spaces, allowing both for layer diversity and a flexible macro architecture. The key observation is that any neural architecture can be represented algebraically; e.g., two residual blocks followed by a fully-connected layer in a linear macro topology can be represented as the algebraic term

ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) .     (1)

We build upon this observation and employ Context-Free Grammars (CFGs) to construct large spaces of such algebraic architecture terms.
Although a particular search space is of course limited in its overall expressiveness, with this approach, we could effectively represent any neural architecture, facilitating the discovery of truly novel ones. Due to the hierarchical structure of algebraic terms, the number of candidate neural architectures scales exponentially with the number of hierarchical levels, leading to search spaces 100s of orders of magnitude larger than commonly used ones. To search in these huge spaces, we propose an efficient search strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), which leverages hierarchical information, capturing the topological patterns across the hierarchical levels, in its tailored kernel design.

Our contributions are as follows:
• We present a novel technique to construct hierarchical NAS spaces based on an algebraic notion that views neural architectures as algebraic architecture terms and uses CFGs to create algebraic search spaces (Section 2).
• We propose BANAT, a Bayesian Optimization (BO) strategy that uses a tailored modeling strategy to efficiently and effectively search over our huge search spaces (Section 3).
• After surveying related work (Section 4), we empirically show that search spaces of algebraic architecture terms perform on par or better than common cell-based spaces on different datasets, show the superiority of BANAT over common baselines, demonstrate the importance of incorporating hierarchical information in the modeling, and show that we can find novel architectural parts from basic mathematical operations (Section 5).

We open source our code and provide APIs for PyTorch (Paszke et al., 2019) and TensorFlow (Abadi et al., 2015) at https://anonymous.4open.science/r/iclr23_tdnafs.

2 ALGEBRAIC NEURAL ARCHITECTURE SEARCH SPACE CONSTRUCTION

In this section we present an algebraic view on Neural Architecture Search (NAS) (Section 2.1) and propose a construction mechanism based on Context-Free Grammars (CFGs) (Sections 2.2 and 2.3).

2.1 ALGEBRAIC ARCHITECTURE TERMS FOR NEURAL ARCHITECTURE SEARCH

We introduce algebraic architecture terms as a string representation for neural architectures from a (term) algebra. Formally, an algebra (A, F) consists of a non-empty set A (universe) and a set F of operators f : A^n → A of different arities n ≥ 0 (Birkhoff, 1935). In our case, A corresponds to the set of all (sub-)architectures and we distinguish between two types of operators: (i) nullary operators representing primitive computations (e.g., conv() or fc()) and (ii) k-ary operators with k > 0 representing topological operators (e.g., Linear(·, ·, ·) or Residual(·, ·, ·)). For the sake of notational simplicity, we omit parentheses for nullary operators (i.e., we write conv). Term algebras (Baader & Nipkow, 1999) are a special type of algebra mapping an algebraic expression to its string representation. E.g., we can represent a neural architecture as the algebraic architecture term ω as shown in Equation 1. Term algebras also allow for variables x_i that are themselves set to terms and can be re-used across a term. In our case, the intermediate variables x_i can therefore share patterns across the architecture, e.g., a shared cell. For example, we could define the intermediate variable x1 to map to the residual block in ω from Equation 1 as follows:

ω' = Linear(x1, x1, fc),   x1 = Residual(conv, id, conv) .     (2)
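As a small illustration of this notation (not the paper's released API; the helper below is a hypothetical example), algebraic terms can be handled as plain strings, and an intermediate variable such as x1 can be substituted by a shared subterm to recover Equation 1 from Equation 2:

# Illustrative sketch: algebraic terms as plain strings, with intermediate
# variables substituted by shared subterms (e.g., a shared cell).
def substitute(term: str, bindings: dict) -> str:
    # Naive textual substitution; assumes variable names do not overlap with
    # operator or primitive names.
    for name, subterm in bindings.items():
        term = term.replace(name, subterm)
    return term

omega_prime = "Linear(x1, x1, fc)"
omega = substitute(omega_prime, {"x1": "Residual(conv, id, conv)"})
# omega == "Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc)"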
Algebraic NAS We formulate our algebraic view on NAS, where we search over algebraic architecture terms ω ∈ Ω representing their associated architectures Φ(ω), as follows:

argmin_{ω ∈ Ω} f(Φ(ω)) ,     (3)

where f(·) is an error measure that we seek to minimize, e.g., the final validation error of a fixed training protocol. For example, we can represent the popular cell-based NAS-Bench-201 search space (Dong & Yang, 2020) as an algebraic search space Ω. The algebraic search space Ω is characterized by a fixed macro architecture Macro(...) that stacks 15 instances of a shared cell Cell(pi, pi, pi, pi, pi, pi), where the cell has six edges, on each of which one of five primitive computations can be placed (i.e., pi for i ∈ {1, 2, 3, 4, 5} corresponding to zero, id, conv1x1, conv3x3, or avg pool, respectively). By leveraging the intermediate variable x1 we can effectively share the cell topology across the architecture. For example, we can express an architecture ωi ∈ Ω from the NAS-Bench-201 search space Ω as:

ωi = Macro(x1, x1, ..., x1)     [with 15 arguments x1],   x1 = Cell(p1, p2, p1, p5, p4, p3) .     (4)

Algebraic NAS over such algebraic architecture terms then amounts to finding the best-performing primitive computation pi for each edge, as the macro architecture is fixed. In contrast to this simple cell-based algebraic space, the search spaces we consider can be much more expressive and, e.g., allow for layer diversity and a flexible macro architecture over several hierarchical levels (Section 5.1).

2.2 CONSTRUCTING NEURAL ARCHITECTURE TERMS WITH CONTEXT-FREE GRAMMARS

We propose to use Context-Free Grammars (CFGs) (Chomsky, 1956) since they can naturally generate (hierarchical) algebraic architecture terms. Compared to other search space designs, CFGs give us a formally grounded way to naturally and compactly define very expressive hierarchical search spaces (e.g., see Section 5.1). We can also unify popular search spaces from the literature with our general search space design in one framework (Appendix E). They further give us a simple mechanism to evolve architectures while staying within the defined search space (Section 3). Formally, a CFG G = ⟨N, Σ, P, S⟩ consists of a finite set of nonterminals N and terminals Σ with N ∩ Σ = ∅, a finite set of production rules P = {A → β | A ∈ N, β ∈ (N ∪ Σ)*}, where the asterisk * denotes the Kleene star operation (Kleene et al., 1956), and a start symbol S ∈ N. To generate an algebraic architecture term, starting from the start symbol S, we recursively replace nonterminals of the current algebraic term with a right-hand side of a production rule consisting of nonterminals and terminals, until the resulting string does not contain any nonterminals. For example, consider the following CFG in extended Backus-Naur form (Backus, 1959) (see Appendix B for background):

S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc     (5)

From this CFG, we can derive the algebraic architecture term ω (with three hierarchical levels) from Equation 1 as follows:

S → Linear(S, S, S)     (Level 1)
  → Linear(Residual(S, S, S), Residual(S, S, S), fc)     (Level 2)     (6)
  → Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc)     (Level 3)

Figure 1 makes the above derivation and the connection to the associated architecture explicit. The (potentially infinite) set of all algebraic terms generated by a CFG G is the language L(G), which naturally forms our search space Ω. Thus, the algebraic NAS problem from Equation 3 becomes:

argmin_{ω ∈ L(G)} f(Φ(ω)) .     (7)
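For illustration (an assumed sketch, not the released implementation), a grammar such as Equation 5 can be encoded as a mapping from nonterminals to productions and expanded level by level, which reproduces derivations of the kind shown in Equation 6:

import random
import re

# The grammar of Equation 5: the single nonterminal S and its production choices.
PRODUCTIONS = {"S": ["Linear(S, S, S)", "Residual(S, S, S)", "conv", "id", "fc"]}

def derive(start="S", max_levels=3, seed=0):
    # Expand all remaining occurrences of S once per level and record the trace,
    # mirroring the Level 1 -> Level 3 derivation in Equation 6. The depth bound
    # forces terminal choices in the last level so that the sketch always halts.
    rng = random.Random(seed)
    term, trace = start, []
    for level in range(max_levels):
        choices = PRODUCTIONS["S"] if level < max_levels - 1 else ["conv", "id", "fc"]
        term = re.sub(r"\bS\b", lambda _: rng.choice(choices), term)
        trace.append(term)
        if "S" not in term:
            break
    return trace

# derive() might return, e.g.:
# ['Residual(S, S, S)',
#  'Residual(conv, Linear(S, S, S), id)',
#  'Residual(conv, Linear(fc, id, conv), id)']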
2.3 EXTENSIONS TO THE CONSTRUCTION MECHANISM

Constraints In many search space designs, we want to adhere to some constraints, e.g., to limit the number of nodes or to ensure that for all architectures in the search space there exists at least one path from the input to the output. We can simply do so by allowing only the application of production rules which guarantee compliance with such constraints. For example, to ensure that there is at least one path from the input to the output, it is sufficient to ensure that each derivation connects its input to the output due to the recursive nature of CFGs. Note that this makes CFGs context-sensitive w.r.t. those constraints. For more details, please refer to Appendix D.

Fostering regularity through substitution To implement the intermediate variables x_i (Section 2.1) we leverage that context-free languages are closed under substitution: we map terminals, representing the intermediate variables x_i, from one language to algebraic terms of other languages, e.g., a shared cell. For example, we can split a CFG G, constructing entire algebraic architecture terms, into the CFGs Gmacro and Gcell for the macro- or cell-level, respectively. Further, we add a single (or multiple) intermediate terminal(s) x1 to Gmacro which maps to an algebraic term ω1 ∈ L(Gcell), e.g., the searchable cell. Thus, we effectively search over the macro-level as well as a single, shared cell. Note that by using a fixed macro architecture (i.e., |L(Gmacro)| = 1), we can represent cell-based search spaces, e.g., NAS-Bench-201 (Dong & Yang, 2020), while also being able to represent more expressive search spaces (e.g., see Section 5.1). More generally, we could extend this by adding further intermediate terminals which map to other languages L(Gj), or by adding intermediate terminals to Gcell which map to languages L(Gj≠1). In this way, we can effectively foster regularity.

Representing common architecture patterns for object recognition Neural architectures for object recognition commonly build a hierarchy of features that are gradually downsampled, e.g., by pooling operations. However, previous works in NAS were either limited to a fixed macro architecture (Zoph et al., 2018), only allowed for linear macro architectures (Liu et al., 2019a), or required post-sampling testing for resolution mismatches (Stanley & Miikkulainen, 2002; Ru et al., 2020). While this produced impressive performance on popular benchmarks (Tan & Le, 2019; 2021; Liu et al., 2019a), it is an open research question whether a different type of macro architecture (e.g., one with multiple branches) could yield even better performance. To accommodate flexible macro architectures, we propose to overload the nonterminals. In particular, the nonterminals indicate how often we apply downsampling operations in the subsequent derivations of the nonterminal. Consider the production rule D2 → Residual(D1, D2, D1), where Di with i ∈ {1, 2} are nonterminals which indicate that i downsampling operations have to be applied in their subsequent derivations. That is, in both paths of the residual the input features will be downsampled twice and, consequently, the merging paths will have the same spatial resolution. Thereby, this mechanism distributes the downsampling operations recursively across the architecture. For the channels, we adopted the common design to double the number of channels whenever we halve the spatial resolution in our experiments.
Note that we could also handle a varying number of channels by using, e.g., depthwise concatenation as the merge operation.

3 BAYESIAN OPTIMIZATION FOR ALGEBRAIC NEURAL ARCHITECTURE SEARCH

We propose a BO strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), to efficiently search in the huge search spaces spanned by our algebraic architecture terms: we introduce a novel surrogate model which combines a Gaussian Process (GP) surrogate with a tailored kernel that leverages the hierarchical structure of algebraic neural architecture terms (see below), and adopt expected improvement as the acquisition function (Mockus et al., 1978). Given the discrete nature of architectures, we adopt ideas from grammar-guided genetic programming (McKay et al., 2010; Moss et al., 2020) for acquisition function optimization. Furthermore, to reduce wall-clock time by leveraging parallel computing resources, we adapt the Kriging Believer (Ginsbourger et al., 2010) to select architectures at every search iteration so that we can train and evaluate them in parallel. Specifically, Kriging Believer assigns hallucinated values (i.e., the posterior mean) to pending evaluations at each iteration to avoid redundant evaluations. For a more detailed explanation of BANAT, please refer to Appendix F.

Hierarchical Weisfeiler-Lehman kernel (hWL) Inspired by the state-of-the-art BO approach for NAS (Ru et al., 2021), we adopt the WL graph kernel (Shervashidze et al., 2011) in a GP surrogate, modeling the performance of the algebraic architecture terms ωi via the associated architectures Φ(ωi). However, modeling solely based on the final architecture ignores the useful hierarchical information inherent in our algebraic representation. Moreover, the large size of the architectures also makes it difficult to use a single WL kernel to capture the more global topological patterns. Since our hierarchical construction can be viewed as a series of gradually unfolding architectures, with the final architecture containing only primitive computations, we propose a novel hierarchical kernel design that assigns a WL kernel to each hierarchy and combines them in a weighted sum. To this end, we introduce fold operators F_l that remove algebraic terms beyond the l-th hierarchical level. For example, the fold operators F_1, F_2 and F_3 yield for the algebraic term ω (Equation 1)

F_3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) ,     (8)
F_2(ω) = Linear(Residual, Residual, fc) ,
F_1(ω) = Linear .

Note the similarity to the derivations in Figure 1. Furthermore note that, in practice, we also add the corresponding nonterminals to integrate information from our hierarchical construction process. We define our hierarchical WL kernel (hWL) for two architectures Φ(ωi) and Φ(ωj) with algebraic architecture terms ωi or ωj, respectively, constructed over a hierarchy of L levels, as follows:

k_hWL(ωi, ωj) = Σ_{l=2}^{L} λ_l · k_WL(Φ(F_l(ωi)), Φ(F_l(ωj))) ,     (9)

where the weights λ_l govern the importance of the learned graph information at different hierarchical levels (granularities of the architecture) and can be tuned (along with other hyperparameters of the GP) by maximizing the marginal likelihood. We omit l = 1 in the additive kernel as F_1(ω) does not contain any edge features, which are required for our WL kernel k_WL. For more details on our novel hierarchical kernel design, please refer to Appendix F.2.
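The following simplified Python sketch illustrates Equation 9; it is not our released implementation. Each fold graph Φ(F_l(ω)) is assumed to be given as a small node-labeled directed graph, the per-level WL kernel is approximated by the inner product of WL label-count features, and the level weights λ_l are assumed to be given (in practice they are tuned via the marginal likelihood):

from collections import Counter
import networkx as nx

def wl_features(graph: nx.DiGraph, iterations: int = 2) -> Counter:
    # Weisfeiler-Lehman subtree features: counts of iteratively refined node labels.
    labels = {v: str(graph.nodes[v]["label"]) for v in graph}
    features = Counter(labels.values())
    for _ in range(iterations):
        labels = {
            v: labels[v] + "|" + ",".join(sorted(labels[u] for u in graph.predecessors(v)))
            for v in graph
        }
        features.update(labels.values())
    return features

def wl_kernel(g1: nx.DiGraph, g2: nx.DiGraph, iterations: int = 2) -> float:
    # Inner product of the WL feature count vectors of the two graphs.
    f1, f2 = wl_features(g1, iterations), wl_features(g2, iterations)
    return float(sum(f1[key] * f2[key] for key in f1))

def hierarchical_wl_kernel(folds_i, folds_j, level_weights) -> float:
    # Weighted sum of per-level WL kernels over matching folds (Equation 9);
    # folds_i and folds_j list the fold graphs of the two terms for levels 2..L.
    return sum(w * wl_kernel(gi, gj) for w, gi, gj in zip(level_weights, folds_i, folds_j))

Constructing the fold graphs from an algebraic term is assumed to happen elsewhere; this sketch uses only node labels, whereas the full kernel also exploits edge features and the added nonterminals.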
Our proposed kernel efficiently captures the information in all algebraic term construction levels, which substantially improves its search and surrogate regression performance on our search space, as demonstrated in Section 5.

Acquisition function optimization To optimize the acquisition function, we adopt ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). For mutation, we randomly replace a sub-architecture term with a new randomly generated term, using the same nonterminal as start symbol. For crossover, we randomly swap two sub-architecture terms with the same corresponding nonterminal. We consider two crossover operators: a novel self-crossover operation swaps two subterms of a single architecture term, and the common crossover operation swaps subterms of two different architecture terms. Importantly, all evolutionary operations by design only result in valid terms. We provide examples for the evolutionary operations in Appendix F.

4 RELATED WORK

We discuss related works in NAS below and discuss works beyond NAS in Appendix G.

Neural Architecture Search Neural Architecture Search (NAS) aims to automatically discover architectural patterns (or even entire architectures) (Elsken et al., 2019). Previous approaches used, e.g., reinforcement learning (Zoph & Le, 2017; Pham et al., 2018), evolution (Real et al., 2017), gradient descent (Liu et al., 2019b), or Bayesian Optimization (BO) (Kandasamy et al., 2018; White et al., 2021; Ru et al., 2021). To enable the effective use of BO on graph-like inputs for NAS, previous works have proposed to use a GP with specialized kernels (Kandasamy et al., 2018; Ru et al., 2021), encoding schemes (Ying et al., 2019; White et al., 2021), or graph neural networks as the surrogate model (Ma et al., 2019; Shi et al., 2020; Zhang et al., 2019). Different to prior works, we explicitly leverage the hierarchical construction of architectures for modeling.

Searching for novel architectural patterns Previous works mostly focused on finding a shared cell (Zoph et al., 2018) with a fixed macro architecture, while only a few works considered more expressive hierarchical search spaces (Liu et al., 2018; 2019a; Tan et al., 2019). The latter works considered hierarchical assembly (Liu et al., 2018), combination of a cell- and network-level search space (Liu et al., 2019a; Zhang et al., 2020), evolution of network topologies (Miikkulainen et al., 2019), factorization of the search space (Tan et al., 2019), parameterization of a hierarchy of random graph generators (Ru et al., 2020), a formal language over computational graphs (Negrinho et al., 2019), or a hierarchical construction of TensorFlow programs (So et al., 2021). Similarly, our formalism allows us to design search spaces covering a general set of architecture design choices, but also permits the search for macro architectures with spatial resolution changes and multiple branches. We also handle spatial resolution changes without requiring post-hoc testing or resizing of the feature maps, unlike prior works (Stanley & Miikkulainen, 2002; Miikkulainen et al., 2019; Stanley et al., 2019).
Other works proposed approaches based on string rewriting systems (Kitano, 1990; Boers et al., 1993), cellular (or tree-structured) encoding schemes (Gruau, 1994; Luke & Spector, 1996; De Jong & Pollack, 2001; Cai et al., 2018), hyperedge replacement graph grammars (Luerssen & Powers, 2003; Luerssen, 2005), attribute grammars (Mouret & Doncieux, 2008), CFGs (Jacob & Rehder, 1993; Couchet et al., 2007; Ahmadizar et al., 2015; Ahmad et al., 2019; Assunção et al., 2017; 2019; Lima et al., 2019; de la Fuente Castillo et al., 2020), or And-Or-grammars (Li et al., 2019). Different to these prior works, we construct entire architectures with spatial resolution changes across multiple branches, and propose techniques to incorporate constraints and foster regularity. Orthogonal to the aforementioned approaches, Roberts et al. (2021) searched over neural (XD-)operations; this is orthogonal to our approach, i.e., our predefined primitive computations could be replaced by their proposed XD-operations.

5 EXPERIMENTS

In this section, we investigate potential benefits of hierarchical search spaces and our search strategy BANAT. More specifically, we address the following questions:
Q1 Can hierarchical search spaces yield on par or superior architectures compared to cell-based search spaces with a limited number of evaluations?
Q2 Can our search strategy BANAT improve performance over common baselines?
Q3 Does leveraging the hierarchical information improve performance?
Q4 Do zero-cost proxies work in vast hierarchical search spaces?
Q5 Can we discover novel architectural patterns (e.g., activation functions)?
To answer questions Q1-Q4, we introduce a hierarchical search space based on the popular NAS-Bench-201 search space (Dong & Yang, 2020) in Section 5.1. To answer question Q5, we search for activation functions (Ramachandran et al., 2017) and defer the search space definition to Appendix J.1. We provide complementary results and analyses in Appendix I.2 and J.3.

5.1 HIERARCHICAL NAS-BENCH-201

We propose a hierarchical variant of the popular cell-based NAS-Bench-201 search space (Dong & Yang, 2020) by adding a hierarchical macro space (i.e., spatial resolution flow and wiring at the macro-level) and parameterizable convolutional blocks (i.e., choice of convolutions, activations, and normalizations). We express the hierarchical NAS-Bench-201 search space with the CFG Gh as follows:

D2 ::= Linear3(D1, D1, D0) | Linear3(D0, D1, D1) | Linear4(D1, D1, D0, D0)
D1 ::= Linear3(C, C, D) | Linear4(C, C, C, D) | Residual3(C, C, D, D)
D0 ::= Linear3(C, C, CL) | Linear4(C, C, C, CL) | Residual3(C, C, CL, CL)
D ::= Linear2(CL, down) | Linear3(CL, CL, down) | Residual2(C, down, down)
C ::= Linear2(CL, CL) | Linear3(CL, CL, CL) | Residual2(CL, CL, CL)
CL ::= Cell(OP, OP, OP, OP, OP, OP)
OP ::= zero | id | BLOCK | avg pool
BLOCK ::= Linear3(ACT, CONV, NORM)
ACT ::= relu | hardswish | mish
CONV ::= conv1x1 | conv3x3 | dconv3x3
NORM ::= batch | instance | layer .     (10)

See Appendix A for the terminal vocabulary of topological operators and primitive computations. The productions with the nonterminals {D2, D1, D0, D} define the spatial resolution flow and together with {C} define the macro architecture containing possibly multiple branches. The productions for {CL, OP} construct the NAS-Bench-201 cell and {BLOCK, ACT, CONV, NORM} parameterize the convolutional block.
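To give a feeling for how large such a grammar-defined space is, the sketch below implements the recursive counting of Appendix C (Equation 11) for a toy grammar encoded as a Python dictionary; the encoding and names are illustrative, and the same recursion, extended with the constraint handling described in Appendix C, underlies the ca. 10^446 figure reported below for the hierarchical NAS-Bench-201 space:

from math import prod

# Toy grammar from Appendix C: each nonterminal maps to its productions, and each
# production lists the nonterminals on its right-hand side (terminals add no choices).
GRAMMAR = {
    "C": [["A", "A", "A"]],                    # C -> Linear(A, A, A)
    "A": [["B", "B", "B"], ["B", "B", "B"]],   # A -> Residual(B, B, B) | Linear(B, B, B)
    "B": [[], []],                             # B -> conv | id
}

def count_terms(nonterminal: str) -> int:
    # Number of distinct terms derivable from `nonterminal` (acyclic grammars only).
    total = 0
    for rhs_nonterminals in GRAMMAR[nonterminal]:
        total += prod(count_terms(a) for a in rhs_nonterminals) if rhs_nonterminals else 1
    return total

# count_terms("B") == 2, count_terms("A") == 16, count_terms("C") == 4096,
# matching the worked example in Appendix C.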
To ensure that we use the same distribution over the primitive computations as in NAS-Bench-201, we reweigh the sampling probabilities of the productions generated by the nonterminal OP, i.e., all production choices have a sampling probability of 20%, but BLOCK has 40%. Note that we omit the stem (i.e., 3x3 convolution followed by batch normalization) and the classifier (i.e., batch normalization followed by ReLU, global average pooling, and a fully-connected layer) for simplicity. We implemented the merge operation as element-wise summation. Different to the cell-based NAS-Bench-201 search space, we exclude degenerated architectures by introducing a constraint that ensures that each subterm maps the input to the output (i.e., in the associated computational graph there is at least one path from source to sink). Our search space consists of ca. 10^446 algebraic architecture terms (please refer to Appendix C on how to compute the search space size), which is significantly larger than other popular search spaces from the literature. For comparison, the cell-based NAS-Bench-201 search space is just a minuscule subspace of size 10^4.18, where we apply only the blue-colored production rules and replace the CL nonterminals with a placeholder terminal x1 that will be substituted by the searched, shared cell.

5.2 EVALUATION DETAILS

For all search experiments, we compared the search strategies BANAT, Random Search (RS), Regularized Evolution (RE) (Real et al., 2019; Liu et al., 2018), and BANAT (WL) (Ru et al., 2021). For implementation details of the search strategies, please refer to Appendix H. We ran the search for a total of 100 evaluations with a random initial design of 10 on three seeds {777, 888, 999} on the hierarchical NAS-Bench-201 search space, or for 1000 evaluations with a random initial design of 50 on one seed {777} on the activation function search space, using 8 asynchronous workers each with a single NVIDIA RTX 2080 Ti GPU. In each evaluation, we fully trained the architectures and recorded their last validation error. For training details on the hierarchical NAS-Bench-201 search space and the activation function search space, please refer to Appendix I.1 or Appendix J.2, respectively. To assess the modeling performance of our surrogate, we compared the regression performance of GPs with different kernels, i.e., our hierarchical WL kernel (hWL), the (standard) WL kernel (Ru et al., 2021), and NASBOT's kernel (Kandasamy et al., 2018). We also tried the GCN encoding (Shi et al., 2020), but it could not capture the mapping from the complex graph space to performance, resulting in constant performance predictions. Further, note that the adjacency encoding (Ying et al., 2019) and the path encoding (White et al., 2021) cannot be used in our hierarchical search spaces, since the former requires the same number of nodes across graphs and the latter scales exponentially in the number of nodes. We ran 20 trials over the seeds {0, 1, ..., 19} and re-used the data from the search runs. In every trial, we sampled a training and test set of 700 or 500 architecture and validation error pairs, respectively. We fitted the surrogates with a varying number of training samples by randomly choosing samples from the training set without replacement, and recorded Kendall's τ rank correlation between the predicted and true validation errors. To assess zero-cost proxies, we re-used the data from the search runs and recorded Kendall's τ rank correlation.
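As a minimal illustration of this evaluation protocol (the snippet is hypothetical and not part of the released code), the rank correlation between a surrogate's predictions and the true validation errors on the held-out test set can be computed as follows:

from scipy.stats import kendalltau

def rank_correlation(predicted_errors, true_errors) -> float:
    # Kendall's tau between predicted and true validation errors on the test set.
    tau, _ = kendalltau(predicted_errors, true_errors)
    return tau

# Example: a perfectly rank-preserving surrogate yields tau = 1.0.
# rank_correlation([0.31, 0.08, 0.15], [0.30, 0.10, 0.12])  # -> 1.0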
5.3 RESULTS

In the following, we answer all of the questions Q1-Q5. Figure 2 compares the results of the cell-based and hierarchical search space design using our search strategy BANAT.

[Figure 2: Validation error over the number of evaluations on CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST for the hierarchical vs. the cell-based search space using BANAT.]

Results with BANAT are on par on CIFAR-10/100, superior on ImageNet-16-120, and clearly superior on CIFARTile and AddNIST (answering Q1). We emphasize that the NAS community has engineered the cell-based search space to achieve strong performance on those popular image classification datasets for over a decade, making it unsurprising that our improvements are much larger for the novel datasets. Yet, our best found architecture on ImageNet-16-120 from the hierarchical search space also achieves an excellent test error of 52.78% with only 0.626MB parameters (Appendix I.2); this is superior to the architecture found by the state-of-the-art method Shapley-NAS (i.e., 53.15%) (Xiao et al., 2022) and on par with the optimal architecture of the cell-based NAS-Bench-201 search space (i.e., 52.69% with 0.866MB). Figure 3 shows that our search strategy BANAT is also superior to common baselines (answering Q2) and that leveraging hierarchical information clearly improves performance (answering Q3).

[Figure 3: Comparison of search strategies on the hierarchical search space. We plot the mean and ±1 standard error of the validation error on the hierarchical NAS-Bench-201 search space on CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST for our search strategy BANAT (solid blue), RS (dashed orange), RE (dotted green), and BANAT (WL) (dash-dotted red). We report test errors, best architectures, and conduct further analyses in Appendix I.2.]

Further, the evaluation of surrogate performance in Figure 4 shows that incorporating hierarchical information with our hierarchical WL kernel (hWL) improves modeling, especially on smaller amounts of training data (further answering Q3). Table 1 shows that the baseline zero-cost proxies flops and l2-norm yield competitive (or often superior) results compared to more sophisticated zero-cost proxies, making hierarchical search spaces an interesting future research direction for them (answering Q4). Finally, Table 2 shows that we can find novel well-performing activation functions from basic mathematical operations with BANAT (answering Q5).

6 DISCUSSION AND LIMITATIONS

While our grammar-based construction mechanism is a powerful mechanism to construct huge hierarchical search spaces, we cannot construct arbitrary architectures with our grammar-based construction approach (Sections 2.2 and 2.3) since we are limited to context-free languages; e.g., architectures of the type {a^n b^n c^n | n ∈ N_{>0}} cannot be generated by CFGs (this can be proven using Ogden's lemma (Ogden, 1968)). Further, due to the discrete nature of CFGs we cannot easily integrate continuous design choices, e.g., dropout probability. Furthermore, our grammar-based mechanism does not (generally) support simple scalability of discovered neural architectures (e.g., repetition of building blocks) without special consideration in the search space design.
Nevertheless, our search spaces still significantly increase the expressiveness, including the ability to represent common search spaces from the literature (see Appendix E for how we can represent the search spaces of DARTS, Auto-DeepLab, the hierarchical cell search space of Liu et al. (2018), the Mobile-net search space, and the hierarchical random graph generator search space), as well as allowing search for entire neural architectures based around the popular NAS-Bench-201 search space (Section 5). Thus, our search space design can facilitate the discovery of novel well-performing neural architectures in those huge search spaces of algebraic architecture terms. However, there is an inherent trade-off between the expressiveness and the difficulty of search. The much greater expressiveness facilitates search in a richer set of architectures that may include architectures better than those in more restrictive search spaces; however, such architectures need not exist. Besides that, the (potential) existence of such a well-performing architecture does not mean that a search strategy will discover it, even with large amounts of computing power available. Note that the trade-off also manifests itself in the acquisition function optimization of our search strategy BANAT. In addition, a well-performing neural architecture may not work with current training protocols and hyperparameters due to interaction effects, i.e., training protocols and hyperparameters may be over-optimized for specific types of neural architectures. To overcome this limitation, one could consider a joint optimization of neural architectures, training protocols, and hyperparameters. However, this further fuels the trade-off between expressiveness and the difficulty of search.

7 CONCLUSION

We introduced very expressive search spaces of algebraic architecture terms constructed with CFGs. To efficiently search over the huge search spaces, we proposed BANAT, an efficient BO strategy with a tailored kernel leveraging the available hierarchical information. Our experiments indicate that both our search space design and our search strategy can yield strong performance over existing baselines. Our results motivate further steps towards the discovery of neural architectures based on even more atomic primitive computations. Furthermore, future works could (simultaneously) learn the search space (i.e., learn the grammar) or improve search efficiency by means of multi-fidelity optimization or gradient-based search strategies.

REPRODUCIBILITY STATEMENT

To ensure reproducibility, we address all points of the best practices checklist for NAS research (Lindauer & Hutter, 2020) in Appendix K.

ETHICS STATEMENT

NAS has immense potential to facilitate the systematic, automated discovery of high-performing (novel) architecture designs. However, the restrictive cell-based search spaces most commonly used in NAS render it impossible to discover truly novel neural architectures. With our general formalism based on algebraic terms, we hope to provide a fertile foundation towards discovering high-performing and efficient architectures, potentially from scratch. However, search in such huge search spaces is expensive, particularly in the context of the ongoing detrimental climate crisis.
While, on the one hand, the discovered neural architectures, like other AI technologies, could potentially be exploited to negative societal ends, on the other hand, our work could also lead to advances across scientific disciplines like healthcare and chemistry.

A FROM TERMINALS TO PRIMITIVE COMPUTATIONS AND TOPOLOGICAL OPERATORS

Table 3 and Figure 5 describe the primitive computations and topological operators used throughout our experiments in Section 5 and Appendix I, respectively. Note that by adding more primitive computations and/or topological operators we could construct even more expressive search spaces.

B EXTENDED BACKUS-NAUR FORM

The (extended) Backus-Naur form (Backus, 1959) is a meta-language to describe the syntax of CFGs. We use meta-rules of the form S ::= α, where S ∈ N is a nonterminal and α ∈ (N ∪ Σ)* is a string of nonterminals and/or terminals. We denote nonterminals in UPPER CASE, terminals corresponding to topological operators in Initial upper case/teletype, and terminals corresponding to primitive computations in lower case/teletype, e.g., S ::= Residual(S, S, id). To compactly express production rules with the same left-hand side nonterminal, we use the vertical bar | to indicate a choice among production rules with the same left-hand side, e.g., S ::= Linear(S, S, S) | Residual(S, S, id) | conv.

C SEARCH SPACE SIZE

In this section, we show how to efficiently compute the size of our search spaces constructed by CFGs. There are two cases to consider: (i) a CFG contains cycles (i.e., part of the derivation can be repeated infinitely many times), yielding an open-ended, infinite search space; and (ii) a CFG contains no cycles, yielding a finite search space whose size we can compute. Consider a production A → Residual(B, B, B), where Residual is a terminal, and A and B are nonterminals with B → conv | id. Consequently, there are 2^3 = 8 possible instances of the residual block. If we add another production choice for the nonterminal A, e.g., A → Linear(B, B, B), we would have 2^3 + 2^3 = 16 possible instances. Further, adding a production C → Linear(A, A, A) would yield a search space size of (2^3 + 2^3)^3 = 4096. More generally, we introduce the function P_A that returns the set of productions for a nonterminal A ∈ N, and the function µ : P → N that returns all the nonterminals of a production p ∈ P. We can then recursively compute the size of the search space as follows:

f(A) = Σ_{p ∈ P_A} [ 1 if µ(p) = ∅, else ∏_{A' ∈ µ(p)} f(A') ] .   (11)

When a CFG contains some constraint, we only account for valid architectures (i.e., those compliant with the constraints) by ignoring productions which would lead to invalid architectures.

D MORE DETAILS ON SEARCH SPACE CONSTRAINTS

During the design of the search space, we may want to comply with some constraints, e.g., only consider valid neural architectures or impose structural constraints on architectures. We can guarantee compliance with constraints by modifying sampling (and evolution): we only allow the application of production rules that guarantee compliance with the constraint(s). In the following, we show by example how this can be implemented for the former constraint mentioned above. Note that other constraints can be implemented in a similar manner. To implement the constraint "only consider valid neural architectures", we note that our search space design only creates neural architectures where neither the spatial resolution nor the channels can be mismatched; please refer to Section 2.3 for details.
Thus, the only way a neural architecture can become invalid is through zero operations, which could remove edges from the computational graph and possibly disassociate the input from the output. Since we recursively assemble neural architectures, it is sufficient to ensure that the derived algebraic architecture term (i.e., the associated computational graph) is compliant with the constraint, i.e., there is at least one path from input to output. Thus, during sampling (and similarly during evolution), we modify the current production rule choices when an application of the zero operation would disassociate the input from the output.

E COMMON SEARCH SPACES FROM THE LITERATURE

In Section 5.1, we demonstrated how to construct the popular NAS-Bench-201 search space within our algebraic search space design, and below we show how to reconstruct the following popular search spaces: the DARTS search space (Liu et al., 2019b), the Auto-DeepLab search space (Liu et al., 2019a), the hierarchical cell search space (Liu et al., 2018), the Mobile-net search space (Tan et al., 2019), and the hierarchical random graph generator search space (Ru et al., 2020). For implementation details we refer to the respective works.

DARTS SEARCH SPACE

The DARTS search space (Liu et al., 2019b) consists of a fixed macro architecture and a cell, i.e., a seven-node directed acyclic graph (Darts; see Figure 6 for the topological operator). We omit the fixed macro architecture from our search space design for simplicity. Each cell receives the feature maps from the two preceding cells as input and outputs a single feature map. All intermediate nodes (i.e., Node3, Node4, Node5, and Node6) are computed based on all of their predecessors. Thus, we can define the DARTS search space as follows:

DARTS ::= Darts(NODE3, NODE4, NODE5, NODE6)
NODE3 ::= Node3(OP, OP)
NODE4 ::= Node4(OP, OP, OP)
NODE5 ::= Node5(OP, OP, OP, OP)
NODE6 ::= Node6(OP, OP, OP, OP, OP)
OP ::= sep conv 3x3 | sep conv 5x5 | dil conv 3x3 | dil conv 5x5 | max pool | avg pool | id | zero ,   (12)

where the topological operator Node3 receives two inputs, applies the operations separately to them, and sums them up. Similarly, Node4, Node5, and Node6 apply their operations separately to the given inputs and sum them up. The topological operator Darts feeds the corresponding feature maps into each of those topological operators and finally concatenates all intermediate feature maps.

AUTO-DEEPLAB SEARCH SPACE

Auto-DeepLab (Liu et al., 2019a) combines a cell-level with a network-level search space to search for segmentation networks, where the cell is shared across the searched macro architecture, i.e., a twelve-step (linear) path across different spatial resolutions. The cell-level design is adopted from Liu et al. (2019b) and, thus, we can re-use the CFG from Equation 12. For the network level, we introduce a constraint that ensures that the path is of length twelve, i.e., we ensure exactly twelve derivations in our CFG. Further, we overload the nonterminals so that they correspond to the respective spatial resolution level, e.g., D4 indicates that the original input is downsampled by a factor of four; please refer to Section 2.3 for details on overloading nonterminals.
For the sake of simplicity, we omit the first two layers and the atrous spatial pyramid poolings as they are fixed, and hence define the network-level search space as follows:

D4 ::= Same(CELL, D4) | Down(CELL, D8)
D8 ::= Up(CELL, D4) | Same(CELL, D8) | Down(CELL, D16)
D16 ::= Up(CELL, D8) | Same(CELL, D16) | Down(CELL, D32)
D32 ::= Up(CELL, D16) | Same(CELL, D32) ,   (13)

where the topological operators Up, Same, and Down upsample, keep, or downsample the spatial resolution (and halve, keep, or double the number of channels), respectively. The placeholder variable CELL maps to the shared DARTS cell from the language generated by the CFG from Equation 12.

HIERARCHICAL CELL SEARCH SPACE

The hierarchical cell search space (Liu et al., 2018) consists of a fixed (linear) macro architecture and a hierarchically assembled cell with three levels which is shared across the macro architecture. Thus, we can omit the fixed macro architecture from our search space design for simplicity. Their first, second, and third hierarchical levels correspond to the primitive computations (i.e., id, max pool, avg pool, sep conv, depth conv, conv, zero), six densely connected four-node directed acyclic graphs (DAG4), and a densely connected five-node directed acyclic graph (DAG5), respectively. The zero operation could lead to directed acyclic graphs which have fewer nodes. Therefore, we introduce a constraint enforcing that there are always four (level 2) or five (level 3) nodes for every directed acyclic graph. Further, since a densely connected five-node directed acyclic graph has ten edges, we need to introduce placeholder variables (i.e., M1, ..., M6) to enforce that only six (possibly) different four-node directed acyclic graphs are used, and consequently define a CFG for the third level

LEVEL3 ::= DAG5(LEVEL2, ..., LEVEL2)   (×10 arguments)
LEVEL2 ::= M1 | M2 | M3 | M4 | M5 | M6 | zero ,   (14)

mapping the placeholder variables M1, ..., M6 to the six lower-level motifs constructed by the first and second hierarchical level

LEVEL2 ::= DAG4(LEVEL1, ..., LEVEL1)   (×6 arguments)
LEVEL1 ::= id | max pool | avg pool | sep conv | depth conv | conv | zero .   (15)

MOBILE-NET SEARCH SPACE

Factorized hierarchical search spaces, e.g., the Mobile-net search space (Tan et al., 2019), allow for layer diversity. They factorize a (fixed) macro architecture – often based on an already well-performing reference architecture – into separate blocks (e.g., cells). For the sake of simplicity, we assume here a macro architecture of three sequential blocks (Block) arranged linearly (Linear). In each of those blocks, we search for the convolution operations (CONV), kernel sizes (KSIZE), squeeze-and-excitation ratio (SERATIO) (Hu et al., 2018), skip connections (SKIP), number of output channels (FSIZE), and number of layers per block (#LAYERS), where the latter two are discretized using a reference architecture, e.g., MobileNetV2 (Sandler et al., 2018). Consequently, we can express this search space as follows:

MACRO ::= Linear(BLOCK, BLOCK, BLOCK)
BLOCK ::= Block(CONV, KSIZE, SERATIO, SKIP, FSIZE, #LAYERS)
CONV ::= conv | dconv | mbconv
KSIZE ::= 3 | 5
SERATIO ::= 0 | 0.25
SKIP ::= pooling | id residual | no skip
FSIZE ::= 0.75 | 1.0 | 1.25
#LAYERS ::= -1 | 0 | 1 ,   (16)

where conv, dconv, and mbconv correspond to convolution, depthwise convolution, and mobile inverted bottleneck convolution (Sandler et al., 2018), respectively.
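To make the grammar-based machinery above concrete, the following minimal Python sketch encodes a heavily simplified, acyclic Mobile-net-like grammar as a plain dictionary and implements both uniform sampling of algebraic architecture terms and the recursive search-space-size computation of Equation 11. The dictionary encoding, the function names, and the reduced set of design choices are illustrative assumptions on our part and not part of the released APIs.

```python
import random

# Toy, acyclic grammar loosely inspired by the Mobile-net-like space of Eq. (16):
# each production is (operator or terminal name, list of right-hand-side nonterminals).
GRAMMAR = {
    "MACRO": [("Linear", ["BLOCK", "BLOCK", "BLOCK"])],
    "BLOCK": [("Block", ["CONV", "KSIZE", "SKIP"])],
    "CONV":  [("conv", []), ("dconv", []), ("mbconv", [])],
    "KSIZE": [("3", []), ("5", [])],
    "SKIP":  [("pooling", []), ("id residual", []), ("no skip", [])],
}

def space_size(nonterminal):
    """Search-space size following Eq. (11): sum over the productions of a nonterminal;
    terminal productions count 1, otherwise multiply the sizes of the right-hand-side
    nonterminals. Only valid for acyclic grammars (cyclic ones are infinite)."""
    total = 0
    for _, children in GRAMMAR[nonterminal]:
        if not children:
            total += 1
        else:
            size = 1
            for child in children:
                size *= space_size(child)
            total += size
    return total

def sample(nonterminal):
    """Sample an algebraic architecture term by recursively expanding nonterminals
    with uniformly chosen productions."""
    name, children = random.choice(GRAMMAR[nonterminal])
    if not children:
        return name
    return f"{name}({', '.join(sample(child) for child in children)})"

if __name__ == "__main__":
    print(space_size("MACRO"))  # (3 * 2 * 3) ** 3 = 5832 algebraic terms
    print(sample("MACRO"))      # e.g., Linear(Block(mbconv, 3, no skip), ...)
```

Constraints in the spirit of Appendix D could be layered on top of such a sketch by filtering the admissible productions inside sample before drawing.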
HIERARCHICAL RANDOM GRAPH GENERATOR SEARCH SPACE

The hierarchical random graph generator search space (Ru et al., 2020) consists of three hierarchical levels of random graph generators (i.e., Watts-Strogatz (Watts & Strogatz, 1998) and Erdős-Rényi (Erdős et al., 1960)). We denote with Watts-Strogatz_i the random graph generated by the Watts-Strogatz model with i nodes. Thus, we can represent the search space as follows:

TOP ::= Watts-Strogatz_3(K, Pt)(MID, MID, MID) | ... | Watts-Strogatz_10(K, Pt)(MID, ..., MID)   (×10 arguments)
MID ::= Erdős-Rényi_1(Pm)(BOT) | ... | Erdős-Rényi_10(Pm)(BOT, ..., BOT)   (×10 arguments)
BOT ::= Watts-Strogatz_3(K, Pb)(NODE, NODE, NODE) | ... | Watts-Strogatz_10(K, Pb)(NODE, ..., NODE)   (×10 arguments)
K ::= 2 | 3 | 4 | 5 ,   (17)

where each terminal Pt, Pm, and Pb maps to a continuous number in [0.1, 0.9]¹ and the placeholder variable NODE maps to a primitive computation, e.g., separable convolution. Note that we omit other hyperparameters, such as stage ratio, channel ratio, etc., for simplicity.

Algorithm 1: Bayesian Optimization algorithm (Brochu et al., 2010).
Input: Initial observed data D_t, a black-box objective function f, total number of BO iterations T
Output: The best recommendation about the global optimizer x*
for t = 1, ..., T do
    Select the next x_{t+1} by maximizing the acquisition function α(x|D_t)
    Evaluate the objective function: f_{t+1} = f(x_{t+1})
    D_{t+1} ← D_t ∪ {(x_{t+1}, f_{t+1})}
    Update the surrogate model with D_{t+1}
end for

F MORE DETAILS ON THE SEARCH STRATEGY

In this section, we provide more details and examples for our search strategy Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT) presented in Section 3.

F.1 BAYESIAN OPTIMIZATION

Bayesian Optimization (BO) is a powerful family of search techniques for finding the global optimum of a black-box objective problem. It is particularly useful when the objective is expensive to evaluate and thus sample efficiency is highly important (Brochu et al., 2010). To minimize a black-box objective problem with BO, we first need to build a probabilistic surrogate to model the objective based on the data observed so far. Based on the surrogate model, we design an acquisition function to evaluate the utility of potential candidate points by trading off exploitation (where the posterior mean of the surrogate model is low) and exploration (where the posterior variance of the surrogate model is high). The next candidate point to evaluate is then selected by maximizing the acquisition function (Shahriari et al., 2015). The general procedure of BO is summarized in Algorithm 1. We adopted the widely used acquisition function, expected improvement (EI) (Mockus et al., 1978), in our BO strategy. EI evaluates the expected amount of improvement of a candidate point x over the minimal value f′ observed so far. Specifically, denoting the improvement function as I(x) = max(0, f′ − f(x)), the EI acquisition function has the form

α_EI(x|D_t) = E[I(x)|D_t] = ∫_{−∞}^{f′} (f′ − f) N(f; µ(x|D_t), σ²(x|D_t)) df
            = (f′ − µ(x|D_t)) Φ(f′; µ(x|D_t), σ²(x|D_t)) + σ²(x|D_t) ϕ(f′; µ(x|D_t), σ²(x|D_t)) ,

where µ(x|D_t) and σ²(x|D_t) are the mean and variance of the predictive posterior distribution at a candidate point x, and ϕ(·; µ, σ²) and Φ(·; µ, σ²) denote the PDF and CDF of the normal distribution with mean µ and variance σ², respectively.
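As a small sanity check of the closed form above, a sketch of EI for minimization under a Gaussian posterior could look as follows; the function name and the use of scipy are our own illustrative choices, not part of the released implementation.

```python
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization: E[max(0, f_best - f(x))] when the GP
    posterior at the candidate point is N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(0.0, f_best - mu)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: incumbent validation error 8.0%, posterior mean 7.5%, posterior std 1.0%.
print(expected_improvement(mu=7.5, sigma=1.0, f_best=8.0))
```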
To make use of ample distributed computing resources, we adopted Kriging Believer (Ginsbourger et al., 2010), which uses the predictive posterior of the surrogate model to assign hallucinated function values {f̃_p}_{p∈{1,...,P}} to the P candidate points with pending evaluations {x̃_p}_{p∈{1,...,P}}, and performs the next BO recommendation in the batch by pseudo-augmenting the observation data with D̃_P = {(x̃_p, f̃_p)}_{p∈{1,...,P}}, namely D̃_t = D_t ∪ D̃_P. The algorithm of Kriging Believer at one BO iteration to select a batch of recommended candidate points is summarized in Algorithm 2.

¹Theoretically, this is not possible with CFGs. However, we can extend the notion of substitution by substituting a string representation of a Python (float) variable for the placeholder variables Pt, Pm, and Pb.

Algorithm 2: Kriging Believer algorithm to select one batch of points.
Input: Observation data D_t, batch size b
Output: The batch points B_{t+1} = {x_{t+1}^{(1)}, ..., x_{t+1}^{(b)}}
D̃_t = D_t ∪ D̃_P
for j = 1, ..., b do
    Select the next x_{t+1}^{(j)} by maximizing the acquisition function α(x|D̃_t)
    Compute the predictive posterior mean µ(x_{t+1}^{(j)}|D̃_t)
    D̃_t ← D̃_t ∪ {(x_{t+1}^{(j)}, µ(x_{t+1}^{(j)}|D̃_t))}
end for

Algorithm 3: Weisfeiler-Lehman subtree kernel computation (Shervashidze et al., 2011).
Input: Graphs G_1, G_2, maximum iterations H
Output: Kernel function value between the graphs
Initialize the feature vectors ϕ(G_1) = ϕ_0(G_1), ϕ(G_2) = ϕ_0(G_2) with the respective counts of original node labels (i.e., the h = 0 WL features)
for h = 1, ..., H do
    Assign a multiset M_h(v) = {l_{h−1}(u) | u ∈ N(v)} to each node v ∈ G, where l_{h−1} is the node label function of the (h−1)-th WL iteration and N is the node neighbor function
    Sort the elements in the multiset M_h(v) and concatenate them to the string s_h(v)
    Add l_{h−1}(v) as a prefix to s_h(v)
    Compress each string s_h(v) using the hash function f such that f(s_h(v)) = f(s_h(u)) ⇔ s_h(v) = s_h(u)
    Concatenate the WL features ϕ_h(G_1), ϕ_h(G_2) with the respective counts of the new labels: ϕ(G_1) = [ϕ(G_1), ϕ_h(G_1)], ϕ(G_2) = [ϕ(G_2), ϕ_h(G_2)]
    Set l_h(v) := f(s_h(v)) for all v ∈ G
end for
Compute the inner product k = ⟨ϕ(G_1), ϕ(G_2)⟩ between the WL features ϕ(G_1), ϕ(G_2) in the RKHS H

F.2 HIERARCHICAL WEISFEILER-LEHMAN KERNEL

Inspired by Ru et al. (2021), we adopted the Weisfeiler-Lehman (WL) graph kernel (Shervashidze et al., 2011) in the GP surrogate model to handle the graph nature of neural architectures. The basic idea of the WL kernel is to first compare node labels, and then to iteratively aggregate the labels of neighboring nodes, compress them into a new label, and compare them. Algorithm 3 summarizes the WL kernel procedure. Ru et al. (2021) identified three reasons for using the WL kernel: (1) it is able to compare labeled and directed graphs of different sizes, (2) it is expressive, and (3) it is relatively efficient and scalable. Our search space design can afford a diverse spectrum of neural architectures with very heterogeneous topological structure. Therefore, reason (1) is a very important property of the WL kernel to account for the diversity of neural architectures. Moreover, if we allow many hierarchical levels, we can construct very large neural architectures. Therefore, reasons (2) and (3) are essential for accurate and fast modeling. However, neural architectures in our search spaces may be significantly larger, which makes it difficult for a single WL kernel to capture the more global topological patterns. Moreover, modeling solely based on the final neural architecture ignores the useful macro-level information from earlier hierarchical levels.
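The following minimal sketch illustrates both the plain WL feature extraction of Algorithm 3 (in a simplified form) and the weighted, per-level combination that is formalized in Equation 19 below. The graph encoding, the toy fold graphs, and the fixed weights are illustrative assumptions; in particular, for brevity we place operation labels on nodes rather than edges, whereas the actual kernel operates on edge-labeled graphs.

```python
from collections import Counter

def wl_features(node_labels, edges, iterations=2):
    """Simplified WL subtree features (cf. Algorithm 3): iteratively relabel each node
    with its own label plus the sorted multiset of its neighbors' labels, and count
    every label observed across all iterations."""
    neighbors = {v: [] for v in node_labels}
    for u, v in edges:  # treat edges as undirected for this toy example
        neighbors[u].append(v)
        neighbors[v].append(u)
    labels = dict(node_labels)
    features = Counter(labels.values())
    for _ in range(iterations):
        labels = {v: labels[v] + "|" + ".".join(sorted(labels[u] for u in neighbors[v]))
                  for v in labels}
        features.update(labels.values())
    return features

def wl_kernel(graph_a, graph_b, iterations=2):
    """WL kernel value: inner product of the two WL feature histograms."""
    fa = wl_features(*graph_a, iterations=iterations)
    fb = wl_features(*graph_b, iterations=iterations)
    return sum(fa[label] * fb[label] for label in fa.keys() & fb.keys())

def hierarchical_wl_kernel(folds_a, folds_b, weights):
    """Hierarchical WL kernel: weighted sum of WL kernels over the graphs of the
    gradually unfolded architecture terms, one graph per hierarchical level."""
    return sum(w * wl_kernel(ga, gb) for w, ga, gb in zip(weights, folds_a, folds_b))

# Toy stand-ins for the labeled graphs of the level-2 and level-3 folds of omega;
# a graph is a pair (node labels, edge list).
level2 = ({0: "in", 1: "Residual", 2: "Residual", 3: "fc", 4: "out"},
          [(0, 1), (1, 2), (2, 3), (3, 4)])
level3 = ({0: "in", 1: "conv", 2: "id", 3: "conv", 4: "id", 5: "fc", 6: "out"},
          [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5), (5, 6)])
print(hierarchical_wl_kernel([level2, level3], [level2, level3], weights=[0.7, 0.3]))
```

In the full method, the per-level weights (the λ_l of Equation 19) are treated as GP hyperparameters and tuned by maximizing the marginal likelihood rather than fixed as above.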
In our experiments (Section 5 and I), we have found stronger neural architectures by incorporating the hierarchical information in the kernel design, which provides experimental support for the above arguments. However, modeling solely based on the (standard) WL graph kernel neglects the useful hierarchical information from our assembly process. Moreover, the large size of the neural architectures still makes it challenging to capture the more global topological patterns. We therefore propose to use hierarchical information through a hierarchy of WL graph kernels that take into account the different granularities of the architectures and to combine them in a weighted sum. To obtain the different granularities, we use the fold operators F_l that remove algebraic terms beyond the l-th hierarchical level. Thereby, we obtain the folds

F3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) ,
F2(ω) = Linear(Residual, Residual, fc) ,   (18)
F1(ω) = Linear ,

for the algebraic architecture term ω. Note that we ignore the first fold since it does not represent a labeled DAG. Figure 7 visualizes the labeled graphs Φ(F2(ω)) and Φ(F3(ω)) of the folds F2(ω) and F3(ω), respectively. These graphs can be fed into (standard) WL graph kernels. Therefore, we can construct a hierarchy of WL graph kernels k_WL as follows:

k_hWL(ω_i, ω_j) = Σ_{l=2}^{L} λ_l · k_WL(Φ(F_l(ω_i)), Φ(F_l(ω_j))) ,   (19)

where ω_i and ω_j are two algebraic architecture terms. Note that the λ_l govern the importance of the learned graph information across the hierarchical levels and can be optimized through the marginal likelihood.

F.3 EXAMPLES FOR THE EVOLUTIONARY OPERATIONS

For the evolutionary operations, we adopted ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). In the following, we show how these evolutionary operations manipulate algebraic terms, e.g.,

Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) ,   (20)

from the search space

S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc ,   (21)

to generate evolved algebraic terms. Figure 1 shows how we can derive the algebraic term in Equation 20 from the search space in Equation 21. For mutation operations, we first randomly pick a subterm of the algebraic term, e.g., Residual(conv, id, conv). Then, we randomly sample a new subterm with the same nonterminal symbol S as start symbol, e.g., Linear(conv, id, fc), and replace the previous subterm, yielding

Linear(Linear(conv, id, fc), Residual(conv, id, conv), fc) .   (22)

For (self-)crossover operations, we swap two subterms, e.g., Residual(conv, id, conv) and Residual(conv, id, conv), with the same nonterminal S as start symbol, yielding

Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) .   (23)

Note that unlike the commonly used crossover operation, which uses two parents, self-crossover has only one parent. In future work, we could also add a self-copy operation that copies a subterm to another part of the algebraic term, explicitly regularizing diversity and thus potentially speeding up the search.

G RELATED WORK BEYOND NEURAL ARCHITECTURE SEARCH

While our work focuses exclusively on NAS, we discuss below how it relates to the areas of optimizer search (as well as from-scratch automated machine learning) and neural-symbolic programming. Optimizer search is a field closely related to NAS, where we automatically search for an optimizer (i.e., an update function for the weights) instead of an architecture. Initial works used learnable parametric or non-parametric optimizers.
While the former approaches (Andrychowicz et al., 2016; Li & Malik, 2017; Chen et al., 2017; 2022a) have poor scalability and generality, the latter works overcome those limitations. Bello et al. (2017) searched for an instantiation of hand-crafted patterns via reinforcement learning, while Wang et al. (2022) proposed a tree-structured search space² and searched for optimizers via a modified Monte Carlo sampling approach. AutoML-Zero (Real et al., 2020) took an even more general approach by searching over entire machine learning algorithms, including optimizers, from a generic search space built from basic mathematical operations with an evolutionary algorithm. Chen et al. (2022b) used RE to discover optimizers from a generic search space (inspired by AutoML-Zero) for training vision transformers (Dosovitskiy et al., 2021). Complementary to the above, there is recent interest in automatically synthesizing programs from domain-specific languages. Gaunt et al. (2017) proposed a hand-crafted program template and simultaneously optimized the parameters of the differentiable program with gradient descent. The HOUDINI framework (Valkov et al., 2018) proposed type-directed (top-down) enumeration and evolution approaches over differentiable functional programs. Shah et al. (2020) hierarchically assembled differentiable programs and used neural networks for the approximation of missing expressions in partial programs. Cui & Zhu (2021) treated CFGs stochastically with trainable production rule sampling weights, which were optimized with a gradient-based approach (Liu et al., 2019b). However, naïvely applying gradient-based approaches does not work in our search spaces due to the exponential explosion of supernet weights; nevertheless, this remains an interesting direction for future work. Compared to these lines of work, we extended CFGs to handle changes in spatial resolution, promote regularity, and (compared to most of them) incorporate constraints, the latter two of which could also be applied in those domains. We also proposed a BO search strategy to search efficiently, with a tailored kernel design to handle the hierarchical nature of the search space (i.e., the architectures).

H IMPLEMENTATION DETAILS OF THE SEARCH STRATEGIES

BANAT & BANAT (WL). The only difference between BANAT and BANAT (WL) is that the former uses our proposed hierarchy of WL kernels (hWL), whereas the latter only uses a single WL kernel (WL) for the entire architecture (cf. Ru et al., 2021). We ran BANAT asynchronously in parallel throughout our experiments with a batch size of B = 1, i.e., at each BO iteration a single architecture is proposed for evaluation. For the acquisition function optimization, we used a pool size of P = 200, where the initial population consisted of the current ten best-performing architectures and the remainder were randomly sampled architectures to encourage exploration in the huge search spaces. During evolution, the mutation probability was set to pmut = 0.5 and the crossover probability was set to pcross = 0.5. Half of the crossovers were self-crossovers of one parent and the other half were common crossovers between two parents. The tournament selection probability was set to ptour = 0.2. We evolved the population for at least ten and at most 50 iterations, using an early stopping criterion based on the fitness value improvements over the last five iterations.
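A minimal sketch of the grammar-guided evolutionary loop used for acquisition function optimization is given below, assuming the toy grammar of Equation 21; the annotated tree encoding, the function names, the mock acquisition function, and the omission of (self-)crossover and Kriging Believer are simplifying assumptions for illustration only, not the released implementation.

```python
import copy
import random

# Toy cyclic grammar (cf. Eq. 21): nonterminal -> list of (operator, child nonterminals).
GRAMMAR = {"S": [("Linear", ["S", "S", "S"]), ("Residual", ["S", "S", "S"]),
                 ("conv", []), ("id", []), ("fc", [])]}

def sample(nonterminal="S", depth=0, max_depth=3):
    """Sample an annotated term tree; every node remembers its nonterminal so that
    evolutionary operations stay inside the language of the grammar."""
    productions = GRAMMAR[nonterminal]
    if depth >= max_depth:  # force terminal productions at the depth limit
        productions = [p for p in productions if not p[1]]
    operator, children = random.choice(productions)
    return {"nt": nonterminal, "op": operator,
            "children": [sample(c, depth + 1, max_depth) for c in children]}

def subterms(term):
    yield term
    for child in term["children"]:
        yield from subterms(child)

def mutate(term):
    """Replace a randomly chosen subterm by a freshly sampled term with the same nonterminal."""
    target = random.choice(list(subterms(term)))
    target.update(sample(target["nt"]))
    return term

def to_string(term):
    if not term["children"]:
        return term["op"]
    return f"{term['op']}({', '.join(to_string(c) for c in term['children'])})"

def optimize_acquisition(acquisition, pool_size=200, generations=10, p_mut=0.5, tournament_frac=0.2):
    """Evolve a pool of candidate terms and return the one maximizing the acquisition value."""
    pool = [sample() for _ in range(pool_size)]
    for _ in range(generations):
        tournament = random.sample(pool, max(2, int(tournament_frac * len(pool))))
        parent = max(tournament, key=acquisition)
        child = mutate(copy.deepcopy(parent)) if random.random() < p_mut else sample()
        pool.append(child)
    return max(pool, key=acquisition)

# Mock acquisition value: prefer larger terms (a stand-in for expected improvement).
best = optimize_acquisition(lambda t: sum(1 for _ in subterms(t)))
print(to_string(best))
```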
Regularized Evolution (RE). RE (Real et al., 2019; Liu et al., 2018) iteratively mutates the best architectures out of a sample of the population. We reduced the population size from 50 to 30 to account for fewer evaluations, and used a sample size of 10. We also ran RE asynchronously for better comparability.

I SEARCHING THE HIERARCHICAL NAS-BENCH-201 SEARCH SPACE

In this section, we provide training details (Section I.1), complementary results, and extensive analyses (Section I.2).

²Note that the tree-structured search space can equivalently be described with a CFG (with a constraint on the maximum depth of the syntax trees).

I.1 TRAINING DETAILS

Training protocol. We evaluated all search strategies on CIFAR-10/100 (Krizhevsky et al., 2009), ImageNet-16-120 (Chrabaszcz et al., 2017), CIFARTile, and AddNIST (Geada et al., 2021). Note that CIFARTile and AddNIST are novel datasets and therefore have not yet been optimized by the research community. We provide further dataset details below. For training of architectures on CIFAR-10/100 and ImageNet-16-120, we followed Dong & Yang (2020). We trained architectures with SGD with a learning rate of 0.1, Nesterov momentum of 0.9, weight decay of 0.0005 with cosine annealing (Loshchilov & Hutter, 2019), and a batch size of 256 for 200 epochs. The initial channels were set to 16. For both CIFAR-10 and CIFAR-100, we used a random flip with probability 0.5 followed by a random crop (32x32 with 4 pixel padding) and normalization. For ImageNet-16-120, we used a 16x16 random crop with 2 pixel padding instead. For training of architectures on AddNIST and CIFARTile, we followed the training protocol from the CVPR-NAS 2021 competition (Geada et al., 2021): we trained architectures with SGD with a learning rate of 0.01, momentum of 0.9, weight decay of 0.0003 with cosine annealing, and a batch size of 64 for 64 epochs. We set the initial channels to 16 and did not apply any further data augmentation.

Dataset details. In Table 4, we provide the licenses for the datasets used in our experiments. For training of architectures on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-16-120 (Chrabaszcz et al., 2017), we followed the dataset splits and training protocol of NAS-Bench-201 (Dong & Yang, 2020). For CIFAR-10, we split the original training set into a new training set with 25k images and a validation set with 25k images, following Dong & Yang (2020). The test set remained unchanged. For evaluation, we trained architectures on both the training and validation set. For CIFAR-100, the training set remained unchanged, but the test set was partitioned into a validation set and a new test set with 5K images each. For ImageNet-16-120, all splits remained unchanged. For AddNIST and CIFARTile, we used the training, validation, and test splits as defined in the CVPR-NAS 2021 competition (Geada et al., 2021).

I.2 EXTENDED SEARCH RESULTS AND ANALYSES

Supplementary to Figure 2, Figure 8 compares the cell-based vs. hierarchical NAS-Bench-201 search space from Section 5.1 using RS, RE, and BANAT (WL). The cell-based search space design shows on par or stronger performance on all datasets except for CIFARTile for the three search strategies. In contrast, for our proposed search strategy BANAT we find on par (CIFAR-10/100) or superior (ImageNet-16-120, CIFARTile, and AddNIST) performance using the hierarchical search space design.
This clearly shows that increasing the size of the search space does not necessarily yield the discovery of stronger neural architectures. Further, it exemplifies the importance of a strong search strategy to search effectively and efficiently in huge hierarchical search spaces (Q2), and provides further evidence that the incorporation of hierarchical information is a key contributor to search efficiency (Q3). Based on this, we believe that future work using, e.g., graph neural networks as a surrogate may benefit from the incorporation of hierarchical information. We report the test errors of our best found architectures in Table 5. We observe that our search strategy BANAT finds the strongest-performing architectures across all datasets (Q2, Q3). Also note that we achieve better (validation and) test performance on ImageNet-16-120 on the hierarchical search space than the state-of-the-art search strategy on the cell-based NAS-Bench-201 search space (i.e., +0.37%p compared to Shapley-NAS (Xiao et al., 2022)) (Q1).

[Figure 8: validation error [%] over the number of evaluations on the cell-based vs. hierarchical NAS-Bench-201 search space for CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST. (a) Random Search (RS). (b) Regularized Evolution (RE).]

Table 5: Test errors (and ±1 standard error) of popular baseline architectures (e.g., ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019) variants), and our best found architectures on the cell-based and hierarchical NAS-Bench-201 search space. Note that we picked the ResNet and EfficientNet variant based on the test error, consequently giving an overestimate of their test performance. † optimal numbers as reported in Dong & Yang (2020). We report the (best) test error (and ±1 standard error) across three seeds {777, 888, 999} of the best architecture of the three search runs with the lowest validation error. For the search strategies, the two values per dataset refer to the cell-based / hierarchical search space.

Method | CIFAR-10 | CIFAR-100 | ImageNet-16-120 | CIFARTile | AddNIST
Best ResNet (He et al., 2016) | 6.49±0.24 (32) | 27.10±0.67 (110) | 53.67±0.18 (56) | 57.80±0.57 (18) | 7.78±0.05 (34)
Best EfficientNet (Tan & Le, 2019) | 11.73±0.10 (B0) | 35.17±0.42 (B6) | 77.73±0.29 (B0) | 61.01±0.62 (B0) | 13.24±0.58 (B1)
NAS-Bench-201 oracle† | 5.63 | 26.49 | 52.69 | - | -
RS | 6.39±0.18 / 6.77±0.10 | 28.75±0.57 / 29.49±0.57 | 54.83±0.78 / 54.70±0.82 | 52.72±0.45 / 40.93±0.81 | 7.82±0.36 / 8.05±0.29
NASWOT (N=10) (Mellor et al., 2021) | 6.55±0.10 / 8.18±0.46 | 29.35±0.53 / 31.73±0.96 | 56.80±1.35 / 58.66±0.29 | 41.83±2.29 / 49.46±2.95 | 10.11±0.69 / 11.81±1.55
NASWOT (N=100) (Mellor et al., 2021) | 6.59±0.17 / 8.56±0.87 | 28.91±0.25 / 31.65±1.95 | 55.99±1.30 / 58.47±2.74 | 41.63±1.02 / 43.31±2.00 | 10.75±0.23 / 14.47±1.44
NASWOT (N=1000) (Mellor et al., 2021) | 6.68±0.12 / 8.26±0.38 | 29.37±0.17 / 31.66±0.72 | 58.93±2.92 / 58.33±0.91 | 39.61±1.12 / 45.66±1.29 | 10.68±0.27 / 13.57±1.89
NASWOT (N=10000) (Mellor et al., 2021) | 6.98±0.43 / 8.40±0.52 | 29.95±0.42 / 32.09±1.61 | 54.20±0.49 / 57.58±1.53 | 39.90±1.20 / 42.45±0.67 | 10.72±0.53 / 14.82±0.66
RE (Real et al., 2019; Liu et al., 2018) | 5.76±0.17 / 6.88±0.24 | 27.68±0.55 / 30.00±0.32 | 53.92±0.60 / 55.39±0.54 | 52.79±0.59 / 40.99±2.89 | 7.69±0.35 / 7.56±0.69
BANAT (WL) (Ru et al., 2021) | 5.68±0.11 / 6.98 …
1. What is the main contribution of the paper regarding neural architecture search? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any questions or concerns regarding the paper's methodology or results?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes to represent neural architectures as algebraic terms and the design space (i.e., the neural architecture search space) as context-free grammars (CFGs). The authors then develop a Bayesian optimization algorithm called BANAT, building on top of the work by Ru et al. (2021), that exploits the CFG representation to define a hierarchical kernel. Results show BANAT benefits from the hierarchical kernel and outperforms common NAS baselines like random search and evolutionary search on NAS-Bench-201-style search spaces.
Strengths And Weaknesses
Pros:
The CFG formulation of NAS search spaces is natural and flexible.
Cons:
The experiments are only conducted on NAS-Bench-201-style search spaces, which are small and do not achieve SOTA accuracy.
BANAT is only evaluated against simple baselines like random search and evolutionary search and not against more adaptive competitors, in particular other Bayesian NAS methods like HNAS (Ru et al., 2021), BANANAS, NASBOT, etc.
No evidence that the additional expressivity of the search space offered by BANAT results in interesting novel architectures.
Clarity, Quality, Novelty And Reproducibility
Clarity: The writing could be improved, especially in the introduction, which focuses a lot on discovering novel architectures although this is not shown in the experiments. It is unclear how we go from the algebraic representation to a graph for the hierarchical Weisfeiler-Lehman kernel (hWL).
Quality: The quality of the experimental section is low due to the missing comparison to other BO NAS methods and just one studied search space of limited size.
Reproducibility: Aside from the point of how we go from the algebraic representation to a graph for hWL, the experimental description is thorough and appears reproducible.
ICLR
Title Towards Discovering Neural Architectures from Scratch Abstract The discovery of neural architectures from scratch is the long-standing goal of Neural Architecture Search (NAS). Searching over a wide spectrum of neural architectures can facilitate the discovery of previously unconsidered but wellperforming architectures. In this work, we take a large step towards discovering neural architectures from scratch by expressing architectures algebraically. This algebraic view leads to a more general method for designing search spaces, which allows us to compactly represent search spaces that are 100s of orders of magnitude larger than common spaces from the literature. Further, we propose a Bayesian Optimization strategy to efficiently search over such huge spaces, and demonstrate empirically that both our search space design and our search strategy can be superior to existing baselines. We open source our algebraic NAS approach and provide APIs for PyTorch and TensorFlow. 1 INTRODUCTION Neural Architecture Search (NAS), a field with over 1 000 papers in the last two years (Deng & Lindauer, 2022), is widely touted to automatically discover novel, well-performing architectural patterns. However, while state-of-the-art performance has already been demonstrated in hundreds of NAS papers (prominently, e.g., (Tan & Le, 2019; 2021; Liu et al., 2019a)), success in automatically finding truly novel architectural patterns has been very scarce (Ramachandran et al., 2017; Liu et al., 2020). For example, novel architectures, such as transformers (Vaswani et al., 2017; Dosovitskiy et al., 2021) have been crafted manually and were not found by NAS. There is an accumulating amount of evidence that over-engineered, restrictive search spaces (e.g., cell-based ones) are major impediments for NAS to discover truly novel architectures. Yang et al. (2020b) showed that in the DARTS search space (Liu et al., 2019b) the manually-defined macro architecture is more important than the searched cells, while Xie et al. (2019) and Ru et al. (2020) achieved competitive performance with randomly wired neural architectures that do not adhere to common search space limitations. As a result, there are increasing efforts to break these impediments, and the discovery of novel neural architectures has been referred to as the holy grail of NAS. Hierarchical search spaces are a promising step towards this holy grail. In an initial work, Liu et al. (2018) proposed a hierarchical cell, which is shared across a fixed macro architecture, imitating the compositional neural architecture design pattern widely used by human experts. However, subsequent works showed the importance of both layer diversity (Tan & Le, 2019) and macro architecture (Xie et al., 2019; Ru et al., 2020). In this work, we introduce a general formalism for the representation of hierarchical search spaces, allowing both for layer diversity and a flexible macro architecture. The key observation is that any neural architecture can be represented algebraically; e.g., two residual blocks followed by a fullyconnected layer in a linear macro topology can be represented as the algebraic term ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) . (1) We build upon this observation and employ Context-Free Grammars (CFGs) to construct large spaces of such algebraic architecture terms. 
Although a particular search space is of course limited in its overall expressiveness, with this approach, we could effectively represent any neural architecture, facilitating the discovery of truly novel ones. Due to the hierarchical structure of algebraic terms, the number of candidate neural architectures scales exponentially with the number of hierarchical levels, leading to search spaces 100s of orders of magnitudes larger than commonly used ones. To search in these huge spaces, we propose an efficient search strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), which leverages hierarchical information, capturing the topological patterns across the hierarchical levels, in its tailored kernel design. Our contributions are as follows: • We present a novel technique to construct hierarchical NAS spaces based on an algebraic notion views neural architectures as algebraic architecture terms and CFGs to create algebraic search spaces (Section 2). • We propose BANAT, a Bayesian Optimization (BO) strategy that uses a tailored modeling strategy to efficiently and effectively search over our huge search spaces (Section 3). • After surveying related work (Section 4), we empirically show that search spaces of algebraic architecture terms perform on par or better than common cell-based spaces on different datasets, show the superiority of BANAT over common baselines, demonstrate the importance of incorporating hierarchical information in the modeling, and show that we can find novel architectural parts from basic mathematical operations (Section 5). We open source our code and provide APIs for PyTorch (Paszke et al., 2019) and TensorFlow (Abadi et al., 2015) at https://anonymous.4open.science/r/iclr23_tdnafs. 2 ALGEBRAIC NEURAL ARCHITECTURE SEARCH SPACE CONSTRUCTION In this section we present an algebraic view on Neural Architecture Search (NAS) (Section 2.1) and propose a construction mechanism based on Context-Free Grammars (CFGs) (Section 2.2 and 2.3). 2.1 ALGEBRAIC ARCHITECTURE TERMS FOR NEURAL ARCHITECTURE SEARCH We introduce algebraic architecture terms as a string representation for neural architectures from a (term) algebra. Formally, an algebra (A,F) consists of a non-empty set A (universe) and a set of operators f : An → A ∈ F of different arities n ≥ 0 (Birkhoff, 1935). In our case, A corresponds to the set of all (sub-)architectures and we distinguish between two types of operators: (i) nullary operators representing primitive computations (e.g., conv() or fc()) and (ii) k-ary operators with k > 0 representing topological operators (e.g., Linear(·, ·, ·) or Residual(·, ·, ·)). For sake of notational simplicity, we omit parenthesis for nullary operators (i.e., we write conv). Term algebras (Baader & Nipkow, 1999) are a special type of algebra mapping an algebraic expression to its string representation. E.g., we can represent a neural architecture as the algebraic architecture term ω as shown in Equation 1. Term algebras also allow for variables xi that are set to terms themselves that can be re-used across a term. In our case, the intermediate variables xi can therefore share patterns across the architecture, e.g., a shared cell. For example, we could define the intermediate variable x1 to map to the residual block in ω from Equation 1 as follows: ω′ = Linear(x1, x1, fc), x1 = Residual(conv, id, conv) . 
(2) Algebraic NAS We formulate our algebraic view on NAS, where we search over algebraic architecture terms ω ∈ Ω representing their associated architectures Φ(ω), as follows: argmin ω∈Ω f(Φ(ω)) , (3) where f(·) is an error measure that we seek to minimize, e.g., final validation error of a fixed training protocol. For example, we can represent the popular cell-based NAS-Bench-201 search space(Dong & Yang, 2020) as algebraic search space Ω. The algebraic search space Ω is characterized by a fixed macro architecture Macro(. . .) that stacks 15 instances of a shared cell Cell(pi,pi,pi,pi,pi,pi), where the cell has six edges, on each of which one of five primitive computations can be placed (i.e., pi for i ∈ {1, 2, 3, 4, 5} corresponding to zero, id, conv1x1, conv3x3, or avg pool, respectively). By leveraging the intermediate variable x1 we can effectively share the cell topology across the architecture. For example, we can express an architecture ωi ∈ Ω from the NAS-Bench-201 search space Ω as: ωi = Macro(x1, x1, ..., x1︸ ︷︷ ︸ 15× ), x1 = Cell(p1,p2,p1,p5,p4,p3) . (4) Algebraic NAS over such algebraic architecture terms then amounts to finding the best-performing primitive computation pi for each edge, as the macro architecture is fixed. In contrast to this simple cell-based algebraic space, the search spaces we consider can be much more expressive and, e.g., allow for layer diversity and a flexible macro architecture over several hierarchical levels (Section 5.1). 2.2 CONSTRUCTING NEURAL ARCHITECTURE TERMS WITH CONTEXT-FREE GRAMMARS We propose to use Context-Free Grammars (CFGs) (Chomsky, 1956) since they can naturally generate (hierarchical) algebraic architecture terms. Compared to other search space designs, CFGs give us a formally grounded way to naturally and compactly define very expressive hierarchical search spaces (e.g., see Section 5.1). We can also unify popular search spaces from the literature with our general search space design in one framework (Appendix E). They give us further a simple mechanism to evolve architectures while staying within the defined search space (Section 3). Formally, a CFG G = ⟨N,Σ, P, S⟩ consists of a finite set of nonterminals N and terminals Σ with N ∩Σ = ∅, a finite set of production rules P = {A→ β|A ∈ N, β ∈ (N ∪Σ)∗}, where the asterisk ∗ denotes the Kleene star operation (Kleene et al., 1956), and a start symbol S ∈ N . To generate an algebraic architecture term, starting from the start symbol S, we recursively replace nonterminals of the current algebraic term with a right-hand side of a production rule consisting of nonterminals and terminals, until the resulting string does not contain any nonterminals. For example, consider the following CFG in extended Backus-Naur form (Backus, 1959) (see Appendix B for background): S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc (5) From this CFG, we can derive the algebraic architecture term ω (with three hierarchical levels) from Equation 1 as follows: S→ Linear(S, S, S) Level 1 → Linear(Residual(S, S, S), Residual(S, S, S), fc) Level 2 (6) → Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) Level 3 Figure 1 makes the above derivation and the connection to the associated architecture explicit. The set of all (potentially infinite) algebraic terms generated by a CFG G is the language L(G), which naturally forms our search space Ω. Thus, the algebraic NAS problem from Equation 3 becomes: argmin ω∈L(G) f(Φ(ω)) . 
(7) 2.3 EXTENSIONS TO THE CONSTRUCTION MECHANISM Constraints In many search space designs, we want to adhere to some constraints, e.g., to limit the number of nodes or to ensure that for all architectures in the search space there exists at least one path from the input to the output. We can simply do so by allowing only the application of production rules which guarantee compliance to such constraints. For example, to ensure that there is at least one path from the input to the output, it is sufficient to ensure that each derivation connects its input to the output due to the recursive nature of CFGs. Note that this makes CFGs context-sensitive w.r.t. those constraints. For more details, please refer to Appendix D. Fostering regularity through substitution To implement intermediate variables xi (Section 2.1) we leverage that context-free languages are closed under substitution: we map terminals, representing the intermediate variables xi, from one language to algebraic terms of other languages, e.g., a shared cell. For example, we can split a CFG G, constructing entire algebraic architecture terms, into the CFGs Gmacro and Gcell for the macro- or cell-level, respectively. Further, we add a single (or multiple) intermediate terminal(s) x1 to Gmacro which maps to an algebraic term ω1 ∈ L(Gcell), e.g., the searchable cell. Thus, we effectively search over the macro-level as well as a single, shared cell. Note that by using a fixed macro architecture (i.e., |L(Gmacro)| = 1), we can represent cell-based search spaces, e.g., NAS-Bench-201 (Dong & Yang, 2020), while also being able to represent more expressive search spaces (e.g., see Section 5.1). More generally, we could extend this by adding further intermediate terminals which map to other languages L(Gj), or by adding intermediate terminals to G2 which map to languages L(Gj ̸=1). In this way, we can effectively foster regularity. Representing common architecture patterns for object recognition Neural architectures for object recognition commonly build a hierarchy of features that are gradually downsampled, e.g., by pooling operations. However, previous works in NAS were either limited to a fixed macro architecture (Zoph et al., 2018), only allowed for linear macro architectures (Liu et al., 2019a), or required post-sampling testing for resolution mismatches (Stanley & Miikkulainen, 2002; Ru et al., 2020). While this produced impressive performance on popular benchmarks (Tan & Le, 2019; 2021; Liu et al., 2019a), it is an open research question whether a different type of macro architecture (e.g., one with multiple branches) could yield even better performance. To accommodate flexible macro architectures, we propose to overload the nonterminals. In particular, the nonterminals indicate how often we apply downsampling operations in the subsequent derivations of the nonterminal. Consider the production rule D2 → Residual(D1, D2, D1), where Di with i ∈ {1, 2} are a nonterminals which indicate that i downsampling operations have to be applied in their subsequent derivations. That is, in both paths of the residual the input features will be downsampled twice and, consequently, the merging paths will have the same spatial resolution. Thereby, this mechanism distributes the downsampling operations recursively across the architecture. For the channels, we adopted the common design to double the number of channels whenever we halve the spatial resolution in our experiments. 
Note that we could also handle a varying number of channels by using, e.g., depthwise concatenation as merge operation. 3 BAYESIAN OPTIMIZATION FOR ALGEBRAIC NEURAL ARCHITECTURE SEARCH We propose a BO strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), to efficiently search in the huge search spaces spanned by our algebraic architecture terms: we introduce a novel surrogate model which combines a Gaussian Process (GP) surrogate with a tailored kernel that leverages the hierarchical structure of algebraic neural architecture terms (see below), and adopt expected improvement as the acquisition function (Mockus et al., 1978). Given the discrete nature of architectures, we adopt ideas from grammar-guided genetic programming (McKay et al., 2010; Moss et al., 2020) for acquisition function optimization. Furthermore, to reduce wallclock time by leveraging parallel computing resources, we adapt the Kriging Believer (Ginsbourger et al., 2010) to select architectures at every search iteration so that we can train and evaluate them in parallel. Specifically, Kriging Believer assigns hallucinated values (i.e., posterior mean) of pending evaluations at each iteration to avoid redundant evaluations. For a more detailed explanation of BANAT, please refer to Appendix F. Hierarchical Weisfeiler-Lehman kernel (hWL) Inspired by the state-of-the-art BO approach for NAS (Ru et al., 2021), we adopt the WL graph kernel (Shervashidze et al., 2011) in a GP surrogate, modeling performance of the algebraic architecture terms ωi with the associated architectures Φ(ωi). However, modeling solely based on the final architecture ignores the useful hierarchical information inherent in our algebraic representation. Moreover, the large size of the architectures also makes it difficult to use a single WL kernel to capture the more global topological patterns. Since our hierarchical construction can be viewed as a series of gradually unfolding architectures, with the final architecture containing only primitive computations, we propose a novel hierarchical kernel design assigning a WL kernel to each hierarchy and combine them in a weighted sum. To this end, we introduce fold operators Fl, that removes algebraic terms beyond the l-th hierarchical level. For example, the fold operators F1, F2 and F3 yield for the algebraic term ω (Equation 1) F3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc), (8) F2(ω) = Linear(Residual, Residual, fc) , F1(ω) = Linear . Note the similarity to the derivations in Figure 1. Furthermore note that, in practice, we also add the corresponding nonterminals to integrate information from our hierarchical construction process. We define our hierarchical WL kernel (hWL) for two architectures Φ(ωi) and Φ(ωj) with algebraic architecture terms ωi or ωj , respectively, constructed over a hierarchy of L levels, as follows: khWL(ωi, ωj) = L∑ l=2 λl · kWL(Φ(Fl(ωi)),Φ(Fl(ωj))) , (9) where the weights λl govern the importance of the learned graph information at different hierarchical levels (granularities of the architecture) and can be tuned (along with other hyperparameters of the GP) by maximizing the marginal likelihood. We omit l = 1 in the additive kernel as F1(ω) does not contain any edge features which are required for our WL kernel kWL. For more details on our novel hierarchical kernel design, please refer to Appendix F.2. 
Our proposed kernel efficiently captures the information in all algebraic term construction levels, which substantially improves its search and surrogate regression performance on our search space as demonstrated in Section 5. Acquisition function optimization To optimize the acquisition function, we adopt ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). For mutation, we randomly replace a sub-architecture term with a new randomly generated term, using the same nonterminal as start symbol. For crossover, we randomly swap two sub-architecture terms with the same corresponding nonterminal. We consider two crossover operators: a novel self-crossover operation swaps two sub-terms of a single architecture term, and the common crossover operation swaps subterms of two different architecture terms. Importantly, all evolutionary operations by design only result in valid terms. We provide examples for the evolutionary operations in Appendix F. 4 RELATED WORK We discuss related works in NAS below and discuss works beyond NAS in Appendix G. Neural Architecture Search Neural Architecture Search (NAS) aims to automatically discover architectural patterns (or even entire architectures) (Elsken et al., 2019). Previous approaches, e.g., used reinforcement learning (Zoph & Le, 2017; Pham et al., 2018), evolution (Real et al., 2017), gradient descent (Liu et al., 2019b), or Bayesian Optimization (BO) (Kandasamy et al., 2018; White et al., 2021; Ru et al., 2021). To enable the effective use of BO on graph-like inputs for NAS, previous works have proposed to use a GP with specialized kernels (Kandasamy et al., 2018; Ru et al., 2021), encoding schemes (Ying et al., 2019; White et al., 2021), or graph neural networks as surrogate model (Ma et al., 2019; Shi et al., 2020; Zhang et al., 2019). Different to prior works, we explicitly leverage the hierarchical construction of architectures for modeling. Searching for novel architectural patterns Previous works mostly focused on finding a shared cell (Zoph et al., 2018) with a fixed macro architecture while only few works considered more expressive hierarchical search spaces (Liu et al., 2018; 2019a; Tan et al., 2019). The latter works considered hierarchical assembly (Liu et al., 2018), combination of a cell- and network-level search space (Liu et al., 2019a; Zhang et al., 2020), evolution of network topologies (Miikkulainen et al., 2019), factorization of the search space (Tan et al., 2019), parameterization of a hierarchy of random graph generators (Ru et al., 2020), a formal language over computational graphs (Negrinho et al., 2019), or a hierarchical construction of TensorFlow programs (So et al., 2021). Similarly, our formalism allows to design search spaces covering a general set of architecture design choices, but also permits the search for macro architectures with spatial resolution changes and multiple branches. We also handle spatial resolution changes without requiring post-hoc testing or resizing of the feature maps unlike prior works (Stanley & Miikkulainen, 2002; Miikkulainen et al., 2019; Stanley et al., 2019). 
Other works proposed approaches based on string rewriting systems (Kitano, 1990; Boers et al., 1993), cellular (or tree-structured) encoding schemes (Gruau, 1994; Luke & Spector, 1996; De Jong & Pollack, 2001; Cai et al., 2018), hyperedge replacement graph grammars Luerssen & Powers (2003); Luerssen (2005), attribute grammars (Mouret & Doncieux, 2008), CFGs (Jacob & Rehder, 1993; Couchet et al., 2007; Ahmadizar et al., 2015; Ahmad et al., 2019; Assunção et al., 2017; 2019; Lima et al., 2019; de la Fuente Castillo et al., 2020), or And-Or-grammars (Li et al., 2019). Different to these prior works, we construct entire architectures with spatial resolution changes across multiple branches, and propose techniques to incorporate constraints and foster regularity. Orthogonal to the aforementioned approaches, Roberts et al. (2021) searched over neural (XD-)operations, which is orthogonal to our approach, i.e., our predefined primitive computations could be replaced by their proposed XD-operations. 5 EXPERIMENTS In this section, we investigate potential benefits of hierarchical search spaces and our search strategy BANAT. More specifically, we address the following questions: Q1 Can hierarchical search spaces yield on par or superior architectures compared to cell-based search spaces with a limited number of evaluations? Q2 Can our search strategy BANAT improve performance over common baselines? Q3 Does leveraging the hierarchical information improve performance? Q4 Do zero-cost proxies work in vast hierarchical search spaces? Q5 Can we discover novel architectural patterns (e.g., activation functions)? To answer questions Q1-Q4, we introduce a hierarchical search space based on the popular NASBench-201 search space (Dong & Yang, 2020) in Section 5.1. To answer question Q5, we search for activation functions (Ramachandran et al., 2017) and defer the search space definition to Appendix J.1. We provide complementary results and analyses in Appendix I.2 and J.3. 5.1 HIERARCHICAL NAS-BENCH-201 We propose a hierarchical variant of the popular cell-based NAS-Bench-201 search space (Dong & Yang, 2020) by adding a hierarchical macro space (i.e., spatial resolution flow and wiring at the macro-level) and parameterizable convolutional blocks (i.e., choice of convolutions, activations, and normalizations). We express the hierarchical NAS-Bench-201 search space with CFG Gh as follows: D2 ::= Linear3(D1, D1, D0) | Linear3(D0, D1, D1) | Linear4(D1, D1, D0, D0) D1 ::= Linear3(C, C, D) | Linear4(C, C, C, D) | Residual3(C, C, D, D) D0 ::= Linear3(C, C, CL) | Linear4(C, C, C, CL) | Residual3(C, C, CL, CL) D ::= Linear2(CL, down) | Linear3(CL, CL, down) | Residual2(C, down, down) C ::= Linear2(CL, CL) | Linear3(CL, CL) | Residual2(CL, CL, CL) CL ::= Cell(OP, OP, OP, OP, OP, OP) OP ::= zero | id | BLOCK | avg pool BLOCK ::= Linear3(ACT, CONV, NORM) ACT ::= relu | hardswish | mish CONV ::= conv1x1 | conv3x3 | dconv3x3 NORM ::= batch | instance | layer . (10) See Appendix A for the terminal vocabulary of topological operators and primitive computations. The productions with the nonterminals {D2, D1, D0, D} define the spatial resolution flow and together with {C} define the macro architecture containing possibly multiple branches. The productions for {CL, OP} construct the NAS-Bench-201 cell and {BLOCK, ACT, CONV, NORM} parameterize the convolutional block. 
To ensure that we use the same distribution over the primitive computations as in NAS-Bench-201, we reweight the sampling probabilities of the productions generated by the nonterminal OP, i.e., all production choices have a sampling probability of 20%, but BLOCK has 40%. Note that we omit the stem (i.e., 3x3 convolution followed by batch normalization) and classifier (i.e., batch normalization followed by ReLU, global average pooling, and fully-connected layer) for simplicity. We implemented the merge operation as element-wise summation. Different from the cell-based NAS-Bench-201 search space, we exclude degenerate architectures by introducing a constraint that ensures that each subterm maps the input to the output (i.e., in the associated computational graph there is at least one path from source to sink). Our search space consists of ca. 10^446 algebraic architecture terms (please refer to Appendix C on how to compute the search space size), which is significantly larger than other popular search spaces from the literature. For comparison, the cell-based NAS-Bench-201 search space is just a minuscule subspace of size 10^4.18, where we apply only the blue-colored production rules and replace the CL nonterminals with a placeholder terminal x1 that will be substituted by the searched, shared cell.

5.2 EVALUATION DETAILS

For all search experiments, we compared the search strategies BANAT, Random Search (RS), Regularized Evolution (RE) (Real et al., 2019; Liu et al., 2018), and BANAT (WL) (Ru et al., 2021). For implementation details of the search strategies, please refer to Appendix H. We ran the search for a total of 100 evaluations with a random initial design of 10 on three seeds {777, 888, 999} on the hierarchical NAS-Bench-201 search space, or 1000 evaluations with a random initial design of 50 on one seed {777} on the activation function search space, using 8 asynchronous workers, each with a single NVIDIA RTX 2080 Ti GPU. In each evaluation, we fully trained the architectures and recorded their last validation error. For training details on the hierarchical NAS-Bench-201 search space and the activation function search space, please refer to Appendix I.1 or Appendix J.2, respectively. To assess the modeling performance of our surrogate, we compared the regression performance of GPs with different kernels, i.e., our hierarchical WL kernel (hWL), the (standard) WL kernel (Ru et al., 2021), and NASBOT's kernel (Kandasamy et al., 2018). We also tried the GCN encoding (Shi et al., 2020), but it could not capture the mapping from the complex graph space to performance, resulting in constant performance predictions. Further, note that the adjacency encoding (Ying et al., 2019) and the path encoding (White et al., 2021) cannot be used in our hierarchical search spaces, since the former requires the same number of nodes across graphs and the latter scales exponentially in the number of nodes. We ran 20 trials over the seeds {0, 1, ..., 19} and re-used the data from the search runs. In every trial, we sampled a training and test set of 700 or 500 architecture and validation error pairs, respectively. We fitted the surrogates with a varying number of training samples by randomly choosing samples from the training set without replacement, and recorded Kendall's τ rank correlation between the predicted and true validation error. To assess zero-cost proxies, we re-used the data from the search runs and recorded Kendall's τ rank correlation.

5.3 RESULTS

In the following, we answer all of the questions Q1-Q5.
Figure 2 compares the results of the cell-based and hierarchical search space design using our search strategy BANAT. Results with BANAT on the hierarchical search space are on par with the cell-based search space on CIFAR-10/100, superior on ImageNet-16-120, and clearly superior on CIFARTile and AddNIST (answering Q1). We emphasize that the NAS community has engineered the cell-based search space to achieve strong performance on those popular image classification datasets for over a decade, making it unsurprising that our improvements are much larger for the novel datasets. Yet, our best found architecture on ImageNet-16-120 from the hierarchical search space also achieves an excellent test error of 52.78% with only 0.626MB parameters (Appendix I.2); this is superior to the architecture found by the state-of-the-art method Shapley-NAS (i.e., 53.15%) (Xiao et al., 2022) and on par with the optimal architecture of the cell-based NAS-Bench-201 search space (i.e., 52.69% with 0.866MB). Figure 3 shows that our search strategy BANAT is also superior to common baselines (answering Q2) and that leveraging hierarchical information clearly improves performance (answering Q3). Further, the evaluation of surrogate performance in Figure 4 shows that incorporating hierarchical information with our hierarchical WL kernel (hWL) improves modeling, especially on smaller amounts of training data (further answering Q3). Table 1 shows that the baseline zero-cost proxies flops and l2-norm yield competitive (or often superior) results to more sophisticated zero-cost proxies, making hierarchical search spaces an interesting future research direction for them (answering Q4). Finally, Table 2 shows that we can find novel well-performing activation functions from basic mathematical operations with BANAT (answering Q5).

[Figure 3: Comparison of search strategies on the hierarchical search space. We plot mean and ±1 standard error of the validation error (Val error [%]) over the number of evaluations on the hierarchical NAS-Bench-201 search space for CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST for our search strategy BANAT (solid blue), RS (dashed orange), RE (dotted green), and BANAT (WL) (dash-dotted red). We report test errors, best architectures, and conduct further analyses in Appendix I.2.]

6 DISCUSSION AND LIMITATIONS

While our grammar-based construction mechanism is a powerful way to construct huge hierarchical search spaces, we cannot construct every conceivable architecture with our grammar-based construction approach (Sections 2.2 and 2.3), since we are limited to context-free languages; e.g., architectures of the type {a^n b^n c^n | n ∈ N>0} cannot be generated by CFGs (this can be proven using Ogden's lemma (Ogden, 1968)). Further, due to the discrete nature of CFGs, we cannot easily integrate continuous design choices, e.g., dropout probability. Furthermore, our grammar-based mechanism does not (generally) support simple scalability of discovered neural architectures (e.g., repetition of building blocks) without special consideration in the search space design.
Nevertheless, our search spaces still significantly increase the expressiveness, including the ability to represent common search spaces from the literature (see Appendix E for how we can represent the search spaces of DARTS, Auto-DeepLab, the hierarchical cell search space of Liu et al. (2018), the Mobile-net search space, and the hierarchical random graph generator search space), as well as allowing the search for entire neural architectures based around the popular NAS-Bench-201 search space (Section 5). Thus, our search space design can facilitate the discovery of novel well-performing neural architectures in those huge search spaces of algebraic architecture terms. However, there is an inherent trade-off between expressiveness and the difficulty of search. The much greater expressiveness facilitates search in a richer set of architectures that may include better architectures than in more restrictive search spaces, which, however, need not exist. Besides that, the (potential) existence of such a well-performing architecture does not guarantee that a search strategy will discover it, even with large amounts of computing power available. Note that this trade-off also manifests itself in the acquisition function optimization of our search strategy BANAT. In addition, a well-performing neural architecture may not work with current training protocols and hyperparameters due to interaction effects, i.e., training protocols and hyperparameters may be over-optimized for specific types of neural architectures. To overcome this limitation, one could consider a joint optimization of neural architectures, training protocols, and hyperparameters. However, this further fuels the trade-off between expressiveness and the difficulty of search.

7 CONCLUSION

We introduced very expressive search spaces of algebraic architecture terms constructed with CFGs. To efficiently search over the huge search spaces, we proposed BANAT, an efficient BO strategy with a tailored kernel leveraging the available hierarchical information. Our experiments indicate that both our search space design and our search strategy can yield strong performance over existing baselines. Our results motivate further steps towards the discovery of neural architectures based on even more atomic primitive computations. Furthermore, future works could (simultaneously) learn the search space (i.e., learn the grammar) or improve search efficiency by means of multi-fidelity optimization or gradient-based search strategies.

REPRODUCIBILITY STATEMENT

To ensure reproducibility, we address all points of the best practices checklist for NAS research (Lindauer & Hutter, 2020) in Appendix K.

ETHICS STATEMENT

NAS has immense potential to facilitate systematic, automated discovery of high-performing (novel) architecture designs. However, the restrictive cell-based search spaces most commonly used in NAS render it impossible to discover truly novel neural architectures. With our general formalism based on algebraic terms, we hope to provide a fertile foundation towards discovering high-performing and efficient architectures, potentially from scratch. However, search in such huge search spaces is expensive, particularly in the context of the ongoing detrimental climate crisis.
On the one hand, the discovered neural architectures, like other AI technologies, could potentially be exploited to have a negative societal impact; on the other hand, our work could also lead to advances across scientific disciplines like healthcare and chemistry.

A FROM TERMINALS TO PRIMITIVE COMPUTATIONS AND TOPOLOGICAL OPERATORS

Table 3 and Figure 5 describe the primitive computations and topological operators used throughout our experiments in Section 5 and Appendix I, respectively. Note that by adding more primitive computations and/or topological operators we could construct even more expressive search spaces.

B EXTENDED BACKUS-NAUR FORM

The (extended) Backus-Naur form (Backus, 1959) is a meta-language to describe the syntax of CFGs. We use meta-rules of the form S ::= α, where S ∈ N is a nonterminal and α ∈ (N ∪ Σ)* is a string of nonterminals and/or terminals. We denote nonterminals in UPPER CASE, terminals corresponding to topological operators in Initial upper case/teletype, and terminals corresponding to primitive computations in lower case/teletype, e.g., S ::= Residual(S, S, id). To compactly express production rules with the same left-hand side nonterminal, we use the vertical bar | to indicate a choice of production rules with the same left-hand side, e.g., S ::= Linear(S, S, S) | Residual(S, S, id) | conv.

C SEARCH SPACE SIZE

In this section, we show how to efficiently compute the size of our search spaces constructed by CFGs. There are two cases to consider: (i) a CFG contains cycles (i.e., part of the derivation can be repeated infinitely many times), yielding an open-ended, infinite search space; and (ii) a CFG contains no cycles, yielding a finite search space whose size we can compute. Consider a production A → Residual(B, B, B), where Residual is a terminal, and A and B are nonterminals with B → conv | id. Consequently, there are 2^3 = 8 possible instances of the residual block. If we add another production choice for the nonterminal A, e.g., A → Linear(B, B, B), we would have 2^3 + 2^3 = 16 possible instances. Further, adding a production C → Linear(A, A, A) would yield a search space size of (2^3 + 2^3)^3 = 4096. More generally, we introduce the function P_A that returns the set of productions for nonterminal A ∈ N, and the function µ that returns all the nonterminals of a production p ∈ P. We can then recursively compute the size of the search space as follows:

f(A) = Σ_{p ∈ P_A} { 1, if µ(p) = ∅; Π_{A' ∈ µ(p)} f(A'), otherwise } . (11)

When a CFG contains some constraint, we ensure that we only account for valid architectures (i.e., those compliant with the constraints) by ignoring productions which would lead to invalid architectures.

D MORE DETAILS ON SEARCH SPACE CONSTRAINTS

During the design of the search space, we may want to comply with some constraints, e.g., only consider valid neural architectures or impose structural constraints on architectures. We can guarantee compliance with constraints by modifying sampling (and evolution): we only allow the application of production rules which guarantee compliance with the constraint(s). In the following, we show by example how this can be implemented for the former constraint mentioned above. Note that other constraints can be implemented in a similar manner. To implement the constraint "only consider valid neural architectures", we note that our search space design only creates neural architectures where neither the spatial resolution nor the channels can be mismatched; please refer to Section 2.3 for details.
Thus, the only way a neural architecture can become invalid is through zero operations, which could remove edges from the computational graph and possibly disassociate the input from the output. Since we recursively assemble neural architectures, it is sufficient to ensure that the derived algebraic architecture term (i.e., the associated computational graph) is compliant with the constraint, i.e.,there is at least one path from input to output. Thus, during sampling (and similarly during evolution), we modify the current production rule choices when an application of the zero operation would disassociate the input from the output. E COMMON SEARCH SPACES FROM THE LITERATURE In Section 5.1, we demonstrated how to construct the popular NAS-Bench-201 search space within our algebraic search space design, and below we show how to reconstruct the following popular search spaces: DARTS search space (Liu et al., 2019b), Auto-DeepLab search space (Liu et al., 2019a), hierarchical cell search space (Liu et al., 2018), Mobile-net search space (Tan et al., 2019), and hierarchical random graph generator search space (Ru et al., 2020). For implementation details we refer to the respective works. DARTS SEARCH SPACE The DARTS search space (Liu et al., 2019b) consists of a fixed macro architecture and a cell, i.e., a seven node directed acyclic graph (Darts; see Figure 6 for the topological operator). We omit the fixed macro architecture from our search space design for simplicity. Each cell receives the feature maps from the two preceding cells as input and outputs a single feature map. All intermediate nodes (i.e., Node3, Node4, Node5, and Node6) is computed based on all of its predecessors. Thus, we can define the DARTS search space as follows: DARTS ::= Darts(NODE3, NODE4, NODE5, NODE6) NODE3 ::= Node3(OP, OP) NODE4 ::= Node4(OP, OP, OP) NODE5 ::= Node5(OP, OP, OP, OP) NODE6 ::= Node6(OP, OP, OP, OP, OP) OP ::= sep conv 3x3 | sep conv 5x5 | dil conv 3x3 | dil conv 5x5 | max pool | avg pool | id | zero , (12) where the topological operator Node3 receives two inputs, applies the operations separately on them, and sums them up. Similarly, Node4, Node5, and Node6 apply their operations separately to the given inputs and sum them up. The topological operator Darts feeds the corresponding feature maps into each of those topological operators and finally concatenates all intermediate feature maps. AUTO-DEEPLAB SEARCH SPACE Auto-DeepLab (Liu et al., 2019a) combines a cell-level with a network-level search space to search for segmentation networks, where the cell is shared across the searched macro architecture, i.e., a twelve step (linear) path across different spatial resolutions. The cell-level design is adopted from Liu et al. (2019b) and, thus, we can re-use the CFG from Equation 12. For the network-level, we introduce a constraint that ensures that the path is of length twelve, i.e., we ensure exactly twelve derivations in our CFG. Further, we overload the nonterminals so that they correspond to the respective spatial resolution level, e.g., D4 indicates that the original input is downsampled by a factor of four; please refer to Section 2.3 for details on overloading nonterminals. 
For the sake of simplicity, we omit the first two layers and atrous spatial pyramid poolings as they are fixed, and hence define the network-level search space as follows: D4 ::= Same(CELL, D4) | Down(CELL, D8) D8 ::= Up(CELL, D4) | Same(CELL, D8) | Down(CELL, D16) D16 ::= Up(CELL, D8) | Same(CELL, D16) | Down(CELL, D32) D32 ::= Up(CELL, D16) | Same(CELL, D32) , (13) where the topological operators Up, Same, and Down upsample/halve, do not change/do not change, or downsample/double the spatial resolution/channels, respectively. The placeholder variable CELL maps to the shared DARTS cell from the language generated by the CFG from Equation 12. HIERARCHICAL CELL SEARCH SPACE The hierarchical cell search space (Liu et al., 2018) consists of a fixed (linear) macro architecture and a hierarchically assembled cell with three levels which is shared across the macro architecture. Thus, we can omit the fixed macro architecture from our search space design for simplicity. Their first, second, and third hierarchical levels correspond to the primitive computations (i.e., id, max pool, avg pool, sep conv, depth conv, conv, zero), six densely connected four node directed acyclic graphs (DAG4), and a densely connected five node directed acyclic graph (DAG5), respectively. The zero operation could lead to directed acyclic graphs which have fewer nodes. Therefore, we introduce a constraint enforcing that there are always four (level 2) or five (level 3) nodes for every directed acyclic graph. Further, since a densely connected five node directed acyclic graph graph has ten edges, we need to introduce placeholder variables (i.e., M1, ..., M6) to enforce that only six (possibly) different four node directed acyclic graphs are used, and consequently define a CFG for the third level LEVEL3 ::= DAG5(LEVEL2, ..., LEVEL2︸ ︷︷ ︸ ×10 ) LEVEL2 ::= M1 | M2 | M3 | M4 | M5 | M6 | zero , (14) mapping the placeholder variables M1, ..., M6 to the six lower-level motifs constructed by the first and second hierarchical level LEVEL2 ::= DAG4(LEVEL1, ..., LEVEL1)︸ ︷︷ ︸ ×6 LEVEL1 ::= id | max pool | avg pool | sep conv | depth conv | conv | zero . (15) MOBILE-NET SEARCH SPACE Factorized hierarchical search spaces, e.g., the Mobile-net search space (Tan et al., 2019), allow for layer diversity. They factorize a (fixed) macro architecture – often based on an already wellperforming reference architecture – into separate blocks (e.g., cells). For the sake of simplicity, we assume here a three sequential blocks (Block) architecture (Linear). In each of those blocks, we search for the convolution operations (CONV), kernel sizes (KSIZE), squeeze-and-excitation ratio (SERATIO) (Hu et al., 2018), skip connections (SKIP), number of output channels (FSIZE), and number of layers per block (#LAYERS), where the latter two are discretized using a reference architecture, e.g., MobileNetV2 (Sandler et al., 2018). Consequently, we can express this search space as follows: MACRO ::= Linear(BLOCK, BLOCK, BLOCK) BLOCK ::= Block(CONV, KSIZE, SERATIO, SKIP, FSIZE, #LAYERS) CONV ::= conv | dconv | mbconv KSIZE ::= 3 | 5 SERATIO ::= 0 | 0.25 SKIP ::= pooling | id residual | no skip FSIZE ::= 0.75 | 1.0 | 1.25 #LAYERS ::= -1 | 0 | 1 , (16) where conv, donv and mbconv correspond to convolution, depthwise convolution, and mobile inverted bottleneck convolution (Sandler et al., 2018), respectively. 
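The sizes of such grammars can be computed with the recursion from Appendix C (Equation 11). The following is a minimal sketch, assuming a simple dictionary encoding of a grammar (our own illustrative names, not the paper's code); it reproduces the worked example from Appendix C, where the grammar with C → Linear(A, A, A), A → Residual(B, B, B) | Linear(B, B, B), and B → conv | id yields 4096 terms.

```python
from math import prod

# A production is a list of symbols; symbols that appear as keys of GRAMMAR are
# nonterminals, everything else is treated as a terminal.
GRAMMAR = {
    "C": [["Linear", "A", "A", "A"]],
    "A": [["Residual", "B", "B", "B"], ["Linear", "B", "B", "B"]],
    "B": [["conv"], ["id"]],
}

def space_size(nonterminal):
    """Number of distinct terms derivable from `nonterminal` (Equation 11)."""
    total = 0
    for production in GRAMMAR[nonterminal]:
        children = [s for s in production if s in GRAMMAR]  # nonterminals only
        total += prod(space_size(c) for c in children) if children else 1
    return total

print(space_size("B"))  # 2
print(space_size("A"))  # 2**3 + 2**3 = 16
print(space_size("C"))  # 16**3 = 4096
```

Note that this recursion only terminates for cycle-free grammars and, as written, does not account for constraints; constrained grammars would additionally have to skip productions that lead to invalid architectures, as discussed in Appendix C.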
HIERARCHICAL RANDOM GRAPH GENERATOR SEARCH SPACE

The hierarchical random graph generator search space (Ru et al., 2020) consists of three hierarchical levels of random graph generators (i.e., Watts-Strogatz (Watts & Strogatz, 1998) and Erdõs-Rényi (Erdős et al., 1960)). We denote with Watts-Strogatz i the random graph generated by the Watts-Strogatz model with i nodes. Thus, we can represent the search space as follows:

TOP ::= Watts-Strogatz 3(K, Pt)(MID, MID, MID) | ... | Watts-Strogatz 10(K, Pt)(MID, ..., MID) [10 arguments]
MID ::= Erdõs-Rényi 1(Pm)(BOT) | ... | Erdõs-Rényi 10(Pm)(BOT, ..., BOT) [10 arguments]
BOT ::= Watts-Strogatz 3(K, Pb)(NODE, NODE, NODE) | ... | Watts-Strogatz 10(K, Pb)(NODE, ..., NODE) [10 arguments]
K ::= 2 | 3 | 4 | 5 , (17)

where each terminal Pt, Pm, and Pb maps to a continuous number in [0.1, 0.9]¹ and the placeholder variable NODE maps to a primitive computation, e.g., separable convolution. Note that we omit other hyperparameters, such as stage ratio, channel ratio etc., for simplicity.

F MORE DETAILS ON THE SEARCH STRATEGY

In this section, we provide more details and examples for our search strategy Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT) presented in Section 3.

F.1 BAYESIAN OPTIMIZATION

Bayesian Optimization (BO) is a powerful family of search techniques for finding the global optimum of a black-box objective problem. It is particularly useful when the objective is expensive to evaluate and thus sample efficiency is highly important (Brochu et al., 2010). To minimize a black-box objective with BO, we first need to build a probabilistic surrogate to model the objective based on the data observed so far. Based on the surrogate model, we design an acquisition function to evaluate the utility of potential candidate points by trading off exploitation (where the posterior mean of the surrogate model is low) and exploration (where the posterior variance of the surrogate model is high). The next candidate points to evaluate are then selected by maximizing the acquisition function (Shahriari et al., 2015). The general procedure of BO is summarized in Algorithm 1:

Algorithm 1 Bayesian Optimization algorithm (Brochu et al., 2010).
Input: Initial observed data Dt, a black-box objective function f, total number of BO iterations T
Output: The best recommendation about the global optimizer x*
for t = 1, . . . , T do
  Select the next xt+1 by maximizing the acquisition function α(x|Dt)
  Evaluate the objective function at ft+1 = f(xt+1)
  Dt+1 ← Dt ∪ (xt+1, ft+1)
  Update the surrogate model with Dt+1
end for

We adopted the widely used acquisition function, expected improvement (EI) (Mockus et al., 1978), in our BO strategy. EI evaluates the expected amount of improvement of a candidate point x over the minimal value f′ observed so far. Specifically, denoting the improvement function as I(x) = max(0, f′ − f(x)), the EI acquisition function has the form

α_EI(x|Dt) = E[I(x)|Dt] = ∫_{−∞}^{f′} (f′ − f) N(f; µ(x|Dt), σ²(x|Dt)) df
           = (f′ − µ(x|Dt)) Φ(f′; µ(x|Dt), σ²(x|Dt)) + σ²(x|Dt) ϕ(f′; µ(x|Dt), σ²(x|Dt)) ,

where µ(x|Dt) and σ²(x|Dt) are the mean and variance of the predictive posterior distribution at a candidate point x, and ϕ(·; µ, σ²) and Φ(·; µ, σ²) denote the PDF and CDF of the Gaussian with that mean and variance, respectively.
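To make this closed form concrete, here is a small numerical sketch (our own function and variable names) that evaluates EI for minimization from the GP posterior mean and standard deviation; it uses the equivalent standard-normal parameterization of the same expression.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, eps=1e-12):
    """EI for minimization, given posterior means `mu`, posterior standard
    deviations `sigma` at candidate points, and the lowest observed value `f_best`."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), eps)  # avoid division by zero
    z = (f_best - mu) / sigma
    # E[max(0, f_best - f)] under f ~ N(mu, sigma^2), written with the standard normal.
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: three candidate architectures, current best validation error 0.10.
print(expected_improvement(mu=[0.09, 0.12, 0.10], sigma=[0.02, 0.05, 0.01], f_best=0.10))
```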
To make use of ample distributed computing resource, we adopted Kriging Believer (Ginsbourger et al., 2010) which uses the predictive posterior of the surrogate model to assign hallucinated function values {f̃p}p∈{1, ..., P} to the P candidate points with pending evaluations {x̃p}p∈{1, ..., P} and perform next BO recommendation in the batch by pseudo-augmenting the observation data with D̃p = {(x̃p, f̃p)}p∈{1, ..., P}, namely D̃t = Dt ∪ D̃p. The algorithm of Kriging Believer at one BO iteration to select a batch of recommended candidate points is summarized in Algorithm 2. 1Theoretically, this is not possible with CFGs. However, we can extend the notion of substitution by substituting a string representation of a Python (float) variable for the placeholder variables Pt, Pm, and Pb. Algorithm 2 Kriging Believer algorithm to select one batch of points. Input: Observation data Dt, batch size b Output: The batch points Bt+1 = {x(1)t+1, . . . ,x (b) t+1} D̃t = Dt ∪ D̃p for j = 1, . . . , b do Select the next x(j)t+1 by maximizing acquisition function α(x|D̃t) Compute the predictive posterior mean µ(x(j)t+1|D̃t) D̃t ← D̃t ∪ (xt+1, µ(x(j)t+1|D̃t)) end for Algorithm 3 Weisfeiler-Lehman subtree kernel computation (Shervashidze et al., 2011). Input: Graphs G1, G2, maximum iterations H Output: Kernel function value between the graphs Initialize the feature vectors ϕ(G1) = ϕ0(G1), ϕ(G2) = ϕ0(G2) with the respective counts of original node labels (i.e., the h = 0 WL features) for h = 1, . . . H do Assign a multiset Mh(v) = {lh−1(u)|u ∈ N (v)} to each node v ∈ G, where lh−1 is the node label function of the h− 1-th WL iteration and N is the node neighbor function Sort elements in multiset Mh(v) and concatenate them to string sh(v) Compress each string sh(v) using the hash function f s.t. f(sh(v)) = f(sh(w)) ⇐⇒ sh(v) = sh(u) Add lh−1 as prefix for sh(v) Concatenate the WL features ϕh(G1), ϕh(G2) with the respective counts of the new labels: ϕ(G1) = [ϕ(G1), ϕh(G1)], ϕ(G2) = [ϕ(G2), ϕh(G2)] Set lh(v) := f(sh(v)) ∀v ∈ G end for Compute inner product k = ⟨ϕh(G1), ϕh(G2)⟩ between WL features ϕh(G1), ϕh(G2) in RKHS H F.2 HIERARCHICAL WEISFEILER-LEHMAN KERNEL Inspired by Ru et al. (2021), we adopted the Weisfeiler-Lehman (WL) graph kernel (Shervashidze et al., 2011) in the GP surrogate model to handle the graph nature of neural architectures. The basic idea of the WL kernel is to first compare node labels, and then iteratively aggregate labels of neighboring nodes, compress them into a new label and compare them. Algorithm 3 summarizes the WL kernel procedure. Ru et al. (2021) identified three reasons for using the WL kernel: (1) it is able to compare labeled and directed graphs of different sizes, (2) it is expressive, and (3) it is relatively efficient and scalable. Our search space design can afford a diverse spectrum of neural architectures with very heterogeneous topological structure. Therefore, reason (1) is a very important property of the WL kernel to account for the diversity of neural architectures. Moreover, if we allow many hierarchical levels, we can construct very large neural architectures. Therefore, reasons (2) and (3) are essential for accurate and fast modeling. However, neural architectures in our search spaces may be significantly larger, which makes it difficult for a single WL kernel to capture the more global topological patterns. Moreover, modeling solely based on the final neural architecture ignores the useful macro-level information from earlier hierarchical levels. 
In our experiments (Section 5 and Appendix I), we have found stronger neural architectures by incorporating the hierarchical information in the kernel design, which provides experimental support for the above arguments. However, modeling solely based on the (standard) WL graph kernel neglects the useful hierarchical information from our assembly process. Moreover, the large size of the neural architectures still makes it challenging to capture the more global topological patterns. We therefore propose to use hierarchical information through a hierarchy of WL graph kernels that take into account the different granularities of the architectures and combine them in a weighted sum. To obtain the different granularities, we use the fold operators Fl that remove algebraic terms beyond the l-th hierarchical level. Thereby, we obtain the folds

F3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc), (18)
F2(ω) = Linear(Residual, Residual, fc),
F1(ω) = Linear,

for the algebraic architecture term ω. Note that we ignore the first fold since it does not represent a labeled DAG. Figure 7 visualizes the labeled graphs Φ(F2) and Φ(F3) of the folds F2 and F3, respectively. These graphs can be fed into (standard) WL graph kernels. Therefore, we can construct a hierarchy of WL graph kernels kWL as follows:

k_hWL(ωi, ωj) = Σ_{l=2}^{L} λl · kWL(Φ(Fl(ωi)), Φ(Fl(ωj))) , (19)

where ωi and ωj are two algebraic architecture terms. Note that the λl govern the importance of the learned graph information across the hierarchical levels and can be optimized through the marginal likelihood.

F.3 EXAMPLES FOR THE EVOLUTIONARY OPERATIONS

For the evolutionary operations, we adopted ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). In the following, we will show how these evolutionary operations manipulate algebraic terms, e.g.,

Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) , (20)

from the search space

S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc , (21)

to generate evolved algebraic terms. Figure 1 shows how we can derive the algebraic term in Equation 20 from the search space in Equation 21. For mutation operations, we first randomly pick a subterm of the algebraic term, e.g., Residual(conv, id, conv). Then, we randomly sample a new subterm with the same nonterminal symbol S as start symbol, e.g., Linear(conv, id, fc), and replace the previous subterm, yielding

Linear(Linear(conv, id, fc), Residual(conv, id, conv), fc) . (22)

For (self-)crossover operations, we swap two subterms, e.g., Residual(conv, id, conv) and Residual(conv, id, conv), with the same nonterminal S as start symbol, yielding

Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) . (23)

Note that unlike the commonly used crossover operation, which uses two parents, self-crossover has only one parent. In future work, we could also add a self-copy operation that copies a subterm to another part of the algebraic term, explicitly regularizing diversity and thus potentially speeding up the search.

G RELATED WORK BEYOND NEURAL ARCHITECTURE SEARCH

While our work focuses exclusively on NAS, we will discuss below how it relates to the areas of optimizer search (as well as from-scratch automated machine learning) and neural-symbolic programming. Optimizer search is a closely related field to NAS, where we automatically search for an optimizer (i.e., an update function for the weights) instead of an architecture. Initial works used learnable parametric or non-parametric optimizers.
While the former approaches (Andrychowicz et al., 2016; Li & Malik, 2017; Chen et al., 2017; 2022a) have poor scalability and generality, the latter works overcome those limitations. Bello et al. (2017) searched for an instantiation of hand-crafted patterns via reinforcement learning, while Wang et al. (2022) proposed a tree-structured search space2 and searched for optimizers via a modified Monte Carlo sampling approach. AutoML-Zero (Real et al., 2020) took an even more general approach by searching over entire machine learning algorithms, including optimizers, from a generic search space built from basic mathematical operations with an evolutionary algorithm. Chen et al. (2022b) used RE to discover optimizers from a generic search space (inspired by AutoML-Zero) for training vision transformers (Dosovitskiy et al., 2021). Complementary to the above, there is recent interest in automatically synthesizing programs from domain-specific languages. Gaunt et al. (2017) proposed a hand-crafted program template and simultaneously optimized the parameters of the differentiable program with gradient descent. The HOUDINI framework (Valkov et al., 2018) proposed type-directed (top-down) enumeration and evolution approaches over differentiable functional programs. Shah et al. (2020) hierarchically assembled differentiable programs and used neural networks for the approximation of missing expression in partial programs. Cui & Zhu (2021) treated CFGs stochastically with trainable production rule sampling weights, which were optimized with a gradient-based approach (Liu et al., 2019b). However, naı̈vely applying gradient-based approaches does not work in our search spaces due to the exponential explosion of supernet weights, but still renders an interesting direction for future work. Compared to these lines of work, we extended CFGs to handle changes in spatial resolution, promote regularity, and (compared to most of them) incorporate constraints, the latter two of which could also be applied in those domains. We also proposed a BO search strategy to search efficiently with a tailored kernel design to handle the hierarchical nature of the search space (i.e., the architectures). H IMPLEMENTATION DETAILS OF THE SEARCH STRATEGIES BANAT & BANAT (WL) The only difference between BANAT and BANAT (WL) is that the former uses our proposed hierarchy of WL kernels (hWL), whereas the latter only uses a single WL kernel (WL) for the entire architecture (c.f., (Ru et al., 2021)). We ran BANAT asynchronously in parallel throughout our experiments with a batch size of B = 1, i.e., at each BO iteration a single architecture is proposed for evaluation. For the acquisition function optimization, we used a pool size of P = 200, where the initial population consisted of the current ten best-performing architectures and the remainder were randomly sampled architectures to encourage exploration in the huge search spaces. During evolution, the mutation probability was set to pmut = 0.5 and crossover probability was set to pcross = 0.5. From the crossovers, half of them were self-crossovers of one parent and the other half were common crossovers between two parents. The tournament selection probability was set to ptour = 0.2. We evolved the population at least for ten iterations and a maximum of 50 iterations using a early stopping criterion based on the fitness value improvements over the last five iterations. 
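To illustrate the mutation and (self-)crossover operations that BANAT applies during acquisition function optimization (see Appendix F.3 for the formal examples), here is a minimal, self-contained sketch. The tree encoding of terms, the helper names, and the sampling probabilities are our own illustrative assumptions; in the toy grammar of Equation 21 every subterm has the same start symbol S, so any two subterms may be swapped.

```python
import random

# Toy grammar as in Equation (21): S ::= Linear(S,S,S) | Residual(S,S,S) | conv | id | fc.
TOPOLOGICAL = {"Linear": 3, "Residual": 3}   # operator name -> number of arguments
PRIMITIVES = ["conv", "id", "fc"]

def sample(depth=3, rng=random):
    """Sample a term (nested tuples for operators, strings for primitives)."""
    if depth <= 1 or (depth < 3 and rng.random() < 0.5):
        return rng.choice(PRIMITIVES)
    op = rng.choice(list(TOPOLOGICAL))
    return (op, [sample(depth - 1, rng) for _ in range(TOPOLOGICAL[op])])

def paths(term, prefix=()):
    """Positions of all subterms (including the root) as index paths."""
    yield prefix
    if isinstance(term, tuple):
        for i, child in enumerate(term[1]):
            yield from paths(child, prefix + (i,))

def get(term, path):
    for i in path:
        term = term[1][i]
    return term

def replace(term, path, new):
    if not path:
        return new
    op, children = term
    children = list(children)
    children[path[0]] = replace(children[path[0]], path[1:], new)
    return (op, children)

def mutate(term, rng=random):
    """Replace a random subterm by a freshly sampled term with start symbol S."""
    position = rng.choice(list(paths(term)))
    return replace(term, position, sample(depth=2, rng=rng))

def self_crossover(term, rng=random):
    """Swap two disjoint subterms within a single term (one parent)."""
    positions = [p for p in paths(term) if p]        # exclude the root
    if len(positions) < 2:
        return term
    p1, p2 = rng.sample(positions, 2)
    if p1[:len(p2)] == p2 or p2[:len(p1)] == p1:     # nested positions: retry in practice
        return term
    s1, s2 = get(term, p1), get(term, p2)
    return replace(replace(term, p1, s2), p2, s1)

def to_string(term):
    if isinstance(term, str):
        return term
    return f"{term[0]}({', '.join(to_string(c) for c in term[1])})"

random.seed(1)
term = sample()
print(to_string(term))
print(to_string(mutate(term)))
print(to_string(self_crossover(term)))
```

The common two-parent crossover works analogously by swapping subterms between two different terms; in search spaces with several nonterminals, both operations would additionally have to match the nonterminals of the swapped subterms, as described in Section 3.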
Regularized Evolution (RE) RE (Real et al., 2019; Liu et al., 2018) iteratively mutates the best architectures out of a sample of the population. We reduced the population size from 50 to 30 to account for fewer evaluations, and used a sample size of 10. We also ran RE asynchronously for better comparability. I SEARCHING THE HIERARCHICAL NAS-BENCH-201 SEARCH SPACE In this section, we provide training details (Section I.1) and provide complementary results as well as conduct extensive analyses (Section I.2). 2Note that the tree-structured search space can equivalently be described with a CFG (with a constraint on the number of maximum depth of the syntax trees). I.1 TRAINING DETAILS Training protocol We evaluated all search strategies on CIFAR-10/100 (Krizhevsky et al., 2009), ImageNet-16-120 (Chrabaszcz et al., 2017), CIFARTile, and AddNIST (Geada et al., 2021). Note that CIFARTile and AddNIST are novel datasets and therefore have not yet been optimized by the research community. We provide further dataset details below. For training of architectures on CIFAR-10/100 and ImageNet-16-120, we followed Dong & Yang (2020). We trained architectures with SGD with learning rate of 0.1, Nesterov momentum of 0.9, weight decay of 0.0005 with cosine annealing (Loshchilov & Hutter, 2019), and batch size of 256 for 200 epochs. The initial channels were set to 16. For both CIFAR-10 and CIFAR-100, we used random flip with probability 0.5 followed by a random crop (32x32 with 4 pixel padding) and normalization. For ImageNet-16120, we used a 16x16 random crop with 2 pixel padding instead. For training of architectures on AddNIST and CIFARTile, we followed the training protocol from the CVPR-NAS 2021 competition (Geada et al., 2021): We trained architectures with SGD with learning rate of 0.01, momentum of 0.9, and weight decay of 0.0003 with cosine annealing, and batch size of 64 for 64 epochs. We set the initial channels to 16 and did not apply any further data augmentation. Dataset details In Table 4, we provide the licenses for the datasets used in our experiments. For training of architectures on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-16-120 (Chrabaszcz et al., 2017), we followed the dataset splits and training protocol of NAS-Bench-201 (Dong & Yang, 2020). For CIFAR-10, we split the original training set into a new training set with 25k images and validation set with 25k images following Dong & Yang (2020). The test set remained unchanged. For evaluation, we trained architectures on both the training and validation set. For CIFAR-100, the training set remained unchanged, but the test set was partitioned in a validation set and new test set with each 5K images. For ImageNet-16-120, all splits remained unchanged. For AddNIST and CIFARTile, we used the training, validation, and test splits as defined in the CVPR-NAS 2021 competition (Geada et al., 2021). I.2 EXTENDED SEARCH RESULTS AND ANALYSES Supplementary to Figure 2, Figure 8 compares the cell-based vs. hierarchical NAS-Bench-201 search space from Section 6.1 using RS, RE, and BANAT (WL). The cell-based search space design shows on par or stronger performance on all datasets except for CIFARTile for the three search strategies. In contrast, for our proposed search strategy BANAT we find on par (CIFAR-10/100) or superior (ImageNet-16-120, CIFARTile, and AddNIST) performance using the hierarchical search space design. 
This clearly shows that increasing the search space does not necessarily yield the discovery of stronger neural architectures. Further, it exemplifies the importance of a strong search strategy to search effectively and efficiently in huge hierarchical search spaces (Q2), and provides further evidence that the incorporation of hierarchical information is a key contributor to search efficiency (Q3). Based on this, we believe that future work using, e.g., graph neural networks as a surrogate, may benefit from the incorporation of hierarchical information. We report the test errors of our best found architectures in Table 5. We observe that our search strategy BANAT finds the strongest-performing architectures across all datasets (Q2, Q3). Also note that we achieve better (validation and) test performance on ImageNet-16-120 on the hierarchical search space than the state-of-the-art search strategy on the cell-based NAS-Bench-201 search space (i.e., +0.37%p compared to Shapley-NAS (Xiao et al., 2022)) (Q1).

[Figure 8: Validation error (Val error [%]) over the number of evaluations on the hierarchical vs. cell-based NAS-Bench-201 search space for CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST; panels: (a) Random Search (RS), (b) Regularized Evolution (RE), and BANAT (WL).]

[Table 5: Test errors (and ±1 standard error) of popular baseline architectures (e.g., ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019) variants), and our best found architectures on the cell-based and hierarchical NAS-Bench-201 search space for CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST. Note that we picked the ResNet and EfficientNet variant based on the test error, consequently giving an overestimate of their test performance. † optimal numbers as reported in Dong & Yang (2020). We report the (best) test error (and ±1 standard error) across the three seeds {777, 888, 999} of the best architecture of the three search runs with the lowest validation error. Rows cover the best ResNet and EfficientNet variants, the NAS-Bench-201 oracle†, RS, NASWOT (Mellor et al., 2021) with N ∈ {10, 100, 1000, 10000}, RE (Real et al., 2019; Liu et al., 2018), and BANAT (WL) (Ru et al., 2021).]
1. What is the main contribution of the paper regarding CFG-based approach in architecture search spaces? 2. What are the strengths and weaknesses of the proposed approach, particularly in its expressivity, BO-based schemes, and experimental results? 3. Do you have any concerns or questions regarding the paper's claims, assumptions, and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper introduces a CFG-based approach for specifying architecture search spaces. They demonstrate the expressivity of their approach, propose a BO-based schemes for searching it while handling the massive hierarchical search space, and evaluate by searching for architectures on several vision datasets. Strengths And Weaknesses Strengths: The CFG-based approach for specifying large search spaces seems fairly novel and interesting, in-particular allowing for very large search spaces while dealing with issues of dimensionality and other constraints. The authors provide code and furthermore make an effort to ensure impact via both PyTorch and Tensorflow APIs. The presentation of the experimental section is very clear and directly lays out what questions are being answered. Empirically, the results show that the larger search space can be fruitfully searched for stronger architectures than are contained in the NB201 search space, at least on ImageNet-16-120 and the two recently introduced tasks. Weaknesses: The “from Scratch” nature of the work may be somewhat overstated, as the user must still specify primitives such as convolutions being used. The authors motivate the work by noting the Transformer was not discovered by NAS, but is there any hope that a search space generated along these lines would encode its crucial attention mechanism while not being so vast as to be unsearchable? Notably, there has been recent work on “from Scratch” AutoML (Real et al., 2020) and NAS (Roberts et al., 2021) that do aim for such generality. It is not entirely clear that the CFG formalism is crucial for defining and constraining search spaces. It could be useful to compare to other AutoML search space definitions, e.g. the domain-specific language for optimizer search of Bello et al. (2017). The related work noted in the two points above is missing. The methods sections of the paper are difficult to follow without either familiarity with CFGs (not safe to assume for ICLR). There is a lot of relegation of detail to the appendix, both for the search space design and for the search methods. The experimental section could benefit from demonstrations with search spaces beyond NB201, and on tasks beyond computer vision, especially since as the authors note the algorithms for vision have already been highly optimized. Some important comparisons/ablations are missing, such as whether FLOPs are also comparable to NB201, and whether it is important to have multiple kinds of activation function in the search space, or if performance is the same if only the one used by NB201 is allowed? Questions: Is “any neural architecture can be represented algebraically” a formal claim? If yes where is the definition of a neural architecture and proof of the result? Does this fact hold e.g. for recurrent nets? Equation 4: presumably f also includes some fixed training procedure for all networks? Why is “our grammar-based mechanism does not (generally) support simple scalability of discovered neural architectures (e.g., repetition of building blocks)” true given the use of the CL to capture NB-201? References: Bello, Zoph, Vasudevan, Le. Neural optimizer search with reinforcement learning. ICML 2017. Real, Liang, So, Le. AutoML-Zero: Evolving machine learning algorithms from scratch. ICML 2020. Roberts, Khodak, Dao, Li, Re, Talwalkar. Rethinking neural operations for diverse tasks. NeurIPS 2021. Clarity, Quality, Novelty And Reproducibility Clarity: difficult to follow, apart from the experimental section. 
Quality: the work is methodologically interesting, but some experimental justifications are missing. Novelty: the work is novel but missing comparisons on related work on from-scratch AutoML, as listed above. Reproducibility: good.
ICLR
Title Towards Discovering Neural Architectures from Scratch Abstract The discovery of neural architectures from scratch is the long-standing goal of Neural Architecture Search (NAS). Searching over a wide spectrum of neural architectures can facilitate the discovery of previously unconsidered but wellperforming architectures. In this work, we take a large step towards discovering neural architectures from scratch by expressing architectures algebraically. This algebraic view leads to a more general method for designing search spaces, which allows us to compactly represent search spaces that are 100s of orders of magnitude larger than common spaces from the literature. Further, we propose a Bayesian Optimization strategy to efficiently search over such huge spaces, and demonstrate empirically that both our search space design and our search strategy can be superior to existing baselines. We open source our algebraic NAS approach and provide APIs for PyTorch and TensorFlow. 1 INTRODUCTION Neural Architecture Search (NAS), a field with over 1 000 papers in the last two years (Deng & Lindauer, 2022), is widely touted to automatically discover novel, well-performing architectural patterns. However, while state-of-the-art performance has already been demonstrated in hundreds of NAS papers (prominently, e.g., (Tan & Le, 2019; 2021; Liu et al., 2019a)), success in automatically finding truly novel architectural patterns has been very scarce (Ramachandran et al., 2017; Liu et al., 2020). For example, novel architectures, such as transformers (Vaswani et al., 2017; Dosovitskiy et al., 2021) have been crafted manually and were not found by NAS. There is an accumulating amount of evidence that over-engineered, restrictive search spaces (e.g., cell-based ones) are major impediments for NAS to discover truly novel architectures. Yang et al. (2020b) showed that in the DARTS search space (Liu et al., 2019b) the manually-defined macro architecture is more important than the searched cells, while Xie et al. (2019) and Ru et al. (2020) achieved competitive performance with randomly wired neural architectures that do not adhere to common search space limitations. As a result, there are increasing efforts to break these impediments, and the discovery of novel neural architectures has been referred to as the holy grail of NAS. Hierarchical search spaces are a promising step towards this holy grail. In an initial work, Liu et al. (2018) proposed a hierarchical cell, which is shared across a fixed macro architecture, imitating the compositional neural architecture design pattern widely used by human experts. However, subsequent works showed the importance of both layer diversity (Tan & Le, 2019) and macro architecture (Xie et al., 2019; Ru et al., 2020). In this work, we introduce a general formalism for the representation of hierarchical search spaces, allowing both for layer diversity and a flexible macro architecture. The key observation is that any neural architecture can be represented algebraically; e.g., two residual blocks followed by a fullyconnected layer in a linear macro topology can be represented as the algebraic term ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) . (1) We build upon this observation and employ Context-Free Grammars (CFGs) to construct large spaces of such algebraic architecture terms. 
Although a particular search space is of course limited in its overall expressiveness, with this approach, we could effectively represent any neural architecture, facilitating the discovery of truly novel ones. Due to the hierarchical structure of algebraic terms, the number of candidate neural architectures scales exponentially with the number of hierarchical levels, leading to search spaces 100s of orders of magnitudes larger than commonly used ones. To search in these huge spaces, we propose an efficient search strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), which leverages hierarchical information, capturing the topological patterns across the hierarchical levels, in its tailored kernel design. Our contributions are as follows: • We present a novel technique to construct hierarchical NAS spaces based on an algebraic notion views neural architectures as algebraic architecture terms and CFGs to create algebraic search spaces (Section 2). • We propose BANAT, a Bayesian Optimization (BO) strategy that uses a tailored modeling strategy to efficiently and effectively search over our huge search spaces (Section 3). • After surveying related work (Section 4), we empirically show that search spaces of algebraic architecture terms perform on par or better than common cell-based spaces on different datasets, show the superiority of BANAT over common baselines, demonstrate the importance of incorporating hierarchical information in the modeling, and show that we can find novel architectural parts from basic mathematical operations (Section 5). We open source our code and provide APIs for PyTorch (Paszke et al., 2019) and TensorFlow (Abadi et al., 2015) at https://anonymous.4open.science/r/iclr23_tdnafs. 2 ALGEBRAIC NEURAL ARCHITECTURE SEARCH SPACE CONSTRUCTION In this section we present an algebraic view on Neural Architecture Search (NAS) (Section 2.1) and propose a construction mechanism based on Context-Free Grammars (CFGs) (Section 2.2 and 2.3). 2.1 ALGEBRAIC ARCHITECTURE TERMS FOR NEURAL ARCHITECTURE SEARCH We introduce algebraic architecture terms as a string representation for neural architectures from a (term) algebra. Formally, an algebra (A,F) consists of a non-empty set A (universe) and a set of operators f : An → A ∈ F of different arities n ≥ 0 (Birkhoff, 1935). In our case, A corresponds to the set of all (sub-)architectures and we distinguish between two types of operators: (i) nullary operators representing primitive computations (e.g., conv() or fc()) and (ii) k-ary operators with k > 0 representing topological operators (e.g., Linear(·, ·, ·) or Residual(·, ·, ·)). For sake of notational simplicity, we omit parenthesis for nullary operators (i.e., we write conv). Term algebras (Baader & Nipkow, 1999) are a special type of algebra mapping an algebraic expression to its string representation. E.g., we can represent a neural architecture as the algebraic architecture term ω as shown in Equation 1. Term algebras also allow for variables xi that are set to terms themselves that can be re-used across a term. In our case, the intermediate variables xi can therefore share patterns across the architecture, e.g., a shared cell. For example, we could define the intermediate variable x1 to map to the residual block in ω from Equation 1 as follows: ω′ = Linear(x1, x1, fc), x1 = Residual(conv, id, conv) . 
(2) Algebraic NAS We formulate our algebraic view on NAS, where we search over algebraic architecture terms ω ∈ Ω representing their associated architectures Φ(ω), as follows: argmin ω∈Ω f(Φ(ω)) , (3) where f(·) is an error measure that we seek to minimize, e.g., final validation error of a fixed training protocol. For example, we can represent the popular cell-based NAS-Bench-201 search space(Dong & Yang, 2020) as algebraic search space Ω. The algebraic search space Ω is characterized by a fixed macro architecture Macro(. . .) that stacks 15 instances of a shared cell Cell(pi,pi,pi,pi,pi,pi), where the cell has six edges, on each of which one of five primitive computations can be placed (i.e., pi for i ∈ {1, 2, 3, 4, 5} corresponding to zero, id, conv1x1, conv3x3, or avg pool, respectively). By leveraging the intermediate variable x1 we can effectively share the cell topology across the architecture. For example, we can express an architecture ωi ∈ Ω from the NAS-Bench-201 search space Ω as: ωi = Macro(x1, x1, ..., x1︸ ︷︷ ︸ 15× ), x1 = Cell(p1,p2,p1,p5,p4,p3) . (4) Algebraic NAS over such algebraic architecture terms then amounts to finding the best-performing primitive computation pi for each edge, as the macro architecture is fixed. In contrast to this simple cell-based algebraic space, the search spaces we consider can be much more expressive and, e.g., allow for layer diversity and a flexible macro architecture over several hierarchical levels (Section 5.1). 2.2 CONSTRUCTING NEURAL ARCHITECTURE TERMS WITH CONTEXT-FREE GRAMMARS We propose to use Context-Free Grammars (CFGs) (Chomsky, 1956) since they can naturally generate (hierarchical) algebraic architecture terms. Compared to other search space designs, CFGs give us a formally grounded way to naturally and compactly define very expressive hierarchical search spaces (e.g., see Section 5.1). We can also unify popular search spaces from the literature with our general search space design in one framework (Appendix E). They give us further a simple mechanism to evolve architectures while staying within the defined search space (Section 3). Formally, a CFG G = ⟨N,Σ, P, S⟩ consists of a finite set of nonterminals N and terminals Σ with N ∩Σ = ∅, a finite set of production rules P = {A→ β|A ∈ N, β ∈ (N ∪Σ)∗}, where the asterisk ∗ denotes the Kleene star operation (Kleene et al., 1956), and a start symbol S ∈ N . To generate an algebraic architecture term, starting from the start symbol S, we recursively replace nonterminals of the current algebraic term with a right-hand side of a production rule consisting of nonterminals and terminals, until the resulting string does not contain any nonterminals. For example, consider the following CFG in extended Backus-Naur form (Backus, 1959) (see Appendix B for background): S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc (5) From this CFG, we can derive the algebraic architecture term ω (with three hierarchical levels) from Equation 1 as follows: S→ Linear(S, S, S) Level 1 → Linear(Residual(S, S, S), Residual(S, S, S), fc) Level 2 (6) → Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) Level 3 Figure 1 makes the above derivation and the connection to the associated architecture explicit. The set of all (potentially infinite) algebraic terms generated by a CFG G is the language L(G), which naturally forms our search space Ω. Thus, the algebraic NAS problem from Equation 3 becomes: argmin ω∈L(G) f(Φ(ω)) . 
(7) 2.3 EXTENSIONS TO THE CONSTRUCTION MECHANISM Constraints In many search space designs, we want to adhere to some constraints, e.g., to limit the number of nodes or to ensure that for all architectures in the search space there exists at least one path from the input to the output. We can simply do so by allowing only the application of production rules which guarantee compliance to such constraints. For example, to ensure that there is at least one path from the input to the output, it is sufficient to ensure that each derivation connects its input to the output due to the recursive nature of CFGs. Note that this makes CFGs context-sensitive w.r.t. those constraints. For more details, please refer to Appendix D. Fostering regularity through substitution To implement intermediate variables xi (Section 2.1) we leverage that context-free languages are closed under substitution: we map terminals, representing the intermediate variables xi, from one language to algebraic terms of other languages, e.g., a shared cell. For example, we can split a CFG G, constructing entire algebraic architecture terms, into the CFGs Gmacro and Gcell for the macro- or cell-level, respectively. Further, we add a single (or multiple) intermediate terminal(s) x1 to Gmacro which maps to an algebraic term ω1 ∈ L(Gcell), e.g., the searchable cell. Thus, we effectively search over the macro-level as well as a single, shared cell. Note that by using a fixed macro architecture (i.e., |L(Gmacro)| = 1), we can represent cell-based search spaces, e.g., NAS-Bench-201 (Dong & Yang, 2020), while also being able to represent more expressive search spaces (e.g., see Section 5.1). More generally, we could extend this by adding further intermediate terminals which map to other languages L(Gj), or by adding intermediate terminals to G2 which map to languages L(Gj ̸=1). In this way, we can effectively foster regularity. Representing common architecture patterns for object recognition Neural architectures for object recognition commonly build a hierarchy of features that are gradually downsampled, e.g., by pooling operations. However, previous works in NAS were either limited to a fixed macro architecture (Zoph et al., 2018), only allowed for linear macro architectures (Liu et al., 2019a), or required post-sampling testing for resolution mismatches (Stanley & Miikkulainen, 2002; Ru et al., 2020). While this produced impressive performance on popular benchmarks (Tan & Le, 2019; 2021; Liu et al., 2019a), it is an open research question whether a different type of macro architecture (e.g., one with multiple branches) could yield even better performance. To accommodate flexible macro architectures, we propose to overload the nonterminals. In particular, the nonterminals indicate how often we apply downsampling operations in the subsequent derivations of the nonterminal. Consider the production rule D2 → Residual(D1, D2, D1), where Di with i ∈ {1, 2} are a nonterminals which indicate that i downsampling operations have to be applied in their subsequent derivations. That is, in both paths of the residual the input features will be downsampled twice and, consequently, the merging paths will have the same spatial resolution. Thereby, this mechanism distributes the downsampling operations recursively across the architecture. For the channels, we adopted the common design to double the number of channels whenever we halve the spatial resolution in our experiments. 
Note that we could also handle a varying number of channels by using, e.g., depthwise concatenation as the merge operation. 3 BAYESIAN OPTIMIZATION FOR ALGEBRAIC NEURAL ARCHITECTURE SEARCH We propose a BO strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), to efficiently search in the huge search spaces spanned by our algebraic architecture terms: we introduce a novel surrogate model which combines a Gaussian Process (GP) surrogate with a tailored kernel that leverages the hierarchical structure of algebraic neural architecture terms (see below), and adopt expected improvement as the acquisition function (Mockus et al., 1978). Given the discrete nature of architectures, we adopt ideas from grammar-guided genetic programming (McKay et al., 2010; Moss et al., 2020) for acquisition function optimization. Furthermore, to reduce wallclock time by leveraging parallel computing resources, we adapt the Kriging Believer (Ginsbourger et al., 2010) to select architectures at every search iteration so that we can train and evaluate them in parallel. Specifically, Kriging Believer assigns hallucinated values (i.e., the posterior mean) to pending evaluations at each iteration to avoid redundant evaluations. For a more detailed explanation of BANAT, please refer to Appendix F. Hierarchical Weisfeiler-Lehman kernel (hWL) Inspired by the state-of-the-art BO approach for NAS (Ru et al., 2021), we adopt the WL graph kernel (Shervashidze et al., 2011) in a GP surrogate, modeling performance of the algebraic architecture terms ω_i via the associated architectures Φ(ω_i). However, modeling solely based on the final architecture ignores the useful hierarchical information inherent in our algebraic representation. Moreover, the large size of the architectures also makes it difficult for a single WL kernel to capture the more global topological patterns. Since our hierarchical construction can be viewed as a series of gradually unfolding architectures, with the final architecture containing only primitive computations, we propose a novel hierarchical kernel design that assigns a WL kernel to each hierarchical level and combines them in a weighted sum. To this end, we introduce fold operators F_l that remove algebraic terms beyond the l-th hierarchical level. For example, the fold operators F1, F2 and F3 yield for the algebraic term ω (Equation 1): F3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc), F2(ω) = Linear(Residual, Residual, fc), F1(ω) = Linear . (8) Note the similarity to the derivations in Figure 1. Furthermore note that, in practice, we also add the corresponding nonterminals to integrate information from our hierarchical construction process. We define our hierarchical WL kernel (hWL) for two architectures Φ(ω_i) and Φ(ω_j) with algebraic architecture terms ω_i and ω_j, respectively, constructed over a hierarchy of L levels, as follows: k_hWL(ω_i, ω_j) = Σ_{l=2}^{L} λ_l · k_WL(Φ(F_l(ω_i)), Φ(F_l(ω_j))) , (9) where the weights λ_l govern the importance of the learned graph information at different hierarchical levels (granularities of the architecture) and can be tuned (along with other hyperparameters of the GP) by maximizing the marginal likelihood. We omit l = 1 in the additive kernel as F1(ω) does not contain any edge features, which are required for our WL kernel k_WL. For more details on our novel hierarchical kernel design, please refer to Appendix F.2.
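The following is a minimal, self-contained sketch (our own dict-based graph encoding, not the released implementation) of the hierarchical WL kernel in Equation 9: a plain WL subtree feature count is computed per fold level and the per-level kernels are combined in a weighted sum. In practice the graphs would be the computational graphs Φ(F_l(ω)) with nonterminal labels added, and the weights λ_l would be tuned via the GP marginal likelihood rather than fixed by hand.

```python
from collections import Counter

def wl_features(labels, in_neighbors, iterations=2):
    """WL subtree features: counts of original and iteratively relabeled node labels."""
    feats = Counter(labels.values())
    current = dict(labels)
    for _ in range(iterations):
        relabeled = {}
        for v, lab in current.items():
            neigh = sorted(current[u] for u in in_neighbors.get(v, ()))
            relabeled[v] = lab + "|" + ",".join(neigh)  # compressed multiset label
        current = relabeled
        feats.update(current.values())
    return feats

def wl_kernel(graph_a, graph_b, iterations=2):
    """WL kernel as the inner product of the two graphs' WL feature histograms."""
    fa = wl_features(*graph_a, iterations=iterations)
    fb = wl_features(*graph_b, iterations=iterations)
    return sum(fa[k] * fb[k] for k in fa.keys() & fb.keys())

def hwl_kernel(folds_a, folds_b, weights):
    """Hierarchical WL kernel (Equation 9): weighted sum over fold levels l = 2, ..., L."""
    return sum(w * wl_kernel(ga, gb) for w, ga, gb in zip(weights, folds_a, folds_b))

# Toy graphs for two folds of ω: each graph is (node -> label, node -> list of in-neighbors).
f2 = ({0: "in", 1: "Residual", 2: "Residual", 3: "fc", 4: "out"},
      {1: [0], 2: [1], 3: [2], 4: [3]})
f3 = ({0: "in", 1: "conv", 2: "id", 3: "conv", 4: "fc", 5: "out"},
      {1: [0], 2: [0], 3: [2], 4: [1], 5: [4]})  # small stand-in, not the exact Φ(F3(ω))
print(hwl_kernel([f2, f3], [f2, f3], weights=[0.5, 0.5]))  # kernel of an architecture with itself
```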
Our proposed kernel efficiently captures the information in all algebraic term construction levels, which substantially improves its search and surrogate regression performance on our search space as demonstrated in Section 5. Acquisition function optimization To optimize the acquisition function, we adopt ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). For mutation, we randomly replace a sub-architecture term with a new randomly generated term, using the same nonterminal as start symbol. For crossover, we randomly swap two sub-architecture terms with the same corresponding nonterminal. We consider two crossover operators: a novel self-crossover operation swaps two sub-terms of a single architecture term, and the common crossover operation swaps subterms of two different architecture terms. Importantly, all evolutionary operations by design only result in valid terms. We provide examples for the evolutionary operations in Appendix F. 4 RELATED WORK We discuss related works in NAS below and discuss works beyond NAS in Appendix G. Neural Architecture Search Neural Architecture Search (NAS) aims to automatically discover architectural patterns (or even entire architectures) (Elsken et al., 2019). Previous approaches, e.g., used reinforcement learning (Zoph & Le, 2017; Pham et al., 2018), evolution (Real et al., 2017), gradient descent (Liu et al., 2019b), or Bayesian Optimization (BO) (Kandasamy et al., 2018; White et al., 2021; Ru et al., 2021). To enable the effective use of BO on graph-like inputs for NAS, previous works have proposed to use a GP with specialized kernels (Kandasamy et al., 2018; Ru et al., 2021), encoding schemes (Ying et al., 2019; White et al., 2021), or graph neural networks as surrogate model (Ma et al., 2019; Shi et al., 2020; Zhang et al., 2019). Different to prior works, we explicitly leverage the hierarchical construction of architectures for modeling. Searching for novel architectural patterns Previous works mostly focused on finding a shared cell (Zoph et al., 2018) with a fixed macro architecture while only few works considered more expressive hierarchical search spaces (Liu et al., 2018; 2019a; Tan et al., 2019). The latter works considered hierarchical assembly (Liu et al., 2018), combination of a cell- and network-level search space (Liu et al., 2019a; Zhang et al., 2020), evolution of network topologies (Miikkulainen et al., 2019), factorization of the search space (Tan et al., 2019), parameterization of a hierarchy of random graph generators (Ru et al., 2020), a formal language over computational graphs (Negrinho et al., 2019), or a hierarchical construction of TensorFlow programs (So et al., 2021). Similarly, our formalism allows to design search spaces covering a general set of architecture design choices, but also permits the search for macro architectures with spatial resolution changes and multiple branches. We also handle spatial resolution changes without requiring post-hoc testing or resizing of the feature maps unlike prior works (Stanley & Miikkulainen, 2002; Miikkulainen et al., 2019; Stanley et al., 2019). 
Other works proposed approaches based on string rewriting systems (Kitano, 1990; Boers et al., 1993), cellular (or tree-structured) encoding schemes (Gruau, 1994; Luke & Spector, 1996; De Jong & Pollack, 2001; Cai et al., 2018), hyperedge replacement graph grammars Luerssen & Powers (2003); Luerssen (2005), attribute grammars (Mouret & Doncieux, 2008), CFGs (Jacob & Rehder, 1993; Couchet et al., 2007; Ahmadizar et al., 2015; Ahmad et al., 2019; Assunção et al., 2017; 2019; Lima et al., 2019; de la Fuente Castillo et al., 2020), or And-Or-grammars (Li et al., 2019). Different to these prior works, we construct entire architectures with spatial resolution changes across multiple branches, and propose techniques to incorporate constraints and foster regularity. Orthogonal to the aforementioned approaches, Roberts et al. (2021) searched over neural (XD-)operations, which is orthogonal to our approach, i.e., our predefined primitive computations could be replaced by their proposed XD-operations. 5 EXPERIMENTS In this section, we investigate potential benefits of hierarchical search spaces and our search strategy BANAT. More specifically, we address the following questions: Q1 Can hierarchical search spaces yield on par or superior architectures compared to cell-based search spaces with a limited number of evaluations? Q2 Can our search strategy BANAT improve performance over common baselines? Q3 Does leveraging the hierarchical information improve performance? Q4 Do zero-cost proxies work in vast hierarchical search spaces? Q5 Can we discover novel architectural patterns (e.g., activation functions)? To answer questions Q1-Q4, we introduce a hierarchical search space based on the popular NASBench-201 search space (Dong & Yang, 2020) in Section 5.1. To answer question Q5, we search for activation functions (Ramachandran et al., 2017) and defer the search space definition to Appendix J.1. We provide complementary results and analyses in Appendix I.2 and J.3. 5.1 HIERARCHICAL NAS-BENCH-201 We propose a hierarchical variant of the popular cell-based NAS-Bench-201 search space (Dong & Yang, 2020) by adding a hierarchical macro space (i.e., spatial resolution flow and wiring at the macro-level) and parameterizable convolutional blocks (i.e., choice of convolutions, activations, and normalizations). We express the hierarchical NAS-Bench-201 search space with CFG Gh as follows: D2 ::= Linear3(D1, D1, D0) | Linear3(D0, D1, D1) | Linear4(D1, D1, D0, D0) D1 ::= Linear3(C, C, D) | Linear4(C, C, C, D) | Residual3(C, C, D, D) D0 ::= Linear3(C, C, CL) | Linear4(C, C, C, CL) | Residual3(C, C, CL, CL) D ::= Linear2(CL, down) | Linear3(CL, CL, down) | Residual2(C, down, down) C ::= Linear2(CL, CL) | Linear3(CL, CL) | Residual2(CL, CL, CL) CL ::= Cell(OP, OP, OP, OP, OP, OP) OP ::= zero | id | BLOCK | avg pool BLOCK ::= Linear3(ACT, CONV, NORM) ACT ::= relu | hardswish | mish CONV ::= conv1x1 | conv3x3 | dconv3x3 NORM ::= batch | instance | layer . (10) See Appendix A for the terminal vocabulary of topological operators and primitive computations. The productions with the nonterminals {D2, D1, D0, D} define the spatial resolution flow and together with {C} define the macro architecture containing possibly multiple branches. The productions for {CL, OP} construct the NAS-Bench-201 cell and {BLOCK, ACT, CONV, NORM} parameterize the convolutional block. 
To ensure that we use the same distribution over the primitive computations as in NAS-Bench-201, we reweight the sampling probabilities of the productions generated by the nonterminal OP, i.e., each production choice has a sampling probability of 20%, except for BLOCK, which has 40%. Note that we omit the stem (i.e., 3x3 convolution followed by batch normalization) and classifier (i.e., batch normalization followed by ReLU, global average pooling, and fully-connected layer) for simplicity. We implemented the merge operation as element-wise summation. Different to the cell-based NAS-Bench-201 search space, we exclude degenerate architectures by introducing a constraint that ensures that each subterm maps the input to the output (i.e., in the associated computational graph there is at least one path from source to sink). Our search space consists of ca. 10^446 algebraic architecture terms (please refer to Appendix C on how to compute the search space size), which is significantly larger than other popular search spaces from the literature. For comparison, the cell-based NAS-Bench-201 search space is just a minuscule subspace of size 10^4.18, where we apply only the blue-colored production rules and replace the CL nonterminals with a placeholder terminal x1 that will be substituted by the searched, shared cell. 5.2 EVALUATION DETAILS For all search experiments, we compared the search strategies BANAT, Random Search (RS), Regularized Evolution (RE) (Real et al., 2019; Liu et al., 2018), and BANAT (WL) (Ru et al., 2021). For implementation details of the search strategies, please refer to Appendix H. We ran search for a total of 100 evaluations with a random initial design of 10 on three seeds {777, 888, 999} on the hierarchical NAS-Bench-201 search space, or 1000 evaluations with a random initial design of 50 on one seed {777} on the activation function search space, using 8 asynchronous workers, each with a single NVIDIA RTX 2080 Ti GPU. In each evaluation, we fully trained the architectures and recorded their final validation error. For training details on the hierarchical NAS-Bench-201 search space and activation function search space, please refer to Appendix I.1 or Appendix J.2, respectively. To assess the modeling performance of our surrogate, we compared the regression performance of GPs with different kernels, i.e., our hierarchical WL kernel (hWL), the (standard) WL kernel (Ru et al., 2021), and NASBOT's kernel (Kandasamy et al., 2018). We also tried the GCN encoding (Shi et al., 2020), but it could not capture the mapping from the complex graph space to performance, resulting in constant performance predictions. Further, note that the adjacency encoding (Ying et al., 2019) and path encoding (White et al., 2021) cannot be used in our hierarchical search spaces, since the former requires the same number of nodes across graphs and the latter scales exponentially in the number of nodes. We ran 20 trials over the seeds {0, 1, ..., 19} and re-used the data from the search runs. In every trial, we sampled a training and test set of 700 or 500 architecture and validation error pairs, respectively. We fitted the surrogates with a varying number of training samples by randomly choosing samples from the training set without replacement, and recorded Kendall's τ rank correlation between the predicted and true validation error. To assess zero-cost proxies, we re-used the data from the search runs and recorded Kendall's τ rank correlation. 5.3 RESULTS In the following, we answer all of the questions Q1-Q5.
Figure 2 compares the results of the cell-based and hierarchical search space design using our search strategy BANAT. Results with BANAT are on par on CIFAR-10/100, superior on ImageNet-16-120, and clearly superior on CIFARTile and AddNIST (answering Q1). We emphasize that the NAS community has engineered the cell-based search space to achieve strong performance on those popular image classification datasets for over a decade, making it unsurprising that our improvements are much larger for the novel datasets. Yet, our best found architecture on ImageNet-16-120 from the hierarchical search space also achieves an excellent test error of 52.78% with only 0.626MB parameters (Appendix I.2); this is superior to the architecture found by the state-of-the-art method Shapley-NAS (i.e., 53.15%) (Xiao et al., 2022) and on par with the optimal architecture of the cell-based NAS-Bench-201 search space (i.e., 52.69% with 0.866MB). Figure 3 shows that our search strategy BANAT is also superior to common baselines (answering Q2) and that leveraging hierarchical information clearly improves performance (answering Q3). [Figure 2 plots validation error [%] over 100 evaluations on CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST for the hierarchical vs. cell-based search space. Figure 3: Comparison of search strategies on the hierarchical search space. We plot mean and ±1 standard error of the validation error on the hierarchical NAS-Bench-201 search space for our search strategy BANAT (solid blue), RS (dashed orange), RE (dotted green), and BANAT (WL) (dash-dotted red). We report test errors, best architectures, and conduct further analyses in Appendix I.2.] Further, the evaluation of surrogate performance in Figure 4 shows that incorporating hierarchical information with our hierarchical WL kernel (hWL) improves modeling, especially on smaller amounts of training data (further answering Q3). Table 1 shows that the baseline zero-cost proxies flops and l2-norm yield competitive (or often superior) results to more sophisticated zero-cost proxies, making hierarchical search spaces an interesting future research direction for them (answering Q4). Finally, Table 2 shows that we can find novel well-performing activation functions from basic mathematical operations with BANAT (answering Q5). 6 DISCUSSION AND LIMITATIONS While our grammar-based construction mechanism is a powerful way to construct huge hierarchical search spaces, we cannot construct arbitrary architectures with our grammar-based construction approach (Section 2.2 and 2.3) since we are limited to context-free languages; e.g., architectures of the type {a^n b^n c^n | n ∈ N_{>0}} cannot be generated by CFGs (this can be proven using Ogden's lemma (Ogden, 1968)). Further, due to the discrete nature of CFGs we cannot easily integrate continuous design choices, e.g., dropout probability. Furthermore, our grammar-based mechanism does not (generally) support simple scalability of discovered neural architectures (e.g., repetition of building blocks) without special consideration in the search space design.
Nevertheless, our search spaces still significantly increase the expressiveness, including the ability to represent common search spaces from the literature (see Appendix E for how we can represent the search spaces of DARTS, Auto-DeepLab, the hierarchical cell search space of Liu et al. (2018), the Mobile-net search space, and the hierarchical random graph generator search space), as well as allowing search for entire neural architectures based around the popular NAS-Bench-201 search space (Section 5). Thus, our search space design can facilitate the discovery of novel well-performing neural architectures in those huge search spaces of algebraic architecture terms. However, there is an inherent trade-off between the expressiveness and the difficulty of search. The much greater expressiveness facilitates search in a richer set of architectures that may include better architectures than more restrictive search spaces, although such architectures need not exist. Besides that, the (potential) existence of such a well-performing architecture does not guarantee that a search strategy will discover it, even with large amounts of computing power available. Note that the trade-off also manifests itself in the acquisition function optimization of our search strategy BANAT. In addition, a well-performing neural architecture may not work with current training protocols and hyperparameters due to interaction effects, i.e., training protocols and hyperparameters may be over-optimized for specific types of neural architectures. To overcome this limitation, one could consider a joint optimization of neural architectures, training protocols, and hyperparameters. However, this further fuels the trade-off between expressiveness and the difficulty of search. 7 CONCLUSION We introduced very expressive search spaces of algebraic architecture terms constructed with CFGs. To efficiently search over the huge search spaces, we proposed BANAT, an efficient BO strategy with a tailored kernel leveraging the available hierarchical information. Our experiments indicate that both our search space design and our search strategy can yield strong performance over existing baselines. Our results motivate further steps towards the discovery of neural architectures based on even more atomic primitive computations. Furthermore, future works could (simultaneously) learn the search space (i.e., learn the grammar) or improve search efficiency by means of multi-fidelity optimization or gradient-based search strategies. REPRODUCIBILITY STATEMENT To ensure reproducibility, we address all points of the best practices checklist for NAS research (Lindauer & Hutter, 2020) in Appendix K. ETHICS STATEMENT NAS has immense potential to facilitate the systematic, automated discovery of high-performing (novel) architecture designs. However, the restrictive cell-based search spaces most commonly used in NAS render it impossible to discover truly novel neural architectures. With our general formalism based on algebraic terms, we hope to provide a fertile foundation towards discovering high-performing and efficient architectures, potentially from scratch. However, search in such huge search spaces is expensive, particularly in the context of the ongoing detrimental climate crisis.
While, on the one hand, the discovered neural architectures, like other AI technologies, could potentially be exploited to have a negative societal impact, on the other hand, our work could also lead to advances across scientific disciplines like healthcare and chemistry. A FROM TERMINALS TO PRIMITIVE COMPUTATIONS AND TOPOLOGICAL OPERATORS Table 3 and Figure 5 describe the primitive computations and topological operators used throughout our experiments in Section 5 and Appendix I, respectively. Note that by adding more primitive computations and/or topological operators we could construct even more expressive search spaces. B EXTENDED BACKUS-NAUR FORM The (extended) Backus-Naur form (Backus, 1959) is a meta-language to describe the syntax of CFGs. We use meta-rules of the form S ::= α, where S ∈ N is a nonterminal and α ∈ (N ∪ Σ)* is a string of nonterminals and/or terminals. We denote nonterminals in UPPER CASE, terminals corresponding to topological operators in Initial upper case/teletype, and terminals corresponding to primitive computations in lower case/teletype, e.g., S ::= Residual(S, S, id). To compactly express production rules with the same left-hand side nonterminal, we use the vertical bar | to indicate a choice of production rules with the same left-hand side, e.g., S ::= Linear(S, S, S) | Residual(S, S, id) | conv. C SEARCH SPACE SIZE In this section, we show how to efficiently compute the size of our search spaces constructed by CFGs. There are two cases to consider: (i) a CFG contains cycles (i.e., part of the derivation can be repeated infinitely many times), yielding an open-ended, infinite search space; and (ii) a CFG contains no cycles, yielding a finite search space whose size we can compute. Consider a production A → Residual(B, B, B), where Residual is a terminal, and A and B are nonterminals with B → conv | id. Consequently, there are 2^3 = 8 possible instances of the residual block. If we add another production choice for the nonterminal A, e.g., A → Linear(B, B, B), we would have 2^3 + 2^3 = 16 possible instances. Further, adding a production C → Linear(A, A, A) would yield a search space size of (2^3 + 2^3)^3 = 4096. More generally, we introduce the function P_A that returns the set of productions for nonterminal A ∈ N, and the function µ : P → N that returns all the nonterminals of a production p ∈ P. We can then recursively compute the size of the search space as follows: f(A) = Σ_{p ∈ P_A} [ 1 if µ(p) = ∅, else Π_{A′ ∈ µ(p)} f(A′) ] . (11) When a CFG contains some constraint, we ensure to only account for valid architectures (i.e., compliant with the constraints) by ignoring productions which would lead to invalid architectures. D MORE DETAILS ON SEARCH SPACE CONSTRAINTS During the design of the search space, we may want to comply with some constraints, e.g., only consider valid neural architectures or impose structural constraints on architectures. We can guarantee compliance with constraints by modifying sampling (and evolution): we only allow the application of production rules that guarantee compliance with the constraint(s). In the following, we show by example how this can be implemented for the former constraint mentioned above. Note that other constraints can be implemented in a similar manner. To implement the constraint "only consider valid neural architectures", we note that our search space design only creates neural architectures where neither the spatial resolution nor the channels can be mismatched; please refer to Section 2.3 for details.
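Before moving on, the recursion in Equation 11 is easy to state in code; the following is a minimal sketch under our own illustrative encoding of an acyclic grammar (it reproduces the 2^3, 16, and 4096 counts from the example above and is not the released API).

```python
# Productions encoded as: nonterminal -> list of right-hand sides, where each right-hand side
# lists only the nonterminals it contains (terminals do not affect the count).
PRODUCTIONS = {
    "C": [["A", "A", "A"]],             # C -> Linear(A, A, A)
    "A": [["B", "B", "B"],              # A -> Residual(B, B, B)
          ["B", "B", "B"]],             # A -> Linear(B, B, B)
    "B": [[], []],                      # B -> conv | id (no nonterminals)
}

def search_space_size(nonterminal, productions=PRODUCTIONS):
    """Recursive search space size from Equation 11 (assumes an acyclic grammar)."""
    total = 0
    for rhs_nonterminals in productions[nonterminal]:
        count = 1
        for child in rhs_nonterminals:   # product over the nonterminals of the production
            count *= search_space_size(child, productions)
        total += count                   # sum over the production choices
    return total

print(search_space_size("B"))  # 2
print(search_space_size("A"))  # 2**3 + 2**3 = 16
print(search_space_size("C"))  # 16**3 = 4096
```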
Thus, the only way a neural architecture can become invalid is through zero operations, which could remove edges from the computational graph and possibly disassociate the input from the output. Since we recursively assemble neural architectures, it is sufficient to ensure that the derived algebraic architecture term (i.e., the associated computational graph) is compliant with the constraint, i.e.,there is at least one path from input to output. Thus, during sampling (and similarly during evolution), we modify the current production rule choices when an application of the zero operation would disassociate the input from the output. E COMMON SEARCH SPACES FROM THE LITERATURE In Section 5.1, we demonstrated how to construct the popular NAS-Bench-201 search space within our algebraic search space design, and below we show how to reconstruct the following popular search spaces: DARTS search space (Liu et al., 2019b), Auto-DeepLab search space (Liu et al., 2019a), hierarchical cell search space (Liu et al., 2018), Mobile-net search space (Tan et al., 2019), and hierarchical random graph generator search space (Ru et al., 2020). For implementation details we refer to the respective works. DARTS SEARCH SPACE The DARTS search space (Liu et al., 2019b) consists of a fixed macro architecture and a cell, i.e., a seven node directed acyclic graph (Darts; see Figure 6 for the topological operator). We omit the fixed macro architecture from our search space design for simplicity. Each cell receives the feature maps from the two preceding cells as input and outputs a single feature map. All intermediate nodes (i.e., Node3, Node4, Node5, and Node6) is computed based on all of its predecessors. Thus, we can define the DARTS search space as follows: DARTS ::= Darts(NODE3, NODE4, NODE5, NODE6) NODE3 ::= Node3(OP, OP) NODE4 ::= Node4(OP, OP, OP) NODE5 ::= Node5(OP, OP, OP, OP) NODE6 ::= Node6(OP, OP, OP, OP, OP) OP ::= sep conv 3x3 | sep conv 5x5 | dil conv 3x3 | dil conv 5x5 | max pool | avg pool | id | zero , (12) where the topological operator Node3 receives two inputs, applies the operations separately on them, and sums them up. Similarly, Node4, Node5, and Node6 apply their operations separately to the given inputs and sum them up. The topological operator Darts feeds the corresponding feature maps into each of those topological operators and finally concatenates all intermediate feature maps. AUTO-DEEPLAB SEARCH SPACE Auto-DeepLab (Liu et al., 2019a) combines a cell-level with a network-level search space to search for segmentation networks, where the cell is shared across the searched macro architecture, i.e., a twelve step (linear) path across different spatial resolutions. The cell-level design is adopted from Liu et al. (2019b) and, thus, we can re-use the CFG from Equation 12. For the network-level, we introduce a constraint that ensures that the path is of length twelve, i.e., we ensure exactly twelve derivations in our CFG. Further, we overload the nonterminals so that they correspond to the respective spatial resolution level, e.g., D4 indicates that the original input is downsampled by a factor of four; please refer to Section 2.3 for details on overloading nonterminals. 
For the sake of simplicity, we omit the first two layers and atrous spatial pyramid poolings as they are fixed, and hence define the network-level search space as follows: D4 ::= Same(CELL, D4) | Down(CELL, D8) D8 ::= Up(CELL, D4) | Same(CELL, D8) | Down(CELL, D16) D16 ::= Up(CELL, D8) | Same(CELL, D16) | Down(CELL, D32) D32 ::= Up(CELL, D16) | Same(CELL, D32) , (13) where the topological operators Up, Same, and Down upsample/halve, do not change/do not change, or downsample/double the spatial resolution/channels, respectively. The placeholder variable CELL maps to the shared DARTS cell from the language generated by the CFG from Equation 12. HIERARCHICAL CELL SEARCH SPACE The hierarchical cell search space (Liu et al., 2018) consists of a fixed (linear) macro architecture and a hierarchically assembled cell with three levels which is shared across the macro architecture. Thus, we can omit the fixed macro architecture from our search space design for simplicity. Their first, second, and third hierarchical levels correspond to the primitive computations (i.e., id, max pool, avg pool, sep conv, depth conv, conv, zero), six densely connected four node directed acyclic graphs (DAG4), and a densely connected five node directed acyclic graph (DAG5), respectively. The zero operation could lead to directed acyclic graphs which have fewer nodes. Therefore, we introduce a constraint enforcing that there are always four (level 2) or five (level 3) nodes for every directed acyclic graph. Further, since a densely connected five node directed acyclic graph graph has ten edges, we need to introduce placeholder variables (i.e., M1, ..., M6) to enforce that only six (possibly) different four node directed acyclic graphs are used, and consequently define a CFG for the third level LEVEL3 ::= DAG5(LEVEL2, ..., LEVEL2︸ ︷︷ ︸ ×10 ) LEVEL2 ::= M1 | M2 | M3 | M4 | M5 | M6 | zero , (14) mapping the placeholder variables M1, ..., M6 to the six lower-level motifs constructed by the first and second hierarchical level LEVEL2 ::= DAG4(LEVEL1, ..., LEVEL1)︸ ︷︷ ︸ ×6 LEVEL1 ::= id | max pool | avg pool | sep conv | depth conv | conv | zero . (15) MOBILE-NET SEARCH SPACE Factorized hierarchical search spaces, e.g., the Mobile-net search space (Tan et al., 2019), allow for layer diversity. They factorize a (fixed) macro architecture – often based on an already wellperforming reference architecture – into separate blocks (e.g., cells). For the sake of simplicity, we assume here a three sequential blocks (Block) architecture (Linear). In each of those blocks, we search for the convolution operations (CONV), kernel sizes (KSIZE), squeeze-and-excitation ratio (SERATIO) (Hu et al., 2018), skip connections (SKIP), number of output channels (FSIZE), and number of layers per block (#LAYERS), where the latter two are discretized using a reference architecture, e.g., MobileNetV2 (Sandler et al., 2018). Consequently, we can express this search space as follows: MACRO ::= Linear(BLOCK, BLOCK, BLOCK) BLOCK ::= Block(CONV, KSIZE, SERATIO, SKIP, FSIZE, #LAYERS) CONV ::= conv | dconv | mbconv KSIZE ::= 3 | 5 SERATIO ::= 0 | 0.25 SKIP ::= pooling | id residual | no skip FSIZE ::= 0.75 | 1.0 | 1.25 #LAYERS ::= -1 | 0 | 1 , (16) where conv, donv and mbconv correspond to convolution, depthwise convolution, and mobile inverted bottleneck convolution (Sandler et al., 2018), respectively. 
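To illustrate how such a factorized space fits the same grammar machinery, here is a small sketch (our own encoding; nonterminal and terminal spellings such as LAYERS and id_residual are adjusted to plain strings, and the expansion helper mirrors the earlier illustrative sampler rather than the released API) of the Mobile-net-like grammar from Equation 16 as a production table.

```python
import random

# Production table for the Mobile-net-like grammar of Equation 16 (our own encoding).
MOBILE_GRAMMAR = {
    "MACRO":   [("Linear", ["BLOCK", "BLOCK", "BLOCK"])],
    "BLOCK":   [("Block", ["CONV", "KSIZE", "SERATIO", "SKIP", "FSIZE", "LAYERS"])],
    "CONV":    [(t, []) for t in ("conv", "dconv", "mbconv")],
    "KSIZE":   [(t, []) for t in ("3", "5")],
    "SERATIO": [(t, []) for t in ("0", "0.25")],
    "SKIP":    [(t, []) for t in ("pooling", "id_residual", "no_skip")],
    "FSIZE":   [(t, []) for t in ("0.75", "1.0", "1.25")],
    "LAYERS":  [(t, []) for t in ("-1", "0", "1")],
}

def expand(symbol, grammar):
    """Expand a nonterminal into an algebraic architecture term (string)."""
    op, children = random.choice(grammar[symbol])
    if not children:
        return op
    return f"{op}({', '.join(expand(c, grammar) for c in children)})"

random.seed(0)
print(expand("MACRO", MOBILE_GRAMMAR))
# prints one sampled term, e.g., Linear(Block(mbconv, 5, 0, no_skip, 1.0, 1), Block(...), Block(...))
```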
HIERARCHICAL RANDOM GRAPH GENERATOR SEARCH SPACE The hierarchical random graph generator search space (Ru et al., 2020) consists of three hierarchical levels of random graph generators (i.e., Watts-Strogatz (Watts & Strogatz, 1998) and Erdős-Rényi (Erdős et al., 1960)). We denote with Watts-Strogatz_i the random graph generated by the Watts-Strogatz model with i nodes. Thus, we can represent the search space as follows: TOP ::= Watts-Strogatz_3(K, P_t)(MID, MID, MID) | ... | Watts-Strogatz_10(K, P_t)(MID, ..., MID) [10 arguments], MID ::= Erdős-Rényi_1(P_m)(BOT) | ... | Erdős-Rényi_10(P_m)(BOT, ..., BOT) [10 arguments], BOT ::= Watts-Strogatz_3(K, P_b)(NODE, NODE, NODE) | ... | Watts-Strogatz_10(K, P_b)(NODE, ..., NODE) [10 arguments], K ::= 2 | 3 | 4 | 5 , (17) where each terminal P_t, P_m, and P_b maps to a continuous number in [0.1, 0.9] (footnote 1) and the placeholder variable NODE maps to a primitive computation, e.g., separable convolution. Note that we omit other hyperparameters, such as stage ratio, channel ratio etc., for simplicity. F MORE DETAILS ON THE SEARCH STRATEGY In this section, we provide more details and examples for our search strategy Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT) presented in Section 3. F.1 BAYESIAN OPTIMIZATION Bayesian Optimization (BO) is a powerful family of search techniques for finding the global optimum of a black-box objective problem. It is particularly useful when the objective is expensive to evaluate and thus sample efficiency is highly important (Brochu et al., 2010). To minimize a black-box objective problem with BO, we first need to build a probabilistic surrogate to model the objective based on the data observed so far. Based on the surrogate model, we design an acquisition function to evaluate the utility of potential candidate points by trading off exploitation (where the posterior mean of the surrogate model is low) and exploration (where the posterior variance of the surrogate model is high). The next candidate point to evaluate is then selected by maximizing the acquisition function (Shahriari et al., 2015). The general procedure of BO is summarized in Algorithm 1. Algorithm 1 (Bayesian Optimization algorithm; Brochu et al., 2010): Input: initial observed data D_t, a black-box objective function f, total number of BO iterations T. Output: the best recommendation about the global optimizer x*. For t = 1, ..., T: select the next x_{t+1} by maximizing the acquisition function α(x | D_t); evaluate the objective function f_{t+1} = f(x_{t+1}); set D_{t+1} ← D_t ∪ {(x_{t+1}, f_{t+1})}; update the surrogate model with D_{t+1}. We adopted the widely used acquisition function, expected improvement (EI) (Mockus et al., 1978), in our BO strategy. EI evaluates the expected amount of improvement of a candidate point x over the minimal value f′ observed so far. Specifically, denoting the improvement function as I(x) = max(0, f′ − f(x)), the EI acquisition function has the form α_EI(x | D_t) = E[I(x) | D_t] = ∫_{−∞}^{f′} (f′ − f) N(f; µ(x | D_t), σ²(x | D_t)) df = (f′ − µ(x | D_t)) Φ(f′; µ(x | D_t), σ²(x | D_t)) + σ²(x | D_t) ϕ(f′; µ(x | D_t), σ²(x | D_t)) , where µ(x | D_t) and σ²(x | D_t) are the mean and variance of the predictive posterior distribution at a candidate point x, and ϕ(·; µ, σ²) and Φ(·; µ, σ²) denote the PDF and CDF of the normal distribution with that mean and variance, respectively.
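As a small numerical illustration of the closed form above, here is a sketch (our own, using SciPy; not the authors' code) that computes the EI value from a GP posterior mean and variance at one candidate point; the numbers are made up.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma2, f_best):
    """Closed-form EI for minimization: E[max(0, f_best - f(x))] under N(mu, sigma2)."""
    sigma = np.sqrt(sigma2)
    if sigma == 0.0:
        return max(0.0, f_best - mu)
    z = (f_best - mu) / sigma
    # (f_best - mu) * CDF + sigma * PDF of the standard normal; equivalent to the parameterized
    # form (f_best - mu) * Phi(f_best; mu, sigma2) + sigma2 * phi(f_best; mu, sigma2) above.
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: posterior mean 0.30 and variance 0.01 at a candidate; best validation error so far 0.28.
print(expected_improvement(mu=0.30, sigma2=0.01, f_best=0.28))  # small but positive EI
```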
To make use of ample distributed computing resources, we adopted Kriging Believer (Ginsbourger et al., 2010), which uses the predictive posterior of the surrogate model to assign hallucinated function values {f̃_p}_{p ∈ {1, ..., P}} to the P candidate points with pending evaluations {x̃_p}_{p ∈ {1, ..., P}} and performs the next BO recommendation in the batch by pseudo-augmenting the observation data with D̃_p = {(x̃_p, f̃_p)}_{p ∈ {1, ..., P}}, namely D̃_t = D_t ∪ D̃_p. The algorithm of Kriging Believer at one BO iteration to select a batch of recommended candidate points is summarized in Algorithm 2. (Footnote 1: Theoretically, this is not possible with CFGs. However, we can extend the notion of substitution by substituting a string representation of a Python (float) variable for the placeholder variables P_t, P_m, and P_b.) Algorithm 2 (Kriging Believer algorithm to select one batch of points): Input: observation data D_t, batch size b. Output: the batch points B_{t+1} = {x^(1)_{t+1}, ..., x^(b)_{t+1}}. Set D̃_t = D_t ∪ D̃_p. For j = 1, ..., b: select the next x^(j)_{t+1} by maximizing the acquisition function α(x | D̃_t); compute the predictive posterior mean µ(x^(j)_{t+1} | D̃_t); update D̃_t ← D̃_t ∪ {(x^(j)_{t+1}, µ(x^(j)_{t+1} | D̃_t))}. Algorithm 3 (Weisfeiler-Lehman subtree kernel computation (Shervashidze et al., 2011)): Input: graphs G1, G2, maximum iterations H. Output: kernel function value between the graphs. Initialize the feature vectors ϕ(G1) = ϕ_0(G1), ϕ(G2) = ϕ_0(G2) with the respective counts of original node labels (i.e., the h = 0 WL features). For h = 1, ..., H: assign a multiset M_h(v) = {l_{h−1}(u) | u ∈ N(v)} to each node v ∈ G, where l_{h−1} is the node label function of the (h−1)-th WL iteration and N is the node neighbor function; sort the elements in the multiset M_h(v) and concatenate them to a string s_h(v); compress each string s_h(v) using the hash function f such that f(s_h(v)) = f(s_h(w)) ⟺ s_h(v) = s_h(w); add l_{h−1}(v) as a prefix to s_h(v); concatenate the WL features ϕ_h(G1), ϕ_h(G2) with the respective counts of the new labels: ϕ(G1) = [ϕ(G1), ϕ_h(G1)], ϕ(G2) = [ϕ(G2), ϕ_h(G2)]; set l_h(v) := f(s_h(v)) for all v ∈ G. Finally, compute the inner product k = ⟨ϕ(G1), ϕ(G2)⟩ between the WL features ϕ(G1), ϕ(G2) in the RKHS H. F.2 HIERARCHICAL WEISFEILER-LEHMAN KERNEL Inspired by Ru et al. (2021), we adopted the Weisfeiler-Lehman (WL) graph kernel (Shervashidze et al., 2011) in the GP surrogate model to handle the graph nature of neural architectures. The basic idea of the WL kernel is to first compare node labels, and then iteratively aggregate labels of neighboring nodes, compress them into a new label, and compare them. Algorithm 3 summarizes the WL kernel procedure. Ru et al. (2021) identified three reasons for using the WL kernel: (1) it is able to compare labeled and directed graphs of different sizes, (2) it is expressive, and (3) it is relatively efficient and scalable. Our search space design can afford a diverse spectrum of neural architectures with very heterogeneous topological structure. Therefore, reason (1) is a very important property of the WL kernel to account for the diversity of neural architectures. Moreover, if we allow many hierarchical levels, we can construct very large neural architectures. Therefore, reasons (2) and (3) are essential for accurate and fast modeling. However, neural architectures in our search spaces may be significantly larger, which makes it difficult for a single WL kernel to capture the more global topological patterns. Moreover, modeling solely based on the final neural architecture ignores the useful macro-level information from earlier hierarchical levels.
In our experiments (Section 5 and I), we have found stronger neural architectures by incorporating the hierarchical information in the kernel design, which provides experimental support for above arguments. However, modeling solely based on the (standard) WL graph kernel neglects the useful hierarchical information from our assembly process. Moreover, the large size of neural architectures make it still challenging to capture the more global topological patterns. We therefore propose to use hierarchical information through a hierarchy of WL graph kernels that take into account the different granularities of the architectures and combine them in a weighted sum. To obtain the different granularities, we use the fold operators Fl that removes algebraic terms beyond the l-th hierarchical level. Thereby, Residual Residual fc we obtain the folds F3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc), (18) F2(ω) = Linear(Residual, Residual, fc) , F1(ω) = Linear , for the algebraic architecture term ω. Note that we ignore the first fold since it does not represent a labeled DAG. Figure 7 visualizes the labeled graphs Φ(F2) and Φ(F3) of the folds F2 or F3, respectively. These graphs can be fed into (standard) WL graph kernels. Therefore, we can construct a hierarchy of WL graph kernels kWL as follows: khWL(ωi, ωj) = L∑ l=2 λl · kWL(Φ(Fl(ωi)),Φ(Fl(ωj))) , (19) where ωi and ωj are two algebraic architecture terms. Note that λl govern the importance of the learned graph information across the hierarchical levels and can be optimized through the marginal likelihood. F.3 EXAMPLES FOR THE EVOLUTIONARY OPERATIONS For the evolutionary operations, we adopted ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). In the following, we will show how these evolutionary operations manipulate algebraic terms, e.g., Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) , (20) from the search space S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc , (21) to generate evolved algebraic terms. Figure 1 shows how we can derive the algebraic term in Equation 20 from the search space in Equation 21. For mutation operations, we first randomly pick a subterm of the algebraic term, e.g., Residual(conv, id, conv). Then, we randomly sample a new subterm with the same nonterminal symbol S as start symbol, e.g., Linear(conv, id, fc), and replace the previous subterm, yielding Linear(Linear(conv, id, fc), Residual(conv, id, conv), fc) . (22) For (self-)crossover operations, we swap two subterms, e.g., Residual(conv, id, conv) and Residual(conv, id, conv) with the same nonterminal S as start symbol, yielding Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) . (23) Note that unlike the commonly used crossover operation, which uses two parents, self-crossover has only one parent. In future work, we could also add a self-copy operation that copies a subterm to another part of the algebraic term, explicitly regularizing diversity and thus potentially speeding up the search. G RELATED WORK BEYOND NEURAL ARCHITECTURE SEARCH While our work focuses exclusively on NAS, we will discuss below how it relates to the areas of optimizer search (as well as from scratch automated machine learning) and neural-symbolic programming. Optimizer search is a closely related field to NAS, where we automatically search for an optimizer (i.e., an update function for the weights) instead of an architecture. Initial works used learnable parametric or non-parametric optimizers. 
While the former approaches (Andrychowicz et al., 2016; Li & Malik, 2017; Chen et al., 2017; 2022a) have poor scalability and generality, the latter works overcome those limitations. Bello et al. (2017) searched for an instantiation of hand-crafted patterns via reinforcement learning, while Wang et al. (2022) proposed a tree-structured search space2 and searched for optimizers via a modified Monte Carlo sampling approach. AutoML-Zero (Real et al., 2020) took an even more general approach by searching over entire machine learning algorithms, including optimizers, from a generic search space built from basic mathematical operations with an evolutionary algorithm. Chen et al. (2022b) used RE to discover optimizers from a generic search space (inspired by AutoML-Zero) for training vision transformers (Dosovitskiy et al., 2021). Complementary to the above, there is recent interest in automatically synthesizing programs from domain-specific languages. Gaunt et al. (2017) proposed a hand-crafted program template and simultaneously optimized the parameters of the differentiable program with gradient descent. The HOUDINI framework (Valkov et al., 2018) proposed type-directed (top-down) enumeration and evolution approaches over differentiable functional programs. Shah et al. (2020) hierarchically assembled differentiable programs and used neural networks for the approximation of missing expression in partial programs. Cui & Zhu (2021) treated CFGs stochastically with trainable production rule sampling weights, which were optimized with a gradient-based approach (Liu et al., 2019b). However, naı̈vely applying gradient-based approaches does not work in our search spaces due to the exponential explosion of supernet weights, but still renders an interesting direction for future work. Compared to these lines of work, we extended CFGs to handle changes in spatial resolution, promote regularity, and (compared to most of them) incorporate constraints, the latter two of which could also be applied in those domains. We also proposed a BO search strategy to search efficiently with a tailored kernel design to handle the hierarchical nature of the search space (i.e., the architectures). H IMPLEMENTATION DETAILS OF THE SEARCH STRATEGIES BANAT & BANAT (WL) The only difference between BANAT and BANAT (WL) is that the former uses our proposed hierarchy of WL kernels (hWL), whereas the latter only uses a single WL kernel (WL) for the entire architecture (c.f., (Ru et al., 2021)). We ran BANAT asynchronously in parallel throughout our experiments with a batch size of B = 1, i.e., at each BO iteration a single architecture is proposed for evaluation. For the acquisition function optimization, we used a pool size of P = 200, where the initial population consisted of the current ten best-performing architectures and the remainder were randomly sampled architectures to encourage exploration in the huge search spaces. During evolution, the mutation probability was set to pmut = 0.5 and crossover probability was set to pcross = 0.5. From the crossovers, half of them were self-crossovers of one parent and the other half were common crossovers between two parents. The tournament selection probability was set to ptour = 0.2. We evolved the population at least for ten iterations and a maximum of 50 iterations using a early stopping criterion based on the fitness value improvements over the last five iterations. 
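For intuition on the mutation and (self-)crossover operations used during acquisition function optimization (see also Appendix F.3), the following is a minimal sketch over nested-list terms for the single-nonterminal grammar of Equation 5; the term representation and helper names are our own simplifications, not the released implementation.

```python
import random

TOPOLOGICAL = {"Linear": 3, "Residual": 3}   # operator name -> arity
PRIMITIVES = ["conv", "id", "fc"]

def random_term(depth=0, max_depth=2):
    """Sample a term for the single-nonterminal grammar S (Equation 5), as a nested list."""
    if depth >= max_depth or random.random() < 0.3:
        return random.choice(PRIMITIVES)
    op = random.choice(list(TOPOLOGICAL))
    return [op] + [random_term(depth + 1, max_depth) for _ in range(TOPOLOGICAL[op])]

def subterm_positions(term, prefix=()):
    """All positions (index paths) of subterms, including the root."""
    yield prefix
    if isinstance(term, list):
        for i, child in enumerate(term[1:], start=1):
            yield from subterm_positions(child, prefix + (i,))

def get_at(term, position):
    for i in position:
        term = term[i]
    return term

def replace_at(term, position, new_subterm):
    if not position:
        return new_subterm
    head, *rest = position
    copy = list(term)
    copy[head] = replace_at(term[head], tuple(rest), new_subterm)
    return copy

def mutate(term):
    """Replace a randomly chosen subterm by a freshly sampled term (same nonterminal S)."""
    pos = random.choice(list(subterm_positions(term)))
    return replace_at(term, pos, random_term())

def self_crossover(term):
    """Swap two randomly chosen, non-overlapping subterms within a single term (simplified)."""
    p1, p2 = random.sample(list(subterm_positions(term)), 2)
    if p1[:len(p2)] == p2 or p2[:len(p1)] == p1:   # skip overlapping picks for simplicity
        return term
    s1, s2 = get_at(term, p1), get_at(term, p2)
    return replace_at(replace_at(term, p1, s2), p2, s1)

if __name__ == "__main__":
    random.seed(1)
    parent = ["Linear", ["Residual", "conv", "id", "conv"],
              ["Residual", "conv", "id", "conv"], "fc"]
    print(mutate(parent))
    print(self_crossover(parent))
```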
Regularized Evolution (RE) RE (Real et al., 2019; Liu et al., 2018) iteratively mutates the best architectures out of a sample of the population. We reduced the population size from 50 to 30 to account for fewer evaluations, and used a sample size of 10. We also ran RE asynchronously for better comparability. I SEARCHING THE HIERARCHICAL NAS-BENCH-201 SEARCH SPACE In this section, we provide training details (Section I.1) and provide complementary results as well as conduct extensive analyses (Section I.2). 2Note that the tree-structured search space can equivalently be described with a CFG (with a constraint on the number of maximum depth of the syntax trees). I.1 TRAINING DETAILS Training protocol We evaluated all search strategies on CIFAR-10/100 (Krizhevsky et al., 2009), ImageNet-16-120 (Chrabaszcz et al., 2017), CIFARTile, and AddNIST (Geada et al., 2021). Note that CIFARTile and AddNIST are novel datasets and therefore have not yet been optimized by the research community. We provide further dataset details below. For training of architectures on CIFAR-10/100 and ImageNet-16-120, we followed Dong & Yang (2020). We trained architectures with SGD with learning rate of 0.1, Nesterov momentum of 0.9, weight decay of 0.0005 with cosine annealing (Loshchilov & Hutter, 2019), and batch size of 256 for 200 epochs. The initial channels were set to 16. For both CIFAR-10 and CIFAR-100, we used random flip with probability 0.5 followed by a random crop (32x32 with 4 pixel padding) and normalization. For ImageNet-16120, we used a 16x16 random crop with 2 pixel padding instead. For training of architectures on AddNIST and CIFARTile, we followed the training protocol from the CVPR-NAS 2021 competition (Geada et al., 2021): We trained architectures with SGD with learning rate of 0.01, momentum of 0.9, and weight decay of 0.0003 with cosine annealing, and batch size of 64 for 64 epochs. We set the initial channels to 16 and did not apply any further data augmentation. Dataset details In Table 4, we provide the licenses for the datasets used in our experiments. For training of architectures on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-16-120 (Chrabaszcz et al., 2017), we followed the dataset splits and training protocol of NAS-Bench-201 (Dong & Yang, 2020). For CIFAR-10, we split the original training set into a new training set with 25k images and validation set with 25k images following Dong & Yang (2020). The test set remained unchanged. For evaluation, we trained architectures on both the training and validation set. For CIFAR-100, the training set remained unchanged, but the test set was partitioned in a validation set and new test set with each 5K images. For ImageNet-16-120, all splits remained unchanged. For AddNIST and CIFARTile, we used the training, validation, and test splits as defined in the CVPR-NAS 2021 competition (Geada et al., 2021). I.2 EXTENDED SEARCH RESULTS AND ANALYSES Supplementary to Figure 2, Figure 8 compares the cell-based vs. hierarchical NAS-Bench-201 search space from Section 6.1 using RS, RE, and BANAT (WL). The cell-based search space design shows on par or stronger performance on all datasets except for CIFARTile for the three search strategies. In contrast, for our proposed search strategy BANAT we find on par (CIFAR-10/100) or superior (ImageNet-16-120, CIFARTile, and AddNIST) performance using the hierarchical search space design. 
This clearly shows that an increase of the search space does not necessarily yield the discovery of stronger neural architectures. Further, it exemplifies the importance of a strong search strategy to search effectively and efficiently in huge hierarchical search spaces (Q2), and provides further evidence that the incorporation of hierarchical information is a key contributor to search efficiency (Q3). Based on this, we believe that future work using, e.g., graph neural networks as a surrogate, may benefit from the incorporation of hierarchical information. We report the test errors of our best found architectures in Table 5. We observe that our search strategy BANAT finds the strongest performing architectures across all datasets (Q2, Q3). Also note that we achieve better (validation and) test performance on ImageNet-16-120 on the hierarchical search space than the state-of-the-art search strategy on the cell-based NAS-Bench-201 search space (i.e., +0.37%p compared to Shapley-NAS (Xiao et al., 2022)) (Q1). [Figure 8: validation error [%] over 100 evaluations on CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST, comparing the hierarchical and cell-based search space for (a) Random Search (RS), (b) Regularized Evolution (RE), and (c) BANAT (WL).] [Table 5: Test errors (and ±1 standard error) of popular baseline architectures (e.g., ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019) variants) and our best found architectures on the cell-based and hierarchical NAS-Bench-201 search space. Note that we picked the ResNet and EfficientNet variant based on the test error, consequently giving an overestimate of their test performance. † optimal numbers as reported in Dong & Yang (2020). Reported is the (best) test error (and ±1 standard error) across the three seeds {777, 888, 999} of the best architecture of the three search runs with lowest validation error. Columns give cell-based and hierarchical results for CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST; rows include Best ResNet, Best EfficientNet, NAS-Bench-201 oracle†, RS, NASWOT with N ∈ {10, 100, 1000, 10000} (Mellor et al., 2021), RE (Real et al., 2019; Liu et al., 2018), and BANAT (WL) (Ru et al., 2021).]
1. What is the main contribution of the paper regarding neural architecture search? 2. What are the strengths and weaknesses of the proposed approach using context free grammars (CFGs)? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are the limitations of the paper, particularly in representing complex hierarchical search spaces? 5. How does the proposed method compare with traditional ways of representing search spaces, such as using PyTorch code? 6. Are there any qualitative gains from adopting the grammar-based approach over traditional methods? 7. What are the quantitative gains of using CFGs compared to custom PyTorch code? 8. Can the proposed approach efficiently represent constraints like those found in DARTS search space? 9. Is it possible to obtain supernets from a search space description using CFGs? 10. How does the effort and process of designing a search space using CFGs differ from hierarchical NAS? 11. Why did the authors choose to limit their evaluation to simple search spaces and baselines? 12. How does the reviewer suggest improving the paper, such as including more advanced NAS methods and expanding the scope of the evaluation?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes to use context free grammars (CFGs) to represent hierarchical search spaces for neural architecture search, towards the goal of discovering new architectures, rather than refining existing ones (an example of designing Transformers over CNNs is given in the introduction). Architectures are represented using algebraic terms, and search are then defined by the means of related sets of production rules, terminals and nonterminals, constituting a CFG. Further, a Bayesian optimization (BO) algorithm, utilizing hierarchical Weisfeiler-Lehman kernel (hWL), is proposed to efficiently search within large search spaces, produced with CFGs. Experiments are conducted on the NAS-Bench-201 (NB201) search space and its hierarchical variant, using CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile and AddNIST datasets. Strengths And Weaknesses Strengths: representing neural networks and related search spaces through formal grammars is an interesting research direction with a potential to deliver a convenient tool to work with complex hierarchical search spaces the paper does a good job at explaining the idea and how different search spaces can be represented using this new approach (although this is presented mainly in the Appendix) limitations of the work seem adequately mentioned (although I would still have some questions, see below) Weaknesses: the paper does not clearly present what the benefits of adopting the proposed approach exactly are -- specifically, there are two main questions that are not sufficiently answered: 1) are there any qualitative gains from adopting grammar-based approach over traditional ways of representing search spaces (e.g., just writing PyTorch code specific to what we want to search)? By qualitative gains I mean that something becomes possible, which otherwise would not be; 2) what are the quantitative gains? That is, is it faster/easier/more flexible to use these grammars compared to using custom PyTorch code? Some important related questions: in Appendix E we can see how DARTS search space can be achieved - however, a common problem with DARTS is that even though when a supernet is created each consecutive node takes all previous outputs as its input (as in Figure 6), when deriving a final architecture we only keep two inputs, meaning that some of these connections have to be removed. Is it possible to efficiently represent this kind of constraints using CFGs? Is it possible to easily obtain supernets from a search space description (if it makes sense for the search space)? It seems that every time a searchable graph-based structure is included in a search space (NB201 cell, DARTS cell, etc.), it is represented a custom, purpose-built terminal - does it mean that CFGs are, broadly speaking, not the best choice when searching for arbitrary connectivity between operations? Or could these structures be expressed with additional production rules and simpler terminals? Related to the above, but focused more on the high-level direction of this research: considering that we have to define all terminals, nonterminals and production rules (which I imagine would involve providing implementation for each element), is the overall effort and process of designing a search space fundamentally different when using CFGs compared to just doing hierarchical NAS? 
Specifically, I do not see how using CFGs brings us closer to the goal of automatically discovering novel architectures (like Transfomers) - in the end it seems we are still primarily limited by the primitives (operations, architectural patterns, etc.) that we decide to include in our search space, just like it is the case for the most (all?) of the existing methods. evaluation of the proposed method is rather limited - I felt a bit disappointed by its scope and the choice of baselines, specifically: only one, very simple search space is considered; although the fact that NB201 search space is extended does help, still the evaluation does not really match what can be usually found in NAS papers selected baselines are all very simple methods - there are many very efficient NAS methods out there, e.g., the entire research field of zero-cost NAS ("Zero-Cost Proxies for Lightweight NAS", "Neural Architecture Search without Training", "Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective", to name just a few papers) seems like a good fit to "search efficiently in the huge search spaces spanned by our algebraic architecture terms", but the authors only include random, evolutionary and BO search, which are one of the most basic approaches similarly, applicability of some efficient one-shot NAS methods, which constitute an important subfield in NAS, is not explored at all (related to one of my questions above) Minor shortcoming and suggestions: consider changing "Linear" to, for example, "Sequential" - the word "linear" is commonly associated with linear operations, not with a chain of sequential operations "However, there is of course no single search space that can construct any neural architecture" I assume the authors meant that it is impossible to define a single search space using CFGs, that would include any neural architecture? I would argue that an implicitly defined search space obtained by considering basic graph operations (such as add a node, add an edge, assign operation, etc.) together with a starting graph (e.g., empty graph) would include any neural network realizable in practice. How big such a search space would be, or if it would be easy to use some searching algorithms within it is a different question - although please note that there exist NAS papers that defined search spaces in such way. Equations 4 and 8 are somewhat redundant, Equation 8 does not seem to add much on top of Equation 4, just minor details - in general, in my opinion, Sections 2 and 3 could be combined and made shorter, and the saved space could be used to include some of the things I mentioned are currently missing from the paper. Clarity, Quality, Novelty And Reproducibility The paper is clearly written but it misses some important discussion regarding comparing CFGs with conventional way of defining search spaces. The idea of using formal grammars is novel, so is the introduction of hierarchical Weisfeiler-Lehman kernel (although the latter is a minor modification). The authors provide code which helps with reproducibility.
ICLR
Title Towards Discovering Neural Architectures from Scratch

Abstract The discovery of neural architectures from scratch is the long-standing goal of Neural Architecture Search (NAS). Searching over a wide spectrum of neural architectures can facilitate the discovery of previously unconsidered but well-performing architectures. In this work, we take a large step towards discovering neural architectures from scratch by expressing architectures algebraically. This algebraic view leads to a more general method for designing search spaces, which allows us to compactly represent search spaces that are 100s of orders of magnitude larger than common spaces from the literature. Further, we propose a Bayesian Optimization strategy to efficiently search over such huge spaces, and demonstrate empirically that both our search space design and our search strategy can be superior to existing baselines. We open source our algebraic NAS approach and provide APIs for PyTorch and TensorFlow.

1 INTRODUCTION

Neural Architecture Search (NAS), a field with over 1 000 papers in the last two years (Deng & Lindauer, 2022), is widely touted to automatically discover novel, well-performing architectural patterns. However, while state-of-the-art performance has already been demonstrated in hundreds of NAS papers (prominently, e.g., (Tan & Le, 2019; 2021; Liu et al., 2019a)), success in automatically finding truly novel architectural patterns has been very scarce (Ramachandran et al., 2017; Liu et al., 2020). For example, novel architectures, such as transformers (Vaswani et al., 2017; Dosovitskiy et al., 2021), have been crafted manually and were not found by NAS. There is an accumulating amount of evidence that over-engineered, restrictive search spaces (e.g., cell-based ones) are major impediments for NAS to discover truly novel architectures. Yang et al. (2020b) showed that in the DARTS search space (Liu et al., 2019b) the manually-defined macro architecture is more important than the searched cells, while Xie et al. (2019) and Ru et al. (2020) achieved competitive performance with randomly wired neural architectures that do not adhere to common search space limitations. As a result, there are increasing efforts to break these impediments, and the discovery of novel neural architectures has been referred to as the holy grail of NAS.

Hierarchical search spaces are a promising step towards this holy grail. In an initial work, Liu et al. (2018) proposed a hierarchical cell, which is shared across a fixed macro architecture, imitating the compositional neural architecture design pattern widely used by human experts. However, subsequent works showed the importance of both layer diversity (Tan & Le, 2019) and macro architecture (Xie et al., 2019; Ru et al., 2020). In this work, we introduce a general formalism for the representation of hierarchical search spaces, allowing both for layer diversity and a flexible macro architecture. The key observation is that any neural architecture can be represented algebraically; e.g., two residual blocks followed by a fully-connected layer in a linear macro topology can be represented as the algebraic term

ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) .   (1)

We build upon this observation and employ Context-Free Grammars (CFGs) to construct large spaces of such algebraic architecture terms.
Although a particular search space is of course limited in its overall expressiveness, with this approach we could effectively represent any neural architecture, facilitating the discovery of truly novel ones. Due to the hierarchical structure of algebraic terms, the number of candidate neural architectures scales exponentially with the number of hierarchical levels, leading to search spaces 100s of orders of magnitude larger than commonly used ones. To search in these huge spaces, we propose an efficient search strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), which leverages hierarchical information, capturing the topological patterns across the hierarchical levels, in its tailored kernel design. Our contributions are as follows:
• We present a novel technique to construct hierarchical NAS spaces based on an algebraic notion that views neural architectures as algebraic architecture terms and uses CFGs to create algebraic search spaces (Section 2).
• We propose BANAT, a Bayesian Optimization (BO) strategy that uses a tailored modeling strategy to efficiently and effectively search over our huge search spaces (Section 3).
• After surveying related work (Section 4), we empirically show that search spaces of algebraic architecture terms perform on par or better than common cell-based spaces on different datasets, show the superiority of BANAT over common baselines, demonstrate the importance of incorporating hierarchical information in the modeling, and show that we can find novel architectural parts from basic mathematical operations (Section 5).
We open source our code and provide APIs for PyTorch (Paszke et al., 2019) and TensorFlow (Abadi et al., 2015) at https://anonymous.4open.science/r/iclr23_tdnafs.

2 ALGEBRAIC NEURAL ARCHITECTURE SEARCH SPACE CONSTRUCTION

In this section we present an algebraic view on Neural Architecture Search (NAS) (Section 2.1) and propose a construction mechanism based on Context-Free Grammars (CFGs) (Sections 2.2 and 2.3).

2.1 ALGEBRAIC ARCHITECTURE TERMS FOR NEURAL ARCHITECTURE SEARCH

We introduce algebraic architecture terms as a string representation for neural architectures from a (term) algebra. Formally, an algebra (A, F) consists of a non-empty set A (universe) and a set of operators f : A^n → A ∈ F of different arities n ≥ 0 (Birkhoff, 1935). In our case, A corresponds to the set of all (sub-)architectures and we distinguish between two types of operators: (i) nullary operators representing primitive computations (e.g., conv() or fc()) and (ii) k-ary operators with k > 0 representing topological operators (e.g., Linear(·, ·, ·) or Residual(·, ·, ·)). For the sake of notational simplicity, we omit parentheses for nullary operators (i.e., we write conv). Term algebras (Baader & Nipkow, 1999) are a special type of algebra mapping an algebraic expression to its string representation. E.g., we can represent a neural architecture as the algebraic architecture term ω as shown in Equation 1. Term algebras also allow for variables x_i that are set to terms themselves and that can be re-used across a term. In our case, the intermediate variables x_i can therefore share patterns across the architecture, e.g., a shared cell. For example, we could define the intermediate variable x1 to map to the residual block in ω from Equation 1 as follows:

ω′ = Linear(x1, x1, fc), x1 = Residual(conv, id, conv) .   (2)
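To make this representation concrete, here is a minimal illustrative sketch (not the released PyTorch/TensorFlow API) that encodes algebraic architecture terms as nested Python tuples, renders them as strings, and substitutes a shared intermediate variable:

```python
# Minimal sketch (not the official API): algebraic architecture terms as nested tuples.
# A term is either a primitive computation (a string such as "conv") or a pair
# (topological_operator_name, list_of_subterms).

def render(term):
    """Render a term as its algebraic string form, e.g. Linear(conv, id, fc)."""
    if isinstance(term, str):          # nullary operator / primitive computation
        return term
    op, children = term
    return f"{op}({', '.join(render(c) for c in children)})"

def substitute(term, variables):
    """Replace intermediate variables (e.g. 'x1') by their terms to share patterns."""
    if isinstance(term, str):
        return variables.get(term, term)
    op, children = term
    return (op, [substitute(c, variables) for c in children])

residual = ("Residual", ["conv", "id", "conv"])
omega = ("Linear", [residual, residual, "fc"])          # Equation (1)
omega_prime = ("Linear", ["x1", "x1", "fc"])            # Equation (2), with x1 shared

print(render(omega))
print(render(substitute(omega_prime, {"x1": residual})))  # identical string to omega
```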
Algebraic NAS   We formulate our algebraic view on NAS, where we search over algebraic architecture terms ω ∈ Ω representing their associated architectures Φ(ω), as follows:

argmin_{ω∈Ω} f(Φ(ω)) ,   (3)

where f(·) is an error measure that we seek to minimize, e.g., the final validation error of a fixed training protocol. For example, we can represent the popular cell-based NAS-Bench-201 search space (Dong & Yang, 2020) as an algebraic search space Ω. The algebraic search space Ω is characterized by a fixed macro architecture Macro(...) that stacks 15 instances of a shared cell Cell(p_i, p_i, p_i, p_i, p_i, p_i), where the cell has six edges, on each of which one of five primitive computations can be placed (i.e., p_i for i ∈ {1, 2, 3, 4, 5} corresponding to zero, id, conv1x1, conv3x3, or avg_pool, respectively). By leveraging the intermediate variable x1 we can effectively share the cell topology across the architecture. For example, we can express an architecture ω_i ∈ Ω from the NAS-Bench-201 search space Ω as:

ω_i = Macro(x1, x1, ..., x1)  [15 times],  x1 = Cell(p1, p2, p1, p5, p4, p3) .   (4)

Algebraic NAS over such algebraic architecture terms then amounts to finding the best-performing primitive computation p_i for each edge, as the macro architecture is fixed. In contrast to this simple cell-based algebraic space, the search spaces we consider can be much more expressive and, e.g., allow for layer diversity and a flexible macro architecture over several hierarchical levels (Section 5.1).

2.2 CONSTRUCTING NEURAL ARCHITECTURE TERMS WITH CONTEXT-FREE GRAMMARS

We propose to use Context-Free Grammars (CFGs) (Chomsky, 1956) since they can naturally generate (hierarchical) algebraic architecture terms. Compared to other search space designs, CFGs give us a formally grounded way to naturally and compactly define very expressive hierarchical search spaces (e.g., see Section 5.1). We can also unify popular search spaces from the literature with our general search space design in one framework (Appendix E). They further give us a simple mechanism to evolve architectures while staying within the defined search space (Section 3). Formally, a CFG G = ⟨N, Σ, P, S⟩ consists of a finite set of nonterminals N and terminals Σ with N ∩ Σ = ∅, a finite set of production rules P = {A → β | A ∈ N, β ∈ (N ∪ Σ)*}, where the asterisk * denotes the Kleene star operation (Kleene et al., 1956), and a start symbol S ∈ N. To generate an algebraic architecture term, starting from the start symbol S, we recursively replace nonterminals of the current algebraic term with a right-hand side of a production rule consisting of nonterminals and terminals, until the resulting string does not contain any nonterminals. For example, consider the following CFG in extended Backus-Naur form (Backus, 1959) (see Appendix B for background):

S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc   (5)

From this CFG, we can derive the algebraic architecture term ω (with three hierarchical levels) from Equation 1 as follows:

S → Linear(S, S, S)   (Level 1)
  → Linear(Residual(S, S, S), Residual(S, S, S), fc)   (Level 2)   (6)
  → Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc)   (Level 3)

Figure 1 makes the above derivation and the connection to the associated architecture explicit. The set of all (potentially infinite) algebraic terms generated by a CFG G is the language L(G), which naturally forms our search space Ω. Thus, the algebraic NAS problem from Equation 3 becomes:

argmin_{ω∈L(G)} f(Φ(ω)) .   (7)
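A small, self-contained sketch of this derivation process (illustrative only; the dictionary encoding of the grammar and the depth cap are assumptions, not the released implementation) samples terms from the example grammar in Equation 5:

```python
import random

# Sketch of the example CFG from Equation (5): each production is (operator, argument slots),
# where an empty slot list marks a terminal / primitive computation.
GRAMMAR = {
    "S": [
        ("Linear", ["S", "S", "S"]),     # topological operator with three slots
        ("Residual", ["S", "S", "S"]),
        ("conv", []),                    # primitive computations (nullary operators)
        ("id", []),
        ("fc", []),
    ]
}

def sample_term(nonterminal="S", depth=0, max_depth=3):
    """Recursively replace nonterminals until only terminals remain (with a depth cap)."""
    rules = GRAMMAR[nonterminal]
    if depth >= max_depth:               # force primitives near the depth limit
        rules = [r for r in rules if not r[1]]
    op, slots = random.choice(rules)
    if not slots:
        return op
    children = [sample_term(s, depth + 1, max_depth) for s in slots]
    return f"{op}({', '.join(children)})"

random.seed(0)
print(sample_term())   # prints one randomly derived algebraic architecture term
```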
2.3 EXTENSIONS TO THE CONSTRUCTION MECHANISM

Constraints   In many search space designs, we want to adhere to some constraints, e.g., to limit the number of nodes or to ensure that for all architectures in the search space there exists at least one path from the input to the output. We can simply do so by allowing only the application of production rules which guarantee compliance with such constraints. For example, to ensure that there is at least one path from the input to the output, it is sufficient to ensure that each derivation connects its input to the output due to the recursive nature of CFGs. Note that this makes CFGs context-sensitive w.r.t. those constraints. For more details, please refer to Appendix D.

Fostering regularity through substitution   To implement intermediate variables x_i (Section 2.1) we leverage that context-free languages are closed under substitution: we map terminals, representing the intermediate variables x_i, from one language to algebraic terms of other languages, e.g., a shared cell. For example, we can split a CFG G, constructing entire algebraic architecture terms, into the CFGs G_macro and G_cell for the macro- or cell-level, respectively. Further, we add a single (or multiple) intermediate terminal(s) x1 to G_macro which maps to an algebraic term ω1 ∈ L(G_cell), e.g., the searchable cell. Thus, we effectively search over the macro-level as well as a single, shared cell. Note that by using a fixed macro architecture (i.e., |L(G_macro)| = 1), we can represent cell-based search spaces, e.g., NAS-Bench-201 (Dong & Yang, 2020), while also being able to represent more expressive search spaces (e.g., see Section 5.1). More generally, we could extend this by adding further intermediate terminals which map to other languages L(G_j), or by adding intermediate terminals to G_2 which map to languages L(G_{j≠1}). In this way, we can effectively foster regularity.

Representing common architecture patterns for object recognition   Neural architectures for object recognition commonly build a hierarchy of features that are gradually downsampled, e.g., by pooling operations. However, previous works in NAS were either limited to a fixed macro architecture (Zoph et al., 2018), only allowed for linear macro architectures (Liu et al., 2019a), or required post-sampling testing for resolution mismatches (Stanley & Miikkulainen, 2002; Ru et al., 2020). While this produced impressive performance on popular benchmarks (Tan & Le, 2019; 2021; Liu et al., 2019a), it is an open research question whether a different type of macro architecture (e.g., one with multiple branches) could yield even better performance. To accommodate flexible macro architectures, we propose to overload the nonterminals. In particular, the nonterminals indicate how often we apply downsampling operations in the subsequent derivations of the nonterminal. Consider the production rule D2 → Residual(D1, D2, D1), where Di with i ∈ {1, 2} are nonterminals which indicate that i downsampling operations have to be applied in their subsequent derivations. That is, in both paths of the residual the input features will be downsampled twice and, consequently, the merging paths will have the same spatial resolution. Thereby, this mechanism distributes the downsampling operations recursively across the architecture. For the channels, we adopted the common design to double the number of channels whenever we halve the spatial resolution in our experiments. Note that we could also handle a varying number of channels by using, e.g., depthwise concatenation as merge operation.
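Referring back to the substitution mechanism above, a minimal illustrative sketch (toy macro and cell "languages", not the paper's released search spaces) of how a single sampled cell is shared across a macro term via the intermediate terminal x1:

```python
import random

# Illustrative only: a tiny macro "language" containing the intermediate terminal x1 and
# a cell-level grammar; the shared cell is sampled ONCE and substituted for every x1.
MACRO_TERMS = [
    "Linear(x1, x1, fc)",
    "Linear(Residual(x1, id, x1), x1, fc)",
]
PRIMITIVES = ["zero", "id", "conv1x1", "conv3x3", "avg_pool"]

def sample_cell(num_edges=6):
    """Sample one NAS-Bench-201-style cell term with one primitive per edge."""
    return f"Cell({', '.join(random.choice(PRIMITIVES) for _ in range(num_edges))})"

random.seed(0)
macro = random.choice(MACRO_TERMS)        # search over the macro level ...
cell = sample_cell()                      # ... and over a single, shared cell
architecture = macro.replace("x1", cell)  # substitution: CF languages are closed under it
print(architecture)
```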
3 BAYESIAN OPTIMIZATION FOR ALGEBRAIC NEURAL ARCHITECTURE SEARCH

We propose a BO strategy, Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT), to efficiently search in the huge search spaces spanned by our algebraic architecture terms: we introduce a novel surrogate model which combines a Gaussian Process (GP) surrogate with a tailored kernel that leverages the hierarchical structure of algebraic neural architecture terms (see below), and adopt expected improvement as the acquisition function (Mockus et al., 1978). Given the discrete nature of architectures, we adopt ideas from grammar-guided genetic programming (McKay et al., 2010; Moss et al., 2020) for acquisition function optimization. Furthermore, to reduce wallclock time by leveraging parallel computing resources, we adapt the Kriging Believer (Ginsbourger et al., 2010) to select architectures at every search iteration so that we can train and evaluate them in parallel. Specifically, Kriging Believer assigns hallucinated values (i.e., the posterior mean) to pending evaluations at each iteration to avoid redundant evaluations. For a more detailed explanation of BANAT, please refer to Appendix F.

Hierarchical Weisfeiler-Lehman kernel (hWL)   Inspired by the state-of-the-art BO approach for NAS (Ru et al., 2021), we adopt the WL graph kernel (Shervashidze et al., 2011) in a GP surrogate, modeling performance of the algebraic architecture terms ω_i with the associated architectures Φ(ω_i). However, modeling solely based on the final architecture ignores the useful hierarchical information inherent in our algebraic representation. Moreover, the large size of the architectures also makes it difficult to use a single WL kernel to capture the more global topological patterns. Since our hierarchical construction can be viewed as a series of gradually unfolding architectures, with the final architecture containing only primitive computations, we propose a novel hierarchical kernel design assigning a WL kernel to each hierarchy and combining them in a weighted sum. To this end, we introduce fold operators F_l that remove algebraic terms beyond the l-th hierarchical level. For example, the fold operators F_1, F_2 and F_3 yield for the algebraic term ω (Equation 1)

F_3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) ,   (8)
F_2(ω) = Linear(Residual, Residual, fc) ,
F_1(ω) = Linear .

Note the similarity to the derivations in Figure 1. Furthermore note that, in practice, we also add the corresponding nonterminals to integrate information from our hierarchical construction process. We define our hierarchical WL kernel (hWL) for two architectures Φ(ω_i) and Φ(ω_j) with algebraic architecture terms ω_i or ω_j, respectively, constructed over a hierarchy of L levels, as follows:

k_hWL(ω_i, ω_j) = Σ_{l=2}^{L} λ_l · k_WL(Φ(F_l(ω_i)), Φ(F_l(ω_j))) ,   (9)

where the weights λ_l govern the importance of the learned graph information at different hierarchical levels (granularities of the architecture) and can be tuned (along with other hyperparameters of the GP) by maximizing the marginal likelihood. We omit l = 1 in the additive kernel as F_1(ω) does not contain any edge features, which are required for our WL kernel k_WL. For more details on our novel hierarchical kernel design, please refer to Appendix F.2.
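A compact sketch of Equation 9 (illustrative only; the per-level WL kernel `wl_kernel` is assumed to be supplied, e.g., by an existing graph-kernel package, and the graphs stand for Φ(F_l(ω))):

```python
import numpy as np

def hierarchical_wl_kernel(folds_i, folds_j, lambdas, wl_kernel):
    """Weighted sum of WL kernels over hierarchical levels l = 2, ..., L (Equation 9).

    folds_i / folds_j : lists of graphs [Phi(F_2(w)), ..., Phi(F_L(w))] for two terms.
    lambdas           : per-level weights, tuned by maximizing the GP marginal likelihood.
    wl_kernel         : callable (graph, graph) -> float, e.g. a Weisfeiler-Lehman
                        subtree kernel from a graph-kernel library of your choice.
    """
    return sum(
        lam * wl_kernel(gi, gj)
        for lam, gi, gj in zip(lambdas, folds_i, folds_j)
    )

def gram_matrix(all_folds, lambdas, wl_kernel):
    """Kernel (Gram) matrix over a set of architectures, for use in a GP surrogate."""
    n = len(all_folds)
    K = np.zeros((n, n))
    for a in range(n):
        for b in range(a, n):
            K[a, b] = K[b, a] = hierarchical_wl_kernel(all_folds[a], all_folds[b],
                                                       lambdas, wl_kernel)
    return K
```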
Our proposed kernel efficiently captures the information in all algebraic term construction levels, which substantially improves its search and surrogate regression performance on our search space, as demonstrated in Section 5.

Acquisition function optimization   To optimize the acquisition function, we adopt ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). For mutation, we randomly replace a sub-architecture term with a new randomly generated term, using the same nonterminal as start symbol. For crossover, we randomly swap two sub-architecture terms with the same corresponding nonterminal. We consider two crossover operators: a novel self-crossover operation swaps two sub-terms of a single architecture term, and the common crossover operation swaps sub-terms of two different architecture terms. Importantly, all evolutionary operations by design only result in valid terms. We provide examples for the evolutionary operations in Appendix F.

4 RELATED WORK

We discuss related works in NAS below and discuss works beyond NAS in Appendix G.

Neural Architecture Search   Neural Architecture Search (NAS) aims to automatically discover architectural patterns (or even entire architectures) (Elsken et al., 2019). Previous approaches used, e.g., reinforcement learning (Zoph & Le, 2017; Pham et al., 2018), evolution (Real et al., 2017), gradient descent (Liu et al., 2019b), or Bayesian Optimization (BO) (Kandasamy et al., 2018; White et al., 2021; Ru et al., 2021). To enable the effective use of BO on graph-like inputs for NAS, previous works have proposed to use a GP with specialized kernels (Kandasamy et al., 2018; Ru et al., 2021), encoding schemes (Ying et al., 2019; White et al., 2021), or graph neural networks as surrogate model (Ma et al., 2019; Shi et al., 2020; Zhang et al., 2019). Different to prior works, we explicitly leverage the hierarchical construction of architectures for modeling.

Searching for novel architectural patterns   Previous works mostly focused on finding a shared cell (Zoph et al., 2018) with a fixed macro architecture, while only a few works considered more expressive hierarchical search spaces (Liu et al., 2018; 2019a; Tan et al., 2019). The latter works considered hierarchical assembly (Liu et al., 2018), the combination of a cell- and network-level search space (Liu et al., 2019a; Zhang et al., 2020), evolution of network topologies (Miikkulainen et al., 2019), factorization of the search space (Tan et al., 2019), parameterization of a hierarchy of random graph generators (Ru et al., 2020), a formal language over computational graphs (Negrinho et al., 2019), or a hierarchical construction of TensorFlow programs (So et al., 2021). Similarly, our formalism allows us to design search spaces covering a general set of architecture design choices, but also permits the search for macro architectures with spatial resolution changes and multiple branches. We also handle spatial resolution changes without requiring post-hoc testing or resizing of the feature maps, unlike prior works (Stanley & Miikkulainen, 2002; Miikkulainen et al., 2019; Stanley et al., 2019).
Other works proposed approaches based on string rewriting systems (Kitano, 1990; Boers et al., 1993), cellular (or tree-structured) encoding schemes (Gruau, 1994; Luke & Spector, 1996; De Jong & Pollack, 2001; Cai et al., 2018), hyperedge replacement graph grammars (Luerssen & Powers, 2003; Luerssen, 2005), attribute grammars (Mouret & Doncieux, 2008), CFGs (Jacob & Rehder, 1993; Couchet et al., 2007; Ahmadizar et al., 2015; Ahmad et al., 2019; Assunção et al., 2017; 2019; Lima et al., 2019; de la Fuente Castillo et al., 2020), or And-Or-grammars (Li et al., 2019). Different to these prior works, we construct entire architectures with spatial resolution changes across multiple branches, and propose techniques to incorporate constraints and foster regularity. Orthogonal to the aforementioned approaches, Roberts et al. (2021) searched over neural (XD-)operations; their proposed XD-operations could replace our predefined primitive computations.

5 EXPERIMENTS

In this section, we investigate potential benefits of hierarchical search spaces and our search strategy BANAT. More specifically, we address the following questions:
Q1 Can hierarchical search spaces yield on par or superior architectures compared to cell-based search spaces with a limited number of evaluations?
Q2 Can our search strategy BANAT improve performance over common baselines?
Q3 Does leveraging the hierarchical information improve performance?
Q4 Do zero-cost proxies work in vast hierarchical search spaces?
Q5 Can we discover novel architectural patterns (e.g., activation functions)?
To answer questions Q1-Q4, we introduce a hierarchical search space based on the popular NAS-Bench-201 search space (Dong & Yang, 2020) in Section 5.1. To answer question Q5, we search for activation functions (Ramachandran et al., 2017) and defer the search space definition to Appendix J.1. We provide complementary results and analyses in Appendices I.2 and J.3.

5.1 HIERARCHICAL NAS-BENCH-201

We propose a hierarchical variant of the popular cell-based NAS-Bench-201 search space (Dong & Yang, 2020) by adding a hierarchical macro space (i.e., spatial resolution flow and wiring at the macro-level) and parameterizable convolutional blocks (i.e., choice of convolutions, activations, and normalizations). We express the hierarchical NAS-Bench-201 search space with the CFG G_h as follows:

D2    ::= Linear3(D1, D1, D0) | Linear3(D0, D1, D1) | Linear4(D1, D1, D0, D0)
D1    ::= Linear3(C, C, D) | Linear4(C, C, C, D) | Residual3(C, C, D, D)
D0    ::= Linear3(C, C, CL) | Linear4(C, C, C, CL) | Residual3(C, C, CL, CL)
D     ::= Linear2(CL, down) | Linear3(CL, CL, down) | Residual2(C, down, down)
C     ::= Linear2(CL, CL) | Linear3(CL, CL, CL) | Residual2(CL, CL, CL)
CL    ::= Cell(OP, OP, OP, OP, OP, OP)
OP    ::= zero | id | BLOCK | avg_pool
BLOCK ::= Linear3(ACT, CONV, NORM)
ACT   ::= relu | hardswish | mish
CONV  ::= conv1x1 | conv3x3 | dconv3x3
NORM  ::= batch | instance | layer .   (10)

See Appendix A for the terminal vocabulary of topological operators and primitive computations. The productions with the nonterminals {D2, D1, D0, D} define the spatial resolution flow and together with {C} define the macro architecture containing possibly multiple branches. The productions for {CL, OP} construct the NAS-Bench-201 cell and {BLOCK, ACT, CONV, NORM} parameterize the convolutional block.
To ensure that we use the same distribution over the primitive computations as in NAS-Bench-201, we reweigh the sampling probabilities of the productions generated by the nonterminal OP, i.e., all production choices have a sampling probability of 20%, but BLOCK has 40%. Note that we omit the stem (i.e., 3x3 convolution followed by batch normalization) and classifier (i.e., batch normalization followed by ReLU, global average pooling, and fully-connected layer) for simplicity. We implemented the merge operation as element-wise summation. Different to the cell-based NAS-Bench-201 search space, we exclude degenerated architectures by introducing a constraint that ensures that each subterm maps the input to the output (i.e., in the associated computational graph there is at least one path from source to sink). Our search space consists of ca. 10^446 algebraic architecture terms (please refer to Appendix C on how to compute the search space size), which is significantly larger than other popular search spaces from the literature. For comparison, the cell-based NAS-Bench-201 search space is just a minuscule subspace of size 10^4.18, where we apply only the blue-colored production rules and replace the CL nonterminals with a placeholder terminal x1 that will be substituted by the searched, shared cell.

5.2 EVALUATION DETAILS

For all search experiments, we compared the search strategies BANAT, Random Search (RS), Regularized Evolution (RE) (Real et al., 2019; Liu et al., 2018), and BANAT (WL) (Ru et al., 2021). For implementation details of the search strategies, please refer to Appendix H. We ran search for a total of 100 evaluations with a random initial design of 10 on three seeds {777, 888, 999} on the hierarchical NAS-Bench-201 search space, or 1000 evaluations with a random initial design of 50 on one seed {777} on the activation function search space, using 8 asynchronous workers each with a single NVIDIA RTX 2080 Ti GPU. In each evaluation, we fully trained the architectures and recorded their last validation error. For training details on the hierarchical NAS-Bench-201 search space and the activation function search space, please refer to Appendix I.1 or Appendix J.2, respectively. To assess the modeling performance of our surrogate, we compared the regression performance of GPs with different kernels, i.e., our hierarchical WL kernel (hWL), the (standard) WL kernel (Ru et al., 2021), and NASBOT's kernel (Kandasamy et al., 2018). We also tried the GCN encoding (Shi et al., 2020) but it could not capture the mapping from the complex graph space to performance, resulting in constant performance predictions. Further, note that the adjacency encoding (Ying et al., 2019) and path encoding (White et al., 2021) cannot be used in our hierarchical search spaces since the former requires the same number of nodes across graphs and the latter scales exponentially in the number of nodes. We ran 20 trials over the seeds {0, 1, ..., 19} and re-used the data from the search runs. In every trial, we sampled a training and test set of 700 or 500 architecture and validation error pairs, respectively. We fitted the surrogates with a varying number of training samples by randomly choosing samples from the training set without replacement, and recorded Kendall's τ rank correlation between the predicted and true validation error. To assess zero-cost proxies, we re-used the data from the search runs and recorded Kendall's τ rank correlation.
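A minimal sketch of this surrogate regression protocol (illustrative only; `fit_gp` and the sampled architecture/error pairs are placeholders for the actual surrogate and search data):

```python
import numpy as np
from scipy.stats import kendalltau

def rank_correlation(train_set, test_set, fit_gp, n_train, rng):
    """Fit a surrogate on n_train random training pairs; report Kendall's tau on the test set.

    train_set / test_set : lists of (architecture, validation_error) pairs.
    fit_gp               : callable that fits a GP surrogate (e.g. with the hWL kernel) and
                           returns a predict(architectures) -> predicted_errors function.
    rng                  : numpy random Generator, e.g. np.random.default_rng(seed).
    """
    idx = rng.choice(len(train_set), size=n_train, replace=False)
    archs, errors = zip(*[train_set[i] for i in idx])
    predict = fit_gp(list(archs), np.asarray(errors))

    test_archs, test_errors = zip(*test_set)
    predictions = predict(list(test_archs))
    tau, _ = kendalltau(predictions, test_errors)
    return tau
```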
5.3 RESULTS

In the following we answer all of the questions Q1-Q5. Figure 2 compares the results of the cell-based and hierarchical search space design using our search strategy BANAT. Results with BANAT are on par on CIFAR-10/100, superior on ImageNet-16-120, and clearly superior on CIFARTile and AddNIST (answering Q1). We emphasize that the NAS community has engineered the cell-based search space to achieve strong performance on those popular image classification datasets for over a decade, making it unsurprising that our improvements are much larger for the novel datasets. Yet, our best found architecture on ImageNet-16-120 from the hierarchical search space also achieves an excellent test error of 52.78% with only 0.626MB parameters (Appendix I.2); this is superior to the architecture found by the state-of-the-art method Shapley-NAS (i.e., 53.15%) (Xiao et al., 2022) and on par with the optimal architecture of the cell-based NAS-Bench-201 search space (i.e., 52.69% with 0.866MB).

[Figure 3: Comparison of search strategies on the hierarchical search space. We plot mean and ±1 standard error of the validation error on the hierarchical NAS-Bench-201 search space for our search strategy BANAT (solid blue), RS (dashed orange), RE (dotted green), and BANAT (WL) (dash-dotted red). We report test errors, best architectures, and conduct further analyses in Appendix I.2.]

Figure 3 shows that our search strategy BANAT is also superior to common baselines (answering Q2) and that leveraging hierarchical information clearly improves performance (answering Q3). Further, the evaluation of surrogate performance in Figure 4 shows that incorporating hierarchical information with our hierarchical WL kernel (hWL) improves modeling, especially on smaller amounts of training data (further answering Q3). Table 1 shows that the baseline zero-cost proxies flops and l2-norm yield competitive (or often superior) results compared to more sophisticated zero-cost proxies, making hierarchical search spaces an interesting future research direction for them (answering Q4). Finally, Table 2 shows that we can find novel well-performing activation functions from basic mathematical operations with BANAT (answering Q5).

6 DISCUSSION AND LIMITATIONS

While our grammar-based construction mechanism is a powerful way to construct huge hierarchical search spaces, we cannot construct any architecture with our grammar-based construction approach (Sections 2.2 and 2.3) since we are limited to context-free languages; e.g., architectures of the type {a^n b^n c^n | n ∈ N_{>0}} cannot be generated by CFGs (this can be proven using Ogden's lemma (Ogden, 1968)). Further, due to the discrete nature of CFGs we cannot easily integrate continuous design choices, e.g., dropout probability. Furthermore, our grammar-based mechanism does not (generally) support simple scalability of discovered neural architectures (e.g., repetition of building blocks) without special consideration in the search space design.
Nevertheless, our search spaces still significantly increase the expressiveness, including the ability to represent common search spaces from the literature (see Appendix E for how we can represent the search spaces of DARTS, Auto-DeepLab, the hierarchical cell search space of Liu et al. (2018), the Mobile-net search space, and the hierarchical random graph generator search space), as well as allowing search for entire neural architectures based around the popular NAS-Bench-201 search space (Section 5). Thus, our search space design can facilitate the discovery of novel well-performing neural architectures in those huge search spaces of algebraic architecture terms. However, there is an inherent trade-off between expressiveness and the difficulty of search. The much greater expressiveness facilitates search in a richer set of architectures that may include better architectures than more restrictive search spaces do, although such architectures need not exist. Besides that, the (potential) existence of such a well-performing architecture does not guarantee that a search strategy will discover it, even with large amounts of computing power available. Note that the trade-off also manifests itself in the acquisition function optimization of our search strategy BANAT. In addition, a well-performing neural architecture may not work with current training protocols and hyperparameters due to interaction effects, i.e., training protocols and hyperparameters may be over-optimized for specific types of neural architectures. To overcome this limitation, one could consider a joint optimization of neural architectures, training protocols, and hyperparameters. However, this further fuels the trade-off between expressiveness and the difficulty of search.

7 CONCLUSION

We introduced very expressive search spaces of algebraic architecture terms constructed with CFGs. To efficiently search over the huge search spaces, we proposed BANAT, an efficient BO strategy with a tailored kernel leveraging the available hierarchical information. Our experiments indicate that both our search space design and our search strategy can yield strong performance over existing baselines. Our results motivate further steps towards the discovery of neural architectures based on even more atomic primitive computations. Furthermore, future works could (simultaneously) learn the search space (i.e., learn the grammar) or improve search efficiency by means of multi-fidelity optimization or gradient-based search strategies.

REPRODUCIBILITY STATEMENT

To ensure reproducibility, we address all points of the best practices checklist for NAS research (Lindauer & Hutter, 2020) in Appendix K.

ETHICS STATEMENT

NAS has immense potential to facilitate systematic, automated discovery of high-performing (novel) architecture designs. However, the restrictive cell-based search spaces most commonly used in NAS render it impossible to discover truly novel neural architectures. With our general formalism based on algebraic terms, we hope to provide a fertile foundation towards discovering high-performing and efficient architectures, potentially from scratch. However, search in such huge search spaces is expensive, particularly in the context of the ongoing detrimental climate crisis.
While on the one hand the discovered neural architectures, like other AI technologies, could potentially be exploited to have a negative societal impact, on the other hand our work could also lead to advances across scientific disciplines like healthcare and chemistry.

A FROM TERMINALS TO PRIMITIVE COMPUTATIONS AND TOPOLOGICAL OPERATORS

Table 3 and Figure 5 describe the primitive computations and topological operators used throughout our experiments in Section 5 and Appendix I, respectively. Note that by adding more primitive computations and/or topological operators we could construct even more expressive search spaces.

B EXTENDED BACKUS-NAUR FORM

The (extended) Backus-Naur form (Backus, 1959) is a meta-language to describe the syntax of CFGs. We use meta-rules of the form S ::= α, where S ∈ N is a nonterminal and α ∈ (N ∪ Σ)* is a string of nonterminals and/or terminals. We denote nonterminals in UPPER CASE, terminals corresponding to topological operators in Initial upper case/teletype, and terminals corresponding to primitive computations in lower case/teletype, e.g., S ::= Residual(S, S, id). To compactly express production rules with the same left-hand side nonterminal, we use the vertical bar | to indicate a choice of production rules with the same left-hand side, e.g., S ::= Linear(S, S, S) | Residual(S, S, id) | conv.

C SEARCH SPACE SIZE

In this section, we show how to efficiently compute the size of our search spaces constructed by CFGs. There are two cases to consider: (i) a CFG contains cycles (i.e., part of the derivation can be repeated infinitely many times), yielding an open-ended, infinite search space; and (ii) a CFG contains no cycles, yielding a finite search space whose size we can compute. Consider a production A → Residual(B, B, B), where Residual is a terminal, and A and B are nonterminals with B → conv | id. Consequently, there are 2^3 = 8 possible instances of the residual block. If we add another production choice for the nonterminal A, e.g., A → Linear(B, B, B), we would have 2^3 + 2^3 = 16 possible instances. Further, adding a production C → Linear(A, A, A) would yield a search space size of (2^3 + 2^3)^3 = 4096. More generally, we introduce the function P_A that returns the set of productions for a nonterminal A ∈ N, and the function µ that returns all the nonterminals of a production p ∈ P. We can then recursively compute the size of the search space as follows:

f(A) = Σ_{p ∈ P_A} ( 1 if µ(p) = ∅, else Π_{A′ ∈ µ(p)} f(A′) ) .   (11)

When a CFG contains some constraint, we ensure to only account for valid architectures (i.e., compliant with the constraints) by ignoring productions which would lead to invalid architectures.

D MORE DETAILS ON SEARCH SPACE CONSTRAINTS

During the design of the search space, we may want to comply with some constraints, e.g., only consider valid neural architectures or impose structural constraints on architectures. We can guarantee compliance with constraints by modifying sampling (and evolution): we only allow the application of production rules which guarantee compliance with the constraint(s). In the following, we show by example how this can be implemented for the former constraint mentioned above. Note that other constraints can be implemented in a similar manner. To implement the constraint "only consider valid neural architectures", we note that our search space design only creates neural architectures where neither the spatial resolution nor the channels can be mismatched; please refer to Section 2.3 for details. Thus, the only way a neural architecture can become invalid is through zero operations, which could remove edges from the computational graph and possibly disassociate the input from the output. Since we recursively assemble neural architectures, it is sufficient to ensure that the derived algebraic architecture term (i.e., the associated computational graph) is compliant with the constraint, i.e., there is at least one path from input to output. Thus, during sampling (and similarly during evolution), we modify the current production rule choices when an application of the zero operation would disassociate the input from the output.
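A small sketch of the recursion in Equation 11 for acyclic grammars, using the toy productions from the example in Appendix C (the dictionary encoding is a hypothetical illustration, not the released API):

```python
from math import prod

# Each nonterminal maps to its productions; each production lists only the nonterminals it
# uses (terminals such as Residual, conv, or id do not contribute to the count).
PRODUCTIONS = {
    "A": [["B", "B", "B"],   # A -> Residual(B, B, B)
          ["B", "B", "B"]],  # A -> Linear(B, B, B)
    "B": [[], []],           # B -> conv | id
    "C": [["A", "A", "A"]],  # C -> Linear(A, A, A)
}

def search_space_size(nonterminal):
    """Recursive search-space size from Equation (11); assumes the grammar has no cycles."""
    total = 0
    for nonterminals_in_production in PRODUCTIONS[nonterminal]:
        if not nonterminals_in_production:          # production uses only terminals
            total += 1
        else:
            total += prod(search_space_size(a) for a in nonterminals_in_production)
    return total

print(search_space_size("B"))  # 2
print(search_space_size("A"))  # 2^3 + 2^3 = 16
print(search_space_size("C"))  # (2^3 + 2^3)^3 = 4096
```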
E COMMON SEARCH SPACES FROM THE LITERATURE

In Section 5.1, we demonstrated how to construct the popular NAS-Bench-201 search space within our algebraic search space design, and below we show how to reconstruct the following popular search spaces: the DARTS search space (Liu et al., 2019b), the Auto-DeepLab search space (Liu et al., 2019a), the hierarchical cell search space (Liu et al., 2018), the Mobile-net search space (Tan et al., 2019), and the hierarchical random graph generator search space (Ru et al., 2020). For implementation details we refer to the respective works.

DARTS SEARCH SPACE

The DARTS search space (Liu et al., 2019b) consists of a fixed macro architecture and a cell, i.e., a seven node directed acyclic graph (Darts; see Figure 6 for the topological operator). We omit the fixed macro architecture from our search space design for simplicity. Each cell receives the feature maps from the two preceding cells as input and outputs a single feature map. All intermediate nodes (i.e., Node3, Node4, Node5, and Node6) are computed based on all of their predecessors. Thus, we can define the DARTS search space as follows:

DARTS ::= Darts(NODE3, NODE4, NODE5, NODE6)
NODE3 ::= Node3(OP, OP)
NODE4 ::= Node4(OP, OP, OP)
NODE5 ::= Node5(OP, OP, OP, OP)
NODE6 ::= Node6(OP, OP, OP, OP, OP)
OP    ::= sep_conv_3x3 | sep_conv_5x5 | dil_conv_3x3 | dil_conv_5x5 | max_pool | avg_pool | id | zero ,   (12)

where the topological operator Node3 receives two inputs, applies the operations separately on them, and sums them up. Similarly, Node4, Node5, and Node6 apply their operations separately to the given inputs and sum them up. The topological operator Darts feeds the corresponding feature maps into each of those topological operators and finally concatenates all intermediate feature maps.

AUTO-DEEPLAB SEARCH SPACE

Auto-DeepLab (Liu et al., 2019a) combines a cell-level with a network-level search space to search for segmentation networks, where the cell is shared across the searched macro architecture, i.e., a twelve step (linear) path across different spatial resolutions. The cell-level design is adopted from Liu et al. (2019b) and, thus, we can re-use the CFG from Equation 12. For the network-level, we introduce a constraint that ensures that the path is of length twelve, i.e., we ensure exactly twelve derivations in our CFG. Further, we overload the nonterminals so that they correspond to the respective spatial resolution level, e.g., D4 indicates that the original input is downsampled by a factor of four; please refer to Section 2.3 for details on overloading nonterminals.
For the sake of simplicity, we omit the first two layers and atrous spatial pyramid poolings as they are fixed, and hence define the network-level search space as follows:

D4  ::= Same(CELL, D4) | Down(CELL, D8)
D8  ::= Up(CELL, D4) | Same(CELL, D8) | Down(CELL, D16)
D16 ::= Up(CELL, D8) | Same(CELL, D16) | Down(CELL, D32)
D32 ::= Up(CELL, D16) | Same(CELL, D32) ,   (13)

where the topological operators Up, Same, and Down upsample/halve, do not change/do not change, or downsample/double the spatial resolution/channels, respectively. The placeholder variable CELL maps to the shared DARTS cell from the language generated by the CFG from Equation 12.

HIERARCHICAL CELL SEARCH SPACE

The hierarchical cell search space (Liu et al., 2018) consists of a fixed (linear) macro architecture and a hierarchically assembled cell with three levels which is shared across the macro architecture. Thus, we can omit the fixed macro architecture from our search space design for simplicity. Their first, second, and third hierarchical levels correspond to the primitive computations (i.e., id, max_pool, avg_pool, sep_conv, depth_conv, conv, zero), six densely connected four node directed acyclic graphs (DAG4), and a densely connected five node directed acyclic graph (DAG5), respectively. The zero operation could lead to directed acyclic graphs which have fewer nodes. Therefore, we introduce a constraint enforcing that there are always four (level 2) or five (level 3) nodes for every directed acyclic graph. Further, since a densely connected five node directed acyclic graph has ten edges, we need to introduce placeholder variables (i.e., M1, ..., M6) to enforce that only six (possibly) different four node directed acyclic graphs are used, and consequently define a CFG for the third level

LEVEL3 ::= DAG5(LEVEL2, ..., LEVEL2)   [10 arguments]
LEVEL2 ::= M1 | M2 | M3 | M4 | M5 | M6 | zero ,   (14)

mapping the placeholder variables M1, ..., M6 to the six lower-level motifs constructed by the first and second hierarchical level

LEVEL2 ::= DAG4(LEVEL1, ..., LEVEL1)   [6 arguments]
LEVEL1 ::= id | max_pool | avg_pool | sep_conv | depth_conv | conv | zero .   (15)

MOBILE-NET SEARCH SPACE

Factorized hierarchical search spaces, e.g., the Mobile-net search space (Tan et al., 2019), allow for layer diversity. They factorize a (fixed) macro architecture – often based on an already well-performing reference architecture – into separate blocks (e.g., cells). For the sake of simplicity, we assume here an architecture (Linear) with three sequential blocks (Block). In each of those blocks, we search for the convolution operations (CONV), kernel sizes (KSIZE), squeeze-and-excitation ratio (SERATIO) (Hu et al., 2018), skip connections (SKIP), number of output channels (FSIZE), and number of layers per block (#LAYERS), where the latter two are discretized using a reference architecture, e.g., MobileNetV2 (Sandler et al., 2018). Consequently, we can express this search space as follows:

MACRO   ::= Linear(BLOCK, BLOCK, BLOCK)
BLOCK   ::= Block(CONV, KSIZE, SERATIO, SKIP, FSIZE, #LAYERS)
CONV    ::= conv | dconv | mbconv
KSIZE   ::= 3 | 5
SERATIO ::= 0 | 0.25
SKIP    ::= pooling | id_residual | no_skip
FSIZE   ::= 0.75 | 1.0 | 1.25
#LAYERS ::= -1 | 0 | 1 ,   (16)

where conv, dconv and mbconv correspond to convolution, depthwise convolution, and mobile inverted bottleneck convolution (Sandler et al., 2018), respectively.
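As a quick, illustrative sanity check (counting raw production-choice combinations only, in the spirit of the recursion from Appendix C, and ignoring any constraints), the per-block and macro sizes implied by Equation 16 can be computed directly:

```python
from math import prod

# Number of production choices per nonterminal in Equation (16); a BLOCK combines one
# choice per slot, and the MACRO stacks three independently searched blocks.
choices = {"CONV": 3, "KSIZE": 2, "SERATIO": 2, "SKIP": 3, "FSIZE": 3, "#LAYERS": 3}

block_size = prod(choices.values())   # 3 * 2 * 2 * 3 * 3 * 3 = 324
macro_size = block_size ** 3          # three independently searched blocks

print(block_size, macro_size)         # 324 and 34012224
```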
HIERARCHICAL RANDOM GRAPH GENERATOR SEARCH SPACE

The hierarchical random graph generator search space (Ru et al., 2020) consists of three hierarchical levels of random graph generators (i.e., Watts-Strogatz (Watts & Strogatz, 1998) and Erdős-Rényi (Erdős et al., 1960)). We denote with Watts-Strogatz_i the random graph generated by the Watts-Strogatz model with i nodes. Thus, we can represent the search space as follows:

TOP ::= Watts-Strogatz_3(K, Pt)(MID, MID, MID) | ... | Watts-Strogatz_10(K, Pt)(MID, ..., MID)   [10 arguments]
MID ::= Erdős-Rényi_1(Pm)(BOT) | ... | Erdős-Rényi_10(Pm)(BOT, ..., BOT)   [10 arguments]
BOT ::= Watts-Strogatz_3(K, Pb)(NODE, NODE, NODE) | ... | Watts-Strogatz_10(K, Pb)(NODE, ..., NODE)   [10 arguments]
K   ::= 2 | 3 | 4 | 5 ,   (17)

where each terminal Pt, Pm, and Pb maps to a continuous number in [0.1, 0.9] (see Footnote 1) and the placeholder variable NODE maps to a primitive computation, e.g., separable convolution. Note that we omit other hyperparameters, such as stage ratio, channel ratio, etc., for simplicity.

F MORE DETAILS ON THE SEARCH STRATEGY

In this section, we provide more details and examples for our search strategy Bayesian Optimization for Algebraic Neural Architecture Terms (BANAT) presented in Section 3.

F.1 BAYESIAN OPTIMIZATION

Bayesian Optimization (BO) is a powerful family of search techniques for finding the global optimum of a black-box objective problem. It is particularly useful when the objective is expensive to evaluate and thus sample efficiency is highly important (Brochu et al., 2010). To minimize a black-box objective problem with BO, we first need to build a probabilistic surrogate to model the objective based on the observed data so far. Based on the surrogate model, we design an acquisition function to evaluate the utility of potential candidate points by trading off exploitation (where the posterior mean of the surrogate model is low) and exploration (where the posterior variance of the surrogate model is high). The next candidate point to evaluate is then selected by maximizing the acquisition function (Shahriari et al., 2015). The general procedure of BO is summarized in Algorithm 1.

Algorithm 1: Bayesian Optimization algorithm (Brochu et al., 2010).
Input: Initial observed data D_t, a black-box objective function f, total number of BO iterations T
Output: The best recommendation about the global optimizer x*
for t = 1, ..., T do
  Select the next x_{t+1} by maximizing the acquisition function α(x|D_t)
  Evaluate the objective function at f_{t+1} = f(x_{t+1})
  D_{t+1} ← D_t ∪ (x_{t+1}, f_{t+1})
  Update the surrogate model with D_{t+1}
end for

We adopted the widely used acquisition function, expected improvement (EI) (Mockus et al., 1978), in our BO strategy. EI evaluates the expected amount of improvement of a candidate point x over the minimal value f′ observed so far. Specifically, denoting the improvement function as I(x) = max(0, f′ − f(x)), the EI acquisition function has the form

α_EI(x|D_t) = E[I(x)|D_t] = ∫_{−∞}^{f′} (f′ − f) N(f; µ(x|D_t), σ²(x|D_t)) df
            = (f′ − µ(x|D_t)) Φ(f′; µ(x|D_t), σ²(x|D_t)) + σ²(x|D_t) ϕ(f′; µ(x|D_t), σ²(x|D_t)) ,

where µ(x|D_t) and σ²(x|D_t) are the mean and variance of the predictive posterior distribution at a candidate point x, and ϕ(·; µ, σ²) and Φ(·; µ, σ²) denote the PDF and CDF of the corresponding normal distribution, respectively.
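A small numerical sketch of this EI computation (illustrative only; `mu` and `sigma` stand for the GP posterior mean and standard deviation at candidate points, and the toy values below are made up):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, eps=1e-12):
    """EI for minimization: E[max(0, f_best - f(x))] under a Gaussian posterior.

    mu, sigma : arrays of posterior means and standard deviations at candidate points.
    f_best    : best (lowest) observed objective value so far.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    z = (f_best - mu) / np.maximum(sigma, eps)
    ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > eps, ei, 0.0)

# Toy usage: pick the candidate with the highest expected improvement.
mu = np.array([0.30, 0.25, 0.28])
sigma = np.array([0.01, 0.05, 0.10])
print(int(np.argmax(expected_improvement(mu, sigma, f_best=0.27))))
```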
To make use of ample distributed computing resources, we adopted Kriging Believer (Ginsbourger et al., 2010), which uses the predictive posterior of the surrogate model to assign hallucinated function values {f̃_p}_{p∈{1,...,P}} to the P candidate points with pending evaluations {x̃_p}_{p∈{1,...,P}} and performs the next BO recommendation in the batch by pseudo-augmenting the observation data with D̃_p = {(x̃_p, f̃_p)}_{p∈{1,...,P}}, namely D̃_t = D_t ∪ D̃_p. The algorithm of Kriging Believer at one BO iteration to select a batch of recommended candidate points is summarized in Algorithm 2.

Footnote 1: Theoretically, this is not possible with CFGs. However, we can extend the notion of substitution by substituting a string representation of a Python (float) variable for the placeholder variables Pt, Pm, and Pb.

Algorithm 2: Kriging Believer algorithm to select one batch of points.
Input: Observation data D_t, batch size b
Output: The batch points B_{t+1} = {x^(1)_{t+1}, ..., x^(b)_{t+1}}
D̃_t = D_t ∪ D̃_p
for j = 1, ..., b do
  Select the next x^(j)_{t+1} by maximizing the acquisition function α(x|D̃_t)
  Compute the predictive posterior mean µ(x^(j)_{t+1}|D̃_t)
  D̃_t ← D̃_t ∪ (x^(j)_{t+1}, µ(x^(j)_{t+1}|D̃_t))
end for

Algorithm 3: Weisfeiler-Lehman subtree kernel computation (Shervashidze et al., 2011).
Input: Graphs G1, G2, maximum iterations H
Output: Kernel function value between the graphs
Initialize the feature vectors ϕ(G1) = ϕ_0(G1), ϕ(G2) = ϕ_0(G2) with the respective counts of original node labels (i.e., the h = 0 WL features)
for h = 1, ..., H do
  Assign a multiset M_h(v) = {l_{h−1}(u) | u ∈ N(v)} to each node v ∈ G, where l_{h−1} is the node label function of the (h−1)-th WL iteration and N is the node neighbor function
  Sort the elements in the multiset M_h(v) and concatenate them to the string s_h(v)
  Compress each string s_h(v) using the hash function f s.t. f(s_h(v)) = f(s_h(u)) ⟺ s_h(v) = s_h(u)
  Add l_{h−1} as prefix for s_h(v)
  Concatenate the WL features ϕ_h(G1), ϕ_h(G2) with the respective counts of the new labels: ϕ(G1) = [ϕ(G1), ϕ_h(G1)], ϕ(G2) = [ϕ(G2), ϕ_h(G2)]
  Set l_h(v) := f(s_h(v)) ∀v ∈ G
end for
Compute the inner product k = ⟨ϕ(G1), ϕ(G2)⟩ between the WL features ϕ(G1), ϕ(G2) in the RKHS H

F.2 HIERARCHICAL WEISFEILER-LEHMAN KERNEL

Inspired by Ru et al. (2021), we adopted the Weisfeiler-Lehman (WL) graph kernel (Shervashidze et al., 2011) in the GP surrogate model to handle the graph nature of neural architectures. The basic idea of the WL kernel is to first compare node labels, and then iteratively aggregate labels of neighboring nodes, compress them into a new label, and compare them. Algorithm 3 summarizes the WL kernel procedure. Ru et al. (2021) identified three reasons for using the WL kernel: (1) it is able to compare labeled and directed graphs of different sizes, (2) it is expressive, and (3) it is relatively efficient and scalable. Our search space design can afford a diverse spectrum of neural architectures with very heterogeneous topological structure. Therefore, reason (1) is a very important property of the WL kernel to account for the diversity of neural architectures. Moreover, if we allow many hierarchical levels, we can construct very large neural architectures. Therefore, reasons (2) and (3) are essential for accurate and fast modeling. However, neural architectures in our search spaces may be significantly larger, which makes it difficult for a single WL kernel to capture the more global topological patterns. Moreover, modeling solely based on the final neural architecture ignores the useful macro-level information from earlier hierarchical levels.
In our experiments (Sections 5 and I), we have found stronger neural architectures by incorporating the hierarchical information in the kernel design, which provides experimental support for the above arguments. However, modeling solely based on the (standard) WL graph kernel neglects the useful hierarchical information from our assembly process. Moreover, the large size of neural architectures makes it still challenging to capture the more global topological patterns. We therefore propose to use hierarchical information through a hierarchy of WL graph kernels that take into account the different granularities of the architectures and combine them in a weighted sum. To obtain the different granularities, we use the fold operators F_l that remove algebraic terms beyond the l-th hierarchical level. Thereby, we obtain the folds

F_3(ω) = ω = Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) ,   (18)
F_2(ω) = Linear(Residual, Residual, fc) ,
F_1(ω) = Linear ,

for the algebraic architecture term ω. Note that we ignore the first fold since it does not represent a labeled DAG. Figure 7 visualizes the labeled graphs Φ(F_2) and Φ(F_3) of the folds F_2 or F_3, respectively. These graphs can be fed into (standard) WL graph kernels. Therefore, we can construct a hierarchy of WL graph kernels k_WL as follows:

k_hWL(ω_i, ω_j) = Σ_{l=2}^{L} λ_l · k_WL(Φ(F_l(ω_i)), Φ(F_l(ω_j))) ,   (19)

where ω_i and ω_j are two algebraic architecture terms. Note that the λ_l govern the importance of the learned graph information across the hierarchical levels and can be optimized through the marginal likelihood.

F.3 EXAMPLES FOR THE EVOLUTIONARY OPERATIONS

For the evolutionary operations, we adopted ideas from grammar-based genetic programming (McKay et al., 2010; Moss et al., 2020). In the following, we will show how these evolutionary operations manipulate algebraic terms, e.g.,

Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) ,   (20)

from the search space

S ::= Linear(S, S, S) | Residual(S, S, S) | conv | id | fc ,   (21)

to generate evolved algebraic terms. Figure 1 shows how we can derive the algebraic term in Equation 20 from the search space in Equation 21. For mutation operations, we first randomly pick a subterm of the algebraic term, e.g., Residual(conv, id, conv). Then, we randomly sample a new subterm with the same nonterminal symbol S as start symbol, e.g., Linear(conv, id, fc), and replace the previous subterm, yielding

Linear(Linear(conv, id, fc), Residual(conv, id, conv), fc) .   (22)

For (self-)crossover operations, we swap two subterms, e.g., Residual(conv, id, conv) and Residual(conv, id, conv), with the same nonterminal S as start symbol, yielding

Linear(Residual(conv, id, conv), Residual(conv, id, conv), fc) .   (23)

Note that unlike the commonly used crossover operation, which uses two parents, self-crossover has only one parent. In future work, we could also add a self-copy operation that copies a subterm to another part of the algebraic term, explicitly regularizing diversity and thus potentially speeding up the search.
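A minimal sketch of these two operations on terms represented as nested tuples (illustrative only; the nonterminal bookkeeping is simplified to the single nonterminal S of Equation 21, and `sample_term` is assumed to be a grammar sampler such as the one sketched in Section 2.2):

```python
import random

# Terms are nested tuples: (operator, [children]) for topological operators, or plain
# strings for primitive computations; all subterms derive from the nonterminal S.

def subterm_paths(term, path=()):
    """Enumerate paths to all subterms (the root path () included)."""
    yield path
    if not isinstance(term, str):
        for i, child in enumerate(term[1]):
            yield from subterm_paths(child, path + (i,))

def get_at(term, path):
    for i in path:
        term = term[1][i]
    return term

def replace_at(term, path, new_subterm):
    if not path:
        return new_subterm
    op, children = term
    children = list(children)
    children[path[0]] = replace_at(children[path[0]], path[1:], new_subterm)
    return (op, children)

def mutate(term, sample_term):
    """Replace a random subterm by a freshly sampled term (same start nonterminal S)."""
    path = random.choice(list(subterm_paths(term)))
    return replace_at(term, path, sample_term())

def self_crossover(term):
    """Swap two random, non-nested subterms within a single parent term."""
    paths = list(subterm_paths(term))
    candidates = [(a, b) for i, a in enumerate(paths) for b in paths[i + 1:]
                  if a[:len(b)] != b and b[:len(a)] != a]   # neither path contains the other
    if not candidates:                                      # no two disjoint subterms to swap
        return term
    p1, p2 = random.choice(candidates)
    s1, s2 = get_at(term, p1), get_at(term, p2)
    return replace_at(replace_at(term, p1, s2), p2, s1)
```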
G RELATED WORK BEYOND NEURAL ARCHITECTURE SEARCH

While our work focuses exclusively on NAS, we will discuss below how it relates to the areas of optimizer search (as well as from-scratch automated machine learning) and neural-symbolic programming. Optimizer search is a closely related field to NAS, where we automatically search for an optimizer (i.e., an update function for the weights) instead of an architecture. Initial works used learnable parametric or non-parametric optimizers. While the former approaches (Andrychowicz et al., 2016; Li & Malik, 2017; Chen et al., 2017; 2022a) have poor scalability and generality, the latter works overcome those limitations. Bello et al. (2017) searched for an instantiation of hand-crafted patterns via reinforcement learning, while Wang et al. (2022) proposed a tree-structured search space (see Footnote 2) and searched for optimizers via a modified Monte Carlo sampling approach. AutoML-Zero (Real et al., 2020) took an even more general approach by searching over entire machine learning algorithms, including optimizers, from a generic search space built from basic mathematical operations with an evolutionary algorithm. Chen et al. (2022b) used RE to discover optimizers from a generic search space (inspired by AutoML-Zero) for training vision transformers (Dosovitskiy et al., 2021). Complementary to the above, there is recent interest in automatically synthesizing programs from domain-specific languages. Gaunt et al. (2017) proposed a hand-crafted program template and simultaneously optimized the parameters of the differentiable program with gradient descent. The HOUDINI framework (Valkov et al., 2018) proposed type-directed (top-down) enumeration and evolution approaches over differentiable functional programs. Shah et al. (2020) hierarchically assembled differentiable programs and used neural networks for the approximation of missing expressions in partial programs. Cui & Zhu (2021) treated CFGs stochastically with trainable production rule sampling weights, which were optimized with a gradient-based approach (Liu et al., 2019b). However, naïvely applying gradient-based approaches does not work in our search spaces due to the exponential explosion of supernet weights, but it still renders an interesting direction for future work. Compared to these lines of work, we extended CFGs to handle changes in spatial resolution, promote regularity, and (compared to most of them) incorporate constraints, the latter two of which could also be applied in those domains. We also proposed a BO search strategy to search efficiently with a tailored kernel design to handle the hierarchical nature of the search space (i.e., the architectures).

H IMPLEMENTATION DETAILS OF THE SEARCH STRATEGIES

BANAT & BANAT (WL)   The only difference between BANAT and BANAT (WL) is that the former uses our proposed hierarchy of WL kernels (hWL), whereas the latter only uses a single WL kernel (WL) for the entire architecture (cf. Ru et al. (2021)). We ran BANAT asynchronously in parallel throughout our experiments with a batch size of B = 1, i.e., at each BO iteration a single architecture is proposed for evaluation. For the acquisition function optimization, we used a pool size of P = 200, where the initial population consisted of the current ten best-performing architectures and the remainder were randomly sampled architectures to encourage exploration in the huge search spaces. During evolution, the mutation probability was set to p_mut = 0.5 and the crossover probability was set to p_cross = 0.5. Of the crossovers, half were self-crossovers of one parent and the other half were common crossovers between two parents. The tournament selection probability was set to p_tour = 0.2. We evolved the population for at least ten and at most 50 iterations, using an early stopping criterion based on the fitness value improvements over the last five iterations.
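Putting the stated hyperparameters together, the acquisition optimization loop can be sketched roughly as follows (illustrative Python only, not the released implementation; `mutate`, `crossover`, `sample_random_term`, and `acquisition` are placeholders, and the parent-selection step is a simplified stand-in for tournament selection):

```python
import random

POOL_SIZE, P_MUT, P_TOUR = 200, 0.5, 0.2
MIN_ITERS, MAX_ITERS, PATIENCE = 10, 50, 5

def optimize_acquisition(best_observed, sample_random_term, mutate, crossover, acquisition):
    """Evolve a pool of candidate terms and return the one maximizing the acquisition value."""
    # Initial pool: the ten best observed architectures plus random terms for exploration.
    pool = list(best_observed[:10]) + [sample_random_term() for _ in range(POOL_SIZE - 10)]
    best_history = []
    for it in range(MAX_ITERS):
        scored = sorted(pool, key=acquisition, reverse=True)
        best_history.append(acquisition(scored[0]))
        # Early stopping: no improvement of the best acquisition value over recent iterations.
        if it >= MIN_ITERS and best_history[-1] <= best_history[-PATIENCE]:
            break
        parents = scored[: int(P_TOUR * POOL_SIZE)]        # simplified tournament-style selection
        children = []
        while len(children) < POOL_SIZE:
            if random.random() < P_MUT:                    # mutation vs. crossover, each prob. 0.5
                children.append(mutate(random.choice(parents)))
            else:
                children.append(crossover(random.choice(parents), random.choice(parents)))
        pool = children
    return max(pool, key=acquisition)
```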
Regularized Evolution (RE) RE (Real et al., 2019; Liu et al., 2018) iteratively mutates the best architectures out of a sample of the population. We reduced the population size from 50 to 30 to account for fewer evaluations, and used a sample size of 10. We also ran RE asynchronously for better comparability.
I SEARCHING THE HIERARCHICAL NAS-BENCH-201 SEARCH SPACE
In this section, we provide training details (Section I.1) as well as complementary results and extensive analyses (Section I.2).
Footnote 2: Note that the tree-structured search space can equivalently be described with a CFG (with a constraint on the maximum depth of the syntax trees).
I.1 TRAINING DETAILS
Training protocol We evaluated all search strategies on CIFAR-10/100 (Krizhevsky et al., 2009), ImageNet-16-120 (Chrabaszcz et al., 2017), CIFARTile, and AddNIST (Geada et al., 2021). Note that CIFARTile and AddNIST are novel datasets and therefore have not yet been optimized by the research community. We provide further dataset details below. For training of architectures on CIFAR-10/100 and ImageNet-16-120, we followed Dong & Yang (2020). We trained architectures with SGD with a learning rate of 0.1, Nesterov momentum of 0.9, weight decay of 0.0005 with cosine annealing (Loshchilov & Hutter, 2019), and a batch size of 256 for 200 epochs. The initial channels were set to 16. For both CIFAR-10 and CIFAR-100, we used a random flip with probability 0.5 followed by a random crop (32x32 with 4 pixel padding) and normalization. For ImageNet-16-120, we used a 16x16 random crop with 2 pixel padding instead. For training of architectures on AddNIST and CIFARTile, we followed the training protocol from the CVPR-NAS 2021 competition (Geada et al., 2021): we trained architectures with SGD with a learning rate of 0.01, momentum of 0.9, and weight decay of 0.0003 with cosine annealing, and a batch size of 64 for 64 epochs. We set the initial channels to 16 and did not apply any further data augmentation.
Dataset details In Table 4, we provide the licenses for the datasets used in our experiments. For training of architectures on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-16-120 (Chrabaszcz et al., 2017), we followed the dataset splits and training protocol of NAS-Bench-201 (Dong & Yang, 2020). For CIFAR-10, we split the original training set into a new training set with 25k images and a validation set with 25k images, following Dong & Yang (2020). The test set remained unchanged. For evaluation, we trained architectures on both the training and validation set. For CIFAR-100, the training set remained unchanged, but the test set was partitioned into a validation set and a new test set with 5k images each. For ImageNet-16-120, all splits remained unchanged. For AddNIST and CIFARTile, we used the training, validation, and test splits as defined in the CVPR-NAS 2021 competition (Geada et al., 2021).
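The CIFAR-10/100 training protocol above can be sketched in PyTorch as follows. The placeholder model stands in for an evaluated architecture, and the normalization statistics are the commonly used approximate CIFAR-10 values; both are illustrative assumptions, not part of the original training code.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

MEAN, STD = (0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)  # approximate CIFAR-10 statistics
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
    transforms.Normalize(MEAN, STD),
])
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=train_transform)
train_loader = DataLoader(train_set, batch_size=256, shuffle=True, num_workers=4)

# Placeholder network standing in for a sampled/found architecture (16 initial channels).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))

epochs = 200
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4, nesterov=True)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
criterion = nn.CrossEntropyLoss()

for epoch in range(epochs):
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()  # cosine-annealed learning rate, stepped once per epoch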
I.2 EXTENDED SEARCH RESULTS AND ANALYSES
Supplementary to Figure 2, Figure 8 compares the cell-based vs. hierarchical NAS-Bench-201 search space from Section 6.1 using RS, RE, and BANAT (WL). The cell-based search space design shows on-par or stronger performance on all datasets except for CIFARTile for these three search strategies. In contrast, for our proposed search strategy BANAT we find on-par (CIFAR-10/100) or superior (ImageNet-16-120, CIFARTile, and AddNIST) performance using the hierarchical search space design. This clearly shows that the increase of the search space does not necessarily yield the discovery of stronger neural architectures. Further, it exemplifies the importance of a strong search strategy to search effectively and efficiently in huge hierarchical search spaces (Q2), and provides further evidence that the incorporation of hierarchical information is a key contributor to search efficiency (Q3). Based on this, we believe that future work using, e.g., graph neural networks as a surrogate, may benefit from the incorporation of hierarchical information. We report the test errors of our best found architectures in Table 5. We observe that our search strategy BANAT finds the strongest-performing architectures across all datasets (Q2, Q3). Also note that we achieve better (validation and) test performance on ImageNet-16-120 on the hierarchical search space than the state-of-the-art search strategy on the cell-based NAS-Bench-201 search space (i.e., +0.37%p compared to Shapley-NAS (Xiao et al., 2022)) (Q1).
Figure 8: Validation error [%] over the number of evaluations on CIFAR-10, CIFAR-100, ImageNet-16-120, CIFARTile, and AddNIST, comparing the hierarchical and cell-based search spaces for (a) Random Search (RS), (b) Regularized Evolution (RE), and (c) BANAT (WL).
Table 5: Test errors (and ±1 standard error) of popular baseline architectures (e.g., ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019) variants) and our best found architectures on the cell-based and hierarchical NAS-Bench-201 search space. Note that we picked the ResNet and EfficientNet variant based on the test error, consequently giving an overestimate of their test performance. † Optimal numbers as reported in Dong & Yang (2020). We report the (best) test error (and ±1 standard error) across three seeds {777, 888, 999} of the best architecture of the three search runs with the lowest validation error.
1. What is the focus and contribution of the paper regarding symbolic search spaces for NAS? 2. What are the strengths and weaknesses of the proposed approach compared to prior cell-based search spaces? 3. Do you have any questions or suggestions regarding the minor issues mentioned in the review, such as relying on human priors and potential search cost prohibitions? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper presents a novel symbolic search space for NAS that allows generating architectures without predefined connection patterns. The search space is derived from a context-free grammar where the networks are represented as algebraic terms formed from neural operators. To navigate through this search space, the authors leverage Bayesian Optimization with a hierarchical WL kernel. Experiments are conducted on NAS-Bench-201, where the authors show that architectures derived from this search space surpass those generated by prior cell-based search spaces. Strengths And Weaknesses Strength: The presented search space is novel and different from existing architectural spaces. By itself, it counts as a solid contribution to the community. The authors demonstrated the effectiveness of this search space over hand-engineered cell-based ones, leveraging the BO-hWL algorithm. Weakness: [Minor] The search space itself still relies on existing patterns designed by human priors, e.g. the Cell(OP, OP, OP) operators, residual, and convolutions. Moreover, considering the sheer size of the search space, the search cost could be prohibitive beyond small-scale experiments. However, I don't think this diminishes the contributions of this work. These issues could be left for future work to explore. [Minor] Related work: The authors did a thorough survey of relevant NAS works in Section 5. Though it might also be worthwhile to mention the topics of program synthesis or Neural-Symbolic Programming. The NSP community also has several existing works on adopting context-free grammars for discovering programs [1, 2]. Some also mentioned NAS since architectures can be viewed as a subset of differentiable programs [1, 2]. This work also inspires some recent AutoML methods, such as [3] where an algebraic search space is also proposed, but for a different task. [1] Shah et al., Learning Differentiable Programs with Admissible Neural Heuristics. NeurIPS 2020. [2] Cui and Zhu, Differentiable Synthesis of Program Architectures. NeurIPS 2021. [3] Wang et al., Efficient Non-Parametric Optimizer Search for Diverse Tasks. NeurIPS 2022. Clarity, Quality, Novelty And Reproducibility The paper is clear and well-written. The search space is novel for the NAS community. I don't seem to find the code in this submission, but the authors mention in the abstract that they open-source it in PyTorch and TensorFlow.
ICLR
Title Nonseparable Symplectic Neural Networks
Abstract Predicting the behaviors of Hamiltonian systems has been drawing increasing attention in scientific machine learning. However, the vast majority of the literature has focused on predicting separable Hamiltonian systems, whose kinematic and potential energy terms are explicitly decoupled, while building data-driven paradigms to predict nonseparable Hamiltonian systems, which are ubiquitous in fluid dynamics and quantum mechanics, was rarely explored. The main computational challenge lies in the effective embedding of symplectic priors to describe the inherently coupled evolution of position and momentum, which typically exhibits intricate dynamics. To solve the problem, we propose a novel neural network architecture, Nonseparable Symplectic Neural Networks (NSSNNs), to uncover and embed the symplectic structure of a nonseparable Hamiltonian system from limited observation data. The enabling mechanism of our approach is an augmented symplectic time integrator to decouple the position and momentum energy terms and facilitate their evolution. We demonstrated the efficacy and versatility of our method by predicting a wide range of Hamiltonian systems, both separable and nonseparable, including chaotic vortical flows. We showed the unique computational merits of our approach in yielding long-term, accurate, and robust predictions for large-scale Hamiltonian systems by rigorously enforcing symplectomorphism.
1 INTRODUCTION
A Hamiltonian dynamic system refers to a formalism for modeling a physical system exhibiting some specific form of energy conservation during its temporal evolution. A typical example is a pendulum whose total energy (referred to as the system's Hamiltonian) is conserved as a temporally invariant sum of its kinematic energy and potential energy. Mathematically, such energy conservation indicates a specific geometric structure underpinning its time integration, known as a symplectic structure, which further spawns a wide range of numerical time integrators to model Hamiltonian systems. These symplectic time integrators have proven their effectiveness in simulating a variety of energy-conserving dynamics when Hamiltonian expressions are known a priori. Examples encompass applications in plasma physics (Morrison, 2005), electromagnetics (Li et al., 2019), fluid mechanics (Salmon, 1988), and celestial mechanics (Saari & Xia, 1996), to name a few. On another front, the emergence of various machine learning paradigms, with their particular focus on uncovering the hidden invariant quantities and their evolutionary structures, enables a faithful prediction of Hamiltonian dynamics without knowing its analytical energy expression beforehand. The key mechanism underpinning these learning models lies in a proper embedding of strong mathematical inductive priors to ensure Hamiltonian conservation in a neural network data flow. Typically, such priors are realized in a variational way or a structured way. For example, in Greydanus et al. (2019), the Hamiltonian conservation is encoded in the loss function. This category of methods does not assume any combinatorial pattern of the energy term and therefore relies on the inherent expressiveness of neural networks to distill the Hamiltonian structure from abundant training datasets (Choudhary et al., 2019).
Another category of Hamiltonian networks, which we refer to as structured approaches, implements the conservation law indirectly by embedding a symplectic time integrator (DiPietro et al., 2020; Tong et al., 2020; Chen et al., 2020) or a composition of linear, activation, and gradient modules (Jin et al., 2020) into the network architecture. One of the main limitations of the current structured methods lies in the separability assumption on the Hamiltonian expression. Examples of separable Hamiltonian systems include the pendulum, the Lotka–Volterra (Zhu et al., 2016), the Kepler (Antohe & Gladwell, 2004), and the Hénon–Heiles systems (Zotos, 2015). However, beyond this scope, there exist various nonseparable systems whose Hamiltonian has no explicit expression to decouple the position and momentum energies. Examples include incompressible flows (Suzuki et al., 2007), quantum systems (Bonnabel et al., 2009), rigid body dynamics (Chadaj et al., 2017), charged particle dynamics (Zhang et al., 2016), and the nonlinear Schrödinger equation (Brugnano et al., 2018). This nonseparability typically causes chaos and instability, which further complicates the systems' dynamics. Although SympNet in Jin et al. (2020) can be used to learn and predict nonseparable Hamiltonian systems, multiple matrices whose order matches the system dimension are needed in the training process of SympNet, resulting in difficulties in generalizing to high-dimensional, large-scale N-body problems, which are common in a series of nonseparable Hamiltonian systems, such as quantum many-body problems and vortex-particle dynamics problems. This chaotic and large-scale nature makes it difficult for a conventional machine learning model to deliver faithful predictions. In this paper, we propose an effective machine learning paradigm to predict nonseparable Hamiltonian systems. We build a novel neural network architecture, named nonseparable symplectic neural networks (NSSNNs), to enable accurate and robust predictions of long-term Hamiltonian dynamics based on short-term observation data. Our proposed method belongs to the category of structured network architectures: it intrinsically embeds the symplectomorphism into the network design to strictly preserve the symplectic evolution and further conserves the unknown, nonseparable Hamiltonian energy. The enabling techniques we adopted in our learning framework consist of an augmented symplectic time integrator to asymptotically "decouple" the position and momentum quantities that were nonseparable in their original form. We also introduce a Lagrange multiplier in the augmented phase space to improve the system's numerical stability. Our network design is motivated by ideas originating from physics (Tao, 2016) and optimization (Boyd et al., 2004). The combination of these mathematical observations and numerical paradigms enables a novel neural network architecture that can drastically enhance both the scale and scope of the current predictions. We show a motivational example in Figure 1 by comparing our approach with a traditional HNN method (Greydanus et al., 2019) regarding their structural designs and predicting abilities. We refer the readers to Section 6 for a detailed discussion. As shown in Figure 1, the vortices evolved using NSSNN are separated nicely, as in the ground truth, while the vortices merge together using HNN due to the failure to conserve the symplectic structure of a nonseparable system.
The conservative capability of NSSNN springs from our design of the auxiliary variables (the red x and y), which converts the original nonseparable system into a higher-dimensional quasi-separable system where we can adopt a symplectic integrator.
2 RELATED WORKS
Data-driven physical prediction. Data-driven approaches have been widely applied to physical systems including fluid mechanics (Brunton et al., 2020), wave physics (Hughes et al., 2019), quantum physics (Sellier et al., 2019), thermodynamics (Hernandez et al., 2020), and material science (Teichert et al., 2019). Among these different physical systems, data-driven fluid mechanics receives increasing attention. We refer the readers to Brunton et al. (2020) for a thorough survey of the fundamental machine learning methodologies as well as their uses for understanding, modeling, optimizing, and controlling fluid flows in experiments and simulations based on training data. One of the motivations of our work is to design a versatile learning approach that can predict complex fluid motions. On another front, many pieces of research focus on incorporating physical priors into the learning framework, e.g., by enforcing incompressibility (Mohan et al., 2020), Galilean invariance (Ling et al., 2016), quasistatic equilibrium (Geng et al., 2020), Lagrangian invariance (Cranmer et al., 2020), and Hamiltonian conservation (Hernandez et al., 2020; Greydanus et al., 2019; Jin et al., 2020; Zhong et al., 2020). Here, inspired by the idea of embedding physics priors into neural networks, we aim to accelerate the learning process and improve the accuracy of our model.
Neural networks for Hamiltonian systems. Greydanus et al. (2019) introduced Hamiltonian neural networks (HNNs) to conserve the Hamiltonian energy of the system by reformulating the loss function. Inspired by HNN, a series of methods intrinsically embedding a symplectic integrator into the recurrent neural network was proposed, such as SRNN (Chen et al., 2020), TaylorNet (Tong et al., 2020), and SSINN (DiPietro et al., 2020), to solve separable Hamiltonian systems. Combined with graph networks (Sanchez-Gonzalez et al., 2019; Battaglia et al., 2016), these methods were further generalized to large-scale N-body problems induced by interaction forces between particle pairs. Jin et al. (2020) proposed SympNet, which directly constructs the symplectic mapping of system variables within neighboring time steps to handle both separable and nonseparable Hamiltonian systems. However, the scale of the parameters in SympNet for training an N-dimensional Hamiltonian system is $O(N^2)$, which makes it hard to generalize to high-dimensional N-body problems. Our NSSNN overcomes these limitations by devising a new Hamiltonian network architecture that is specifically suited for nonseparable systems (see details in Section 5). In addition, Hamiltonian-based neural networks can be extended to further applications. Toth et al. (2020) developed the Hamiltonian Generative Network (HGN) to learn Hamiltonian dynamics from high-dimensional observations (such as images). Moreover, Zhong et al. (2020) introduced Symplectic ODE-Net (SymODEN), which adds an external control term to the standard Hamiltonian dynamics.
3 FRAMEWORK
3.1 AUGMENTED HAMILTONIAN EQUATION
We start by considering a Hamiltonian system with N pairs of canonical coordinates (i.e., N generalized positions and N generalized momenta).
The time evolution of canonical coordinates is governed by the symplectic gradient of the Hamiltonian (Hand & Finch, 2008). Specifically, the time evolution of the system is governed by Hamilton's equations as
$\frac{dq}{dt} = \frac{\partial H}{\partial p}, \quad \frac{dp}{dt} = -\frac{\partial H}{\partial q}$, (1)
with the initial condition $(q,p)|_{t=t_0} = (q_0, p_0)$. In a general setting, $q = (q_1, q_2, \cdots, q_N)$ represents the positions and $p = (p_1, p_2, \cdots, p_N)$ denotes their momenta. The function $H = H(q,p)$ is the Hamiltonian, which corresponds to the total energy of the system. An important feature of Hamilton's equations is their symplectomorphism (see Appendix B for a detailed overview). The symplectic structure underpinning our proposed network architecture draws inspiration from the original research of Tao (2016) in computational physics. In Tao (2016), a generic, high-order, explicit, and symplectic time integrator was proposed to solve (1) for an arbitrary separable or nonseparable Hamiltonian $H$. This is implemented by considering an augmented Hamiltonian
$\bar{H}(q,p,x,y) := H_A + H_B + \omega H_C$ (2)
with
$H_A = H(q,y), \quad H_B = H(x,p), \quad H_C = \tfrac{1}{2}\left(\|q - x\|_2^2 + \|p - y\|_2^2\right)$ (3)
in an extended phase space with symplectic two-form $dq \wedge dp + dx \wedge dy$, where $\omega$ is a constant that controls the binding of the original system and the artificial restraint. Notice that Hamilton's equations for $\bar{H}$,
$\frac{dq}{dt} = \frac{\partial \bar{H}}{\partial p} = \frac{\partial H(x,p)}{\partial p} + \omega(p - y), \quad \frac{dp}{dt} = -\frac{\partial \bar{H}}{\partial q} = -\frac{\partial H(q,y)}{\partial q} - \omega(q - x),$
$\frac{dx}{dt} = \frac{\partial \bar{H}}{\partial y} = \frac{\partial H(q,y)}{\partial y} - \omega(p - y), \quad \frac{dy}{dt} = -\frac{\partial \bar{H}}{\partial x} = -\frac{\partial H(x,p)}{\partial x} + \omega(q - x),$ (4)
with the initial condition $(q,p,x,y)|_{t=t_0} = (q_0, p_0, q_0, p_0)$, have the same exact solution as (1) in the sense that $(q,p,x,y) = (q,p,q,p)$. Hence, we can get the solution of (1) by solving (4). Furthermore, it is possible to construct high-order symplectic integrators for $\bar{H}$ in (4) with explicit updates. Our model aims to learn the dynamical evolution of $(q,p)$ in (1) by embedding (4) into the framework of NeuralODE (Chen et al., 2018). The coefficient $\omega$ acts as a regularizer, which stabilizes the numerical results (see Section 4).
3.2 NONSEPARABLE HAMILTONIAN NEURAL NETWORK
We learn the nonseparable Hamiltonian dynamics (1) by constructing the augmented system (4), from which we obtain the energy function $H(q,p)$ by training the neural network $H_\theta(q,p)$ with parameters $\theta$ and calculate the gradient $\nabla H_\theta(q,p)$ by taking the in-graph gradient. For the constructed network $H_\theta(q,p)$, we integrate (4) using the second-order symplectic integrator (Tao, 2016). Specifically, we have an input layer $(q,p,x,y) = (q_0,p_0,q_0,p_0)$ at $t = t_0$ and an output layer $(q,p,x,y) = (q_n,p_n,x_n,y_n)$ at $t = t_0 + n\,dt$.
Algorithm 1: Integrate (4) using the second-order symplectic integrator.
Input: $q_0, p_0, t_0, t, dt$; $\phi^\delta_1$, $\phi^\delta_2$, and $\phi^\delta_3$ in (5). Output: $(\hat{q}, \hat{p}, \hat{x}, \hat{y}) = (q_n, p_n, x_n, y_n)$.
1: $(q_0,p_0,x_0,y_0) = (q_0,p_0,q_0,p_0)$; $n = \lfloor (t - t_0)/dt \rfloor$
2: for $i = 1 \to n$ do
3: $\quad (q_i,p_i,x_i,y_i) = \phi^{dt/2}_1 \circ \phi^{dt/2}_2 \circ \phi^{dt}_3 \circ \phi^{dt/2}_2 \circ \phi^{dt/2}_1 (q_{i-1},p_{i-1},x_{i-1},y_{i-1})$
4: end for
The recursive relations of $(q_i,p_i,x_i,y_i)$, $i = 1, 2, \cdots, n$, can be expressed by Algorithm 1 (also see Figure 8 in Appendix A). The maps $\phi^\delta_1(q,p,x,y)$, $\phi^\delta_2(q,p,x,y)$, and $\phi^\delta_3(q,p,x,y)$ in Algorithm 1 are
$\phi^\delta_1: (q, p, x, y) \mapsto \left(q,\; p - \delta\,\frac{\partial H_\theta(q,y)}{\partial q},\; x + \delta\,\frac{\partial H_\theta(q,y)}{\partial p},\; y\right),$
$\phi^\delta_2: (q, p, x, y) \mapsto \left(q + \delta\,\frac{\partial H_\theta(x,p)}{\partial p},\; p,\; x,\; y - \delta\,\frac{\partial H_\theta(x,p)}{\partial q}\right),$
$\phi^\delta_3: (q, p, x, y) \mapsto \frac{1}{2}\begin{pmatrix} \begin{pmatrix} q+x \\ p+y \end{pmatrix} + R_\delta \begin{pmatrix} q-x \\ p-y \end{pmatrix} \\[4pt] \begin{pmatrix} q+x \\ p+y \end{pmatrix} - R_\delta \begin{pmatrix} q-x \\ p-y \end{pmatrix} \end{pmatrix}$, (5)
respectively, where $\partial H_\theta(q,y)/\partial p$ and $\partial H_\theta(x,p)/\partial q$ denote the derivatives of $H_\theta$ with respect to its second and first argument slots, evaluated at $(q,y)$ and $(x,p)$. Here
$R_\delta := \begin{bmatrix} \cos(2\omega\delta) I & \sin(2\omega\delta) I \\ -\sin(2\omega\delta) I & \cos(2\omega\delta) I \end{bmatrix}$, where $I$ is an identity matrix. (6)
We remark that $x$ and $y$ are just auxiliary variables, which are theoretically equal to $q$ and $p$. Therefore, we can use the dataset of $(q,p)$ to construct the dataset containing the variables $(q,p,x,y)$.
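As a minimal PyTorch sketch of Algorithm 1 and the maps in (5)-(6), the code below obtains the partial derivatives of H_theta by automatic differentiation and composes the three maps into a second-order step. The toy MLP, the tensor shapes, and the rounding of the step count are illustrative assumptions and not the authors' released implementation.

import math
import torch
import torch.nn as nn

def grads(H, a, b):
    # Partial derivatives of the scalar network H with respect to its two argument slots,
    # evaluated at (a, b); create_graph=True allows training through the integrator.
    e = H(torch.cat([a, b], dim=-1)).sum()
    da, db = torch.autograd.grad(e, (a, b), create_graph=True)
    return da, db

def phi1(H, q, p, x, y, delta):
    dq, dy = grads(H, q, y)                      # derivatives of H(q, y)
    return q, p - delta * dq, x + delta * dy, y

def phi2(H, q, p, x, y, delta):
    dx, dp = grads(H, x, p)                      # derivatives of H(x, p)
    return q + delta * dp, p, x, y - delta * dx

def phi3(q, p, x, y, delta, omega):
    # Rotate the "difference" coordinates by R_delta of Eq. (6).
    c, s = math.cos(2 * omega * delta), math.sin(2 * omega * delta)
    u, v = q - x, p - y
    ru, rv = c * u + s * v, -s * u + c * v
    return (0.5 * (q + x + ru), 0.5 * (p + y + rv),
            0.5 * (q + x - ru), 0.5 * (p + y - rv))

def step(H, q, p, x, y, dt, omega):
    # One second-order step: phi1(dt/2) o phi2(dt/2) o phi3(dt) o phi2(dt/2) o phi1(dt/2).
    q, p, x, y = phi1(H, q, p, x, y, dt / 2)
    q, p, x, y = phi2(H, q, p, x, y, dt / 2)
    q, p, x, y = phi3(q, p, x, y, dt, omega)
    q, p, x, y = phi2(H, q, p, x, y, dt / 2)
    q, p, x, y = phi1(H, q, p, x, y, dt / 2)
    return q, p, x, y

def integrate(H, q0, p0, t0, t, dt, omega):
    # Auxiliary variables start at (q0, p0); the paper uses floor for the step count,
    # rounding here simply avoids floating-point surprises in the sketch.
    q = q0.clone().requires_grad_(True)
    p = p0.clone().requires_grad_(True)
    x, y = q.clone(), p.clone()
    for _ in range(int(round((t - t0) / dt))):
        q, p, x, y = step(H, q, p, x, y, dt, omega)
    return q, p, x, y

# Toy usage with a small MLP H_theta(q, p) for a one-dimensional system.
H_theta = nn.Sequential(nn.Linear(2, 64), nn.Sigmoid(), nn.Linear(64, 1))
q0, p0 = torch.tensor([[1.0]]), torch.tensor([[0.0]])
q_hat, p_hat, x_hat, y_hat = integrate(H_theta, q0, p0, t0=0.0, t=0.1, dt=0.01, omega=2000.0)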
In addition, by constructing the network $H_\theta$, we show that Theorem B.1 in Appendix B holds, so the maps $\phi^\delta_1$, $\phi^\delta_2$, and $\phi^\delta_3$ in (5) preserve the symplectic structure of the system. Suppose that $\Phi_1$ and $\Phi_2$ are two symplectomorphisms. Then, it is easy to show that their composite map $\Phi_2 \circ \Phi_1$ is also a symplectomorphism due to the chain rule. Thus, the symplectomorphism of Algorithm 1 is guaranteed by Theorem B.1.
4 TRAINING SETTINGS AND ABLATION TESTS
We use 6 linear layers with hidden size 64 to model $H_\theta$, all of which are followed by a Sigmoid activation function except the last one. The derivatives $\partial H_\theta/\partial p$, $\partial H_\theta/\partial q$, $\partial H_\theta/\partial x$, $\partial H_\theta/\partial y$ are all obtained by automatic differentiation in PyTorch (Paszke et al., 2019). The weights of the linear layers are initialized by Xavier initialization (Glorot & Bengio, 2010). We generate the dataset for training and validation using a high-precision numerical solver (Tao, 2016), where the ratio of training to validation data is 9 : 1. We set $(q^j_0, p^j_0)$ as the start input and $(q^j, p^j)$ as the target, with $j = 1, 2, \cdots, N_s$, and the time span between $(q^j_0, p^j_0)$ and $(q^j, p^j)$ is $T_{train}$. We feed $(q_0, p_0) = (q^j_0, p^j_0)$, $t_0 = 0$, $t = T_{train}$, and the time step $dt$ into Algorithm 1 to get the predicted variables $(\hat{q}^j, \hat{p}^j, \hat{x}^j, \hat{y}^j)$. Accordingly, the loss function is defined as
$\mathcal{L}_{NSSNN} = \frac{1}{N_b}\sum_{j=1}^{N_b} \|q^{(j)} - \hat{q}^{(j)}\|_1 + \|p^{(j)} - \hat{p}^{(j)}\|_1 + \|q^{(j)} - \hat{x}^{(j)}\|_1 + \|p^{(j)} - \hat{y}^{(j)}\|_1$, (7)
where $N_b = 512$ is the batch size of the training samples. We use the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.05. The learning rate is multiplied by 0.8 every 10 epochs. Taking the system $H(q, p) = 0.5(q^2 + 1)(p^2 + 1)$ as an example, we carry out a series of ablation tests based on our constructed networks. Normally, we set the time span, time step, and dataset size to $T_{train} = 0.01$, $dt = 0.01$, and $N_s = 1280$. The choice of $\omega$ in (4) is largely flexible since NSSNN is not sensitive to the parameter $\omega$ when it is larger than a certain threshold. Figure 2 shows the training and validation losses with different $\omega$ for the network trained on clean and noisy datasets. Though the convergence rates differ slightly, the examples with various $\omega$ are able to converge to the same size of training and validation losses. Here, we set $\omega = 2000$, but $\omega$ can be smaller than 2000. The only requirement for picking $\omega$ is that it has to be larger than $O(10)$, which is detailed in Appendix C. We pick the L1 loss function to train our network due to its better performance. Figure 3 compares the validation losses with different training loss functions for the network trained on clean and noisy datasets. Figure 3(a) shows that the network trained with either L1 or MSE loss on a clean dataset converges to a small validation loss, but the network trained with L1 loss converges relatively faster. Figures 3(b) and 3(c) both show that the network trained with L1 loss on a noisy dataset converges to a smaller validation loss. In addition, we already introduced a regularization term in the symplectic integrator embedded in the network; thus, there is no need to add a regularization term to the loss function.
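Building on the integrator sketch above, the loss of Eq. (7) and the optimization settings described in this section could be assembled as follows; the epoch count and train_batches (an assumed iterable of (q0, p0, q_target, p_target) batches) are placeholders for illustration.

import torch

def nssnn_loss(H_theta, q0, p0, q_target, p_target, t_train, dt, omega):
    # Eq. (7): L1 norms of the prediction errors, with the auxiliary variables (x, y)
    # also matched against the target (q, p); reuses `integrate` from the sketch above.
    q_hat, p_hat, x_hat, y_hat = integrate(H_theta, q0, p0, t0=0.0, t=t_train, dt=dt, omega=omega)
    per_sample = ((q_target - q_hat).abs().sum(dim=-1) + (p_target - p_hat).abs().sum(dim=-1)
                  + (q_target - x_hat).abs().sum(dim=-1) + (p_target - y_hat).abs().sum(dim=-1))
    return per_sample.mean()

optimizer = torch.optim.Adam(H_theta.parameters(), lr=0.05)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)  # x0.8 every 10 epochs
for epoch in range(100):                          # illustrative epoch count
    for q0_b, p0_b, q_b, p_b in train_batches:    # assumed placeholder data iterable
        optimizer.zero_grad()
        loss = nssnn_loss(H_theta, q0_b, p0_b, q_b, p_b, t_train=0.01, dt=0.01, omega=2000.0)
        loss.backward()
        optimizer.step()
    scheduler.step()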
The integral time step in the symplectic integrator is a vital parameter, and the choice of $dt$ largely depends on the time span $T_{train}$. Figure 4 compares the validation losses generated by various integral time steps $dt$ for fixed dataset time spans $T_{train} = 0.01$, 0.1, and 0.2, respectively, in the training process. The validation loss converges to a similar degree with various $dt$ for fixed $T_{train} = 0.01$ and $T_{train} = 0.1$ in Figures 4(a) and (b), while it increases significantly as $dt$ increases for fixed $T_{train} = 0.2$ in Figure 4(c). Thus, we should take a relatively small $dt$ for datasets with a larger time span $T_{train}$.
5 COMPARISONS WITH OTHER METHODS
5.1 METHODOLOGIES
We compare our method with other recently proposed methods, such as HNN (Greydanus et al., 2019), NeuralODE (Chen et al., 2018), TaylorNet (Tong et al., 2020), SSINN (DiPietro et al., 2020), SRNN (Chen et al., 2020), and SympNet (Jin et al., 2020). There are several features distinguishing our method from the others, as shown in Table 1. HNN first enforces the conservative features of a Hamiltonian system by reformulating its loss function, which incurs two main shortcomings. On the one hand, it requires the temporal derivatives of the momentum and the position of the systems to calculate the loss function, which are difficult to obtain from real-world systems. On the other hand, HNN doesn't strictly preserve the symplectic structure, because its symplectomorphism is realized by its loss function rather than its intrinsic network architecture. NeuralODE successfully bypasses the time derivatives of the datasets by incorporating an integrator solver into the network architecture. Embedding the Hamiltonian prior into NeuralODE, a series of methods has been proposed, such as SRNN, SSINN, and TaylorNet, to predict the continuous trajectory of system variables; however, presently these methods are only designed to solve separable Hamiltonian systems. Instead of updating the continuous dynamics by integrating the neural networks as in NeuralODE, SympNet adopts a symplectomorphism composed of well-designed linear and non-linear matrices to intrinsically map the system variables within neighboring time steps. However, the parameter scale of the matrix map for training an N-dimensional Hamiltonian system in SympNet is $O(N^2)$, which makes it hard to generalize to high-dimensional N-body problems. For example, in Section 6, we predict the dynamic evolution of 6000 vortex particles, which is challenging for the training process of SympNet on the level of $O(6000^2)$. NSSNN overcomes the weaknesses mentioned above. Under the framework of NeuralODE, NSSNN utilizes continuously-defined dynamics in the neural networks, which gives it the capability to learn the continuous-time evolution of dynamical systems. Based on Tao (2016), NSSNN embeds the symplectic prior into the nonseparable symplectic integrator to ensure strict symplectomorphism, thereby guaranteeing long-term predictability. In addition, unlike SympNet, NSSNN is highly flexible and can be generalized to high-dimensional N-body problems by involving interaction networks (Sanchez-Gonzalez et al., 2019), which will be further discussed in Section 6.
5.2 EXPERIMENTS
We compare five implementations that learn and predict Hamiltonian systems. The first one is NeuralODE, which trains the system by embedding the network $f_\theta \to (dq/dt, dp/dt)$ into the Runge–Kutta (RK) integrator. The other four, however, achieve the goal by fitting the Hamiltonian $H_\theta \to H$ based on (1).
Specifically, HNN trains the network with the constraints of the Hamiltonian symplectic gradient along with the time derivatives of the system variables and then embeds the well-trained $H_\theta$ into the RK integrator for predicting the system. The third and fourth implementations are ablation tests. One of them is improved HNN (IHNN), which embeds the well-trained $H_\theta$ into the nonseparable symplectic integrator (Tao's integrator) for prediction. The other is to directly embed $H_\theta$ into the RK integrator for training, which we call HRK. The fifth method is NSSNN, which embeds $H_\theta$ into the nonseparable symplectic integrator for training. For a fair comparison, we adopt the same network structure (except that the dimension of the output layer in NeuralODE is two times larger than that in the other four), the same L1 loss function, and the same dataset size; the precision of all integral schemes is second order, and the other parameters are kept consistent with those in Section 4. The time derivatives in the dataset for training HNN and IHNN are obtained by the first-difference method
$\frac{dq}{dt} \approx \frac{q(T_{train}) - q(0)}{T_{train}} \quad \text{and} \quad \frac{dp}{dt} \approx \frac{p(T_{train}) - p(0)}{T_{train}}$. (8)
Figure 5 demonstrates the differences between the five methods using a spring system $H = 0.5(q^2 + p^2)$ with different time spans $T_{train} = 0.4, 1$ and the same time step $dt = 0.2$. We can see that by introducing the nonseparable symplectic integrator into the prediction of the Hamiltonian system, NSSNN has a stronger long-term predicting ability than all the other methods. In addition, the prediction of HNN and IHNN relies on datasets with time derivatives; consequently, this leads to a larger error when the given time span $T_{train}$ is large. Moreover, the datasets obtained by (8) in HNN and IHNN are sensitive to noise. Figure 6 compares the predictions of $(q, p)$ for the system $H = 0.5(q^2 + 1)(p^2 + 1)$, where the network is trained on a dataset with noise $\sim 0.05\,U(-1, 1)$, along with time span $T_{train} = 0.2$ and time step $dt = 0.02$. Under the condition with noise, NSSNN still performs well compared with the other methods. Also, we compare the convergence error of a series of Hamiltonian systems with different $H$ trained with noisy data in Appendix D, where NSSNN generally shows better robustness than HNN does.
6 MODELING VORTEX DYNAMICS OF MULTI-PARTICLE SYSTEM
For two-dimensional vortex particle systems, the dynamical equations of the particle positions $(x_j, y_j)$, $j = 1, 2, \cdots, N_v$, with particle strengths $\Gamma_j$ can be written in the generalized Hamiltonian form as
$\Gamma_j \frac{dx_j}{dt} = -\frac{\partial H^p}{\partial y_j}, \quad \Gamma_j \frac{dy_j}{dt} = \frac{\partial H^p}{\partial x_j}, \quad \text{with} \quad H^p = \frac{1}{4\pi}\sum_{j,k=1}^{N_v} \Gamma_j \Gamma_k \log(|\mathbf{x}_j - \mathbf{x}_k|)$. (9)
By including the given particle strengths $\Gamma_j$ in Algorithm 1, we can still adopt the method mentioned above to learn the Hamiltonian in (9) when there are few particles. However, considering a system with $N_v \gg 2$ particles, the cost to collect training data from all $N_v$ particles might be high, and the training process can be time-consuming. Thus, instead of collecting information from all $N_v$ particles to train our model, we only use data collected from two bodies as training data to make predictions of the dynamics of $N_v$ particles. Specifically, we assume the interaction models between particle pairs with unit particle strengths $\Gamma_j = 1$ are the same, and their corresponding Hamiltonian can be represented as a network $\hat{H}_\theta(\mathbf{x}_j, \mathbf{x}_k)$, based on which the corresponding Hamiltonian of $N_v$ particles can be written as (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2019)
$H^p_\theta = \sum_{j,k=1}^{N_v} \Gamma_j \Gamma_k \hat{H}_\theta(\mathbf{x}_j, \mathbf{x}_k)$. (10)
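One way to realize the pairwise Hamiltonian of Eq. (10) is sketched below in PyTorch; the pairwise network, the two-dimensional positions, and the restriction to distinct pairs j < k (the self-interaction term is singular in the analytic Hamiltonian of Eq. (9)) are illustrative assumptions rather than the exact architecture used in the experiments. The symplectic gradients needed in Eq. (9) then follow from automatic differentiation.

import torch
import torch.nn as nn

# Hypothetical pairwise network \hat{H}_theta(x_j, x_k); particle positions are 2-D.
pair_net = nn.Sequential(nn.Linear(4, 64), nn.Sigmoid(), nn.Linear(64, 1))

def multi_particle_hamiltonian(pair_net, positions, gammas):
    # Eq. (10): sum of Gamma_j * Gamma_k * \hat{H}_theta(x_j, x_k) over distinct particle pairs.
    j, k = torch.triu_indices(positions.shape[0], positions.shape[0], offset=1)
    pair_features = torch.cat([positions[j], positions[k]], dim=-1)
    pair_energy = pair_net(pair_features).squeeze(-1)
    return torch.sum(gammas[j] * gammas[k] * pair_energy)

positions = torch.randn(5, 2, requires_grad=True)   # toy 5-particle configuration
gammas = torch.ones(5)                               # unit particle strengths
H_p = multi_particle_hamiltonian(pair_net, positions, gammas)
dH_dpositions = torch.autograd.grad(H_p, positions)[0]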
We embed (10) into the symplectic integrator that includes $\Gamma_j$ to obtain the final network architecture. The setup of the multi-particle problem is similar to the previous problems. The training time span is $T_{train} = 0.01$, while the prediction period can be up to $T_{predict} = 40$. We use 2048 clean data samples to train our model. The training process takes about 100 epochs for the loss to converge. In Figure 7, we use our trained model to predict the dynamics of 6000-particle systems, including Taylor and Leapfrog vortices. We generate results of the Taylor vortex and Leapfrog vortex using NSSNN and HNN and compare them with the ground truth. Vortex elements are used with the corresponding initial vorticity conditions of the Taylor vortex and Leapfrog vortex (Qu et al., 2019). The difficulty of the numerical modeling of these two systems lies in the separation of the different dynamical vortices instead of having them merge into a bigger structure. In both cases, the vortices evolved using NSSNN are separated nicely, as the ground truth shows, while the vortices merge together using HNN.
7 LIMITATIONS
The network with the embedded integrator is often more time-consuming to train than one based on a dataset with time derivatives. For example, the ratio of the training time of HNN to that of NSSNN is 1 : 3 when $dt = T_{train}$, and the training time of the recurrent networks further increases with decreasing $dt$. Although a smaller $dt$ often has higher discretization accuracy, there is a tradeoff between training cost and predicting accuracy. Additionally, a smaller $dt$ may potentially cause gradient explosion. In this case, we may want to use the adjoint method instead. Another limitation lies in the assumption that the symplectic structure is conserved. In real-world systems, there could be dissipation that makes this assumption unsatisfied.
8 CONCLUSIONS
We incorporate a classic idea that maps a nonseparable system to a higher-dimensional space, making it quasi-separable, to construct symplectic networks. With the intrinsic symplectic structure, NSSNN possesses many benefits compared with other methods. In particular, NSSNN is the first method that can learn the vortex dynamical system and accurately predict the evolution of complex vortex structures, such as Taylor and Leapfrog vortices. NSSNN, based on the first principle of learning complex systems, has potential applications in fields such as physics, astronomy, and weather forecasting. We will further explore the possibilities of neural networks with inherent structure-preserving ability in fields like 3D vortex dynamics and quantum turbulence. In addition, we will also work on general applications of NSSNN with datasets based on images or other real scenes through automatically identifying the coordinate variables of Hamiltonian systems based on neural networks.
ACKNOWLEDGMENTS
This project is supported in part by the Neukom Institute CompX Faculty Grant, Burke Research Initiation Award, and ByteDance Gift Donation. Yunjin Tong is supported by the Dartmouth Women in Science Project (WISP), Undergraduate Advising and Research Program (UGAR), and Neukom Scholars Program.
A NETWORK ARCHITECTURE
Figure 8(a) shows that the forward pass of NSSNN is composed of a forward pass through a differentiable symplectic integrator as well as a backpropagation step through the model. Figure 8(b) plots the schematic diagram of NSSNN. For the constructed network $H_\theta(q,p)$, we integrate (4) using the second-order symplectic integrator (Tao, 2016).
Specifically, the input layer of the integrator is $(q,p,x,y) = (q_0,p_0,q_0,p_0)$ at $t = t_0$ and the output layer is $(q,p,x,y) = (q_n,p_n,x_n,y_n)$ at $t = t_0 + n\,dt$. The recursive relations of $(q_i,p_i,x_i,y_i)$, $i = 1, 2, \cdots, n$, are expressed by Algorithm 1.
B SYMPLECTOMORPHISMS
One of the most important features of the time evolution of Hamilton's equations is that it is a symplectomorphism, representing a transformation of phase space that is volume-preserving. In the setting of canonical coordinates, symplectomorphism means that the transformation of the phase flow of a Hamiltonian system conserves the symplectic two-form
$dq \wedge dp \equiv \sum_{j=1}^{N} (dq_j \wedge dp_j)$, (11)
where $\wedge$ denotes the wedge product of two differential forms. The rules of wedge products can be found in Lee (2010). In the two-dimensional case, (11) can be understood as the area element of the surface. In this case, the symplectomorphism can be interpreted as the area element of the surface being constant. As proved below, our constructed network structure intrinsically preserves the Hamiltonian structure.
Theorem B.1. For a given $\delta$, the mappings $\phi^\delta_1$, $\phi^\delta_2$, and $\phi^\delta_3$ in (5) are symplectomorphisms.
Proof. Let
$(t^q_j, t^p_j, t^x_j, t^y_j) = \phi^\delta_j(q,p,x,y), \quad j = 1, 2, 3$. (12)
From the first equation of (5), we have
$dt^q_1 \wedge dt^p_1 + dt^x_1 \wedge dt^y_1 = dq \wedge d\left[p - \delta \frac{\partial H_\theta(q,y)}{\partial q}\right] + d\left[x + \delta \frac{\partial H_\theta(q,y)}{\partial p}\right] \wedge dy = dq \wedge dp + dx \wedge dy + \delta \left[\frac{\partial^2 H_\theta(q,y)}{\partial q \partial y} - \frac{\partial^2 H_\theta(q,y)}{\partial y \partial q}\right] dq \wedge dy = dq \wedge dp + dx \wedge dy$. (13)
Similarly, we can prove that $dt^q_2 \wedge dt^p_2 + dt^x_2 \wedge dt^y_2 = dq \wedge dp + dx \wedge dy$. In addition, from the third equation of (5), we can directly deduce that $dt^q_3 \wedge dt^p_3 + dt^x_3 \wedge dt^y_3 = dq \wedge dp + dx \wedge dy$. Suppose that $\Phi_1$ and $\Phi_2$ are two symplectomorphisms. Then, it is easy to show that their composite map $\Phi_2 \circ \Phi_1$ is also a symplectomorphism due to the chain rule. Thus, the symplectomorphism of Algorithm 1 is guaranteed by Theorem B.1.
C DETERMINING COEFFICIENT ω
To elucidate further, the Hamiltonian $H_A + H_B$ without the binding, i.e., $\bar{H}$ with $\omega = 0$, in the extended phase space $(q,p,x,y)$ may not be integrable, even if $H(q,p)$ is integrable in the original phase space $(q,p)$. However, $H_C$ is integrable. Thus, as $\omega$ increases, a larger proportion of the phase space for $\bar{H}$ corresponds to regular behaviors (Kolmogorov, 1954). For $H(q, p) = (q^2 + 1)(p^2 + 1)/2$, shown in Fig. 9, we compare the trajectories starting from $[q(0), p(0), x(0), y(0)] = (-3, 0, -3, 0)$ calculated by the symplectic integrator (Tao, 2016) with different $\omega$, where the integrator is second-order accurate and the time interval is 0.001. As shown in Figs. 9(a)-(d), the chaotic region in phase space shrinks significantly until a stable limit cycle forms. We define $\epsilon = \|(q, p) - (x, y)\|_2$ as the calculation error of this system; Figs. 9(e)-(h) show that the error decreases as $\omega$ increases, which fits the quantitative results of the phase trajectories well.
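Since symplectomorphism is the property that Theorem B.1 establishes for the maps in (5), it can also be verified numerically: the Jacobian M of a composed integrator step must satisfy M^T J M = J for the canonical matrix J of the two-form dq ∧ dp + dx ∧ dy. The sketch below performs this check with the analytic test Hamiltonian from Appendix C rather than a trained network; the step size and ω are arbitrary illustrative values.

import math
import torch

def H(a, b):
    # Analytic test Hamiltonian H(q, p) = 0.5 (q^2 + 1)(p^2 + 1).
    return 0.5 * (a ** 2 + 1.0) * (b ** 2 + 1.0)

def dH(a, b):
    # Partial derivatives of H with respect to its first and second arguments.
    return a * (b ** 2 + 1.0), b * (a ** 2 + 1.0)

def phi1(q, p, x, y, d):
    dq, dy = dH(q, y)
    return q, p - d * dq, x + d * dy, y

def phi2(q, p, x, y, d):
    dx, dp = dH(x, p)
    return q + d * dp, p, x, y - d * dx

def phi3(q, p, x, y, d, omega):
    c, s = math.cos(2 * omega * d), math.sin(2 * omega * d)
    u, v = q - x, p - y
    ru, rv = c * u + s * v, -s * u + c * v
    return 0.5 * (q + x + ru), 0.5 * (p + y + rv), 0.5 * (q + x - ru), 0.5 * (p + y - rv)

def one_step(z, d=0.01, omega=20.0):
    # phi1(d/2) o phi2(d/2) o phi3(d) o phi2(d/2) o phi1(d/2), applied right to left (N = 1).
    q, p, x, y = z
    q, p, x, y = phi1(q, p, x, y, d / 2)
    q, p, x, y = phi2(q, p, x, y, d / 2)
    q, p, x, y = phi3(q, p, x, y, d, omega)
    q, p, x, y = phi2(q, p, x, y, d / 2)
    q, p, x, y = phi1(q, p, x, y, d / 2)
    return torch.stack([q, p, x, y])

z0 = torch.tensor([0.7, -0.3, 0.7, -0.3])
M = torch.autograd.functional.jacobian(one_step, z0)
J = torch.zeros(4, 4)
J[0, 1], J[1, 0], J[2, 3], J[3, 2] = 1.0, -1.0, 1.0, -1.0   # matrix of dq^dp + dx^dy
print(torch.allclose(M.T @ J @ M, J, atol=1e-5))             # True up to floating-point error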
D OTHER EXPERIMENTS
We consider the pendulum, the Lotka–Volterra, the spring, the Hénon–Heiles, Tao's example (Tao, 2016), the Fourier form of the nonlinear Schrödinger equation, and the vortex particle systems in our implementation. The Hamiltonian energies of these systems (except the vortex particle system) are summarized as follows. Pendulum system: $H(q, p) = 3(1 - \cos(q)) + p^2$. Lotka–Volterra system: $H(q, p) = p - e^p + 2q - e^q$. Spring system: $H(q, p) = q^2 + p^2$. Hénon–Heiles system: $H(q_1, q_2, p_1, p_2) = (p_1^2 + p_2^2)/2 + (q_1^2 + q_2^2) + (q_1^2 q_2 - q_2^3/3)/2$. Tao's example (Tao, 2016): $H(q, p) = (q^2 + 1)(p^2 + 1)/2$. Fourier form of the nonlinear Schrödinger equation: $H(q_1, q_2, p_1, p_2) = \left[(q_1^2 + p_1^2)^2 + (q_2^2 + p_2^2)^2\right]/4 - (q_1^2 q_2^2 + p_1^2 p_2^2 - q_1^2 p_2^2 - p_1^2 q_2^2 + 4 q_1 q_2 p_1 p_2)$. The network is trained on datasets with noise $\sim 0.1\,U(-1, 1)$. The training time span, integral time step, and validation time span are 0.01, 0.01, and 0.1, respectively. Table 2 compares the Hamiltonian deviation $\epsilon_H = \|H(q_{truth}, p_{truth}) - H(q_{predict}, p_{predict})\|_2 / \|H(q_{truth}, p_{truth})\|_2$ and the prediction error $\epsilon_p = \|q_{truth} - q_{predict}\|_1 + \|p_{truth} - p_{predict}\|_1$. It is clear from Table 2 that NSSNN either outperforms or performs similarly to NeuralODE and HNN.
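The error metrics compared in Table 2 can be computed as in the short sketch below; how the deviations are aggregated over trajectories and time steps is not fully specified in the text, so this per-snapshot version is an assumption for illustration.

import torch

def hamiltonian_deviation(H, q_true, p_true, q_pred, p_pred):
    # epsilon_H = ||H(q*, p*) - H(q^, p^)||_2 / ||H(q*, p*)||_2
    h_true, h_pred = H(q_true, p_true), H(q_pred, p_pred)
    return torch.linalg.norm(h_true - h_pred) / torch.linalg.norm(h_true)

def prediction_error(q_true, p_true, q_pred, p_pred):
    # epsilon_p = ||q* - q^||_1 + ||p* - p^||_1
    return (q_true - q_pred).abs().sum() + (p_true - p_pred).abs().sum()

# Example with Tao's test Hamiltonian H(q, p) = 0.5 (q^2 + 1)(p^2 + 1).
H = lambda q, p: 0.5 * (q ** 2 + 1.0) * (p ** 2 + 1.0)
q_true, p_true = torch.tensor([1.0, 0.5]), torch.tensor([0.0, -0.2])
q_pred, p_pred = q_true + 0.01, p_true - 0.01
print(hamiltonian_deviation(H, q_true, p_true, q_pred, p_pred))
print(prediction_error(q_true, p_true, q_pred, p_pred))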
1. What is the focus and contribution of the paper on predicting Hamiltonian systems using deep learning? 2. What are the strengths of the proposed approach, particularly in comparison to prior works such as NeuralODE and HNN? 3. Are there any limitations or potential improvements regarding the method's ability to handle non-separable systems and learn coordinates from data? 4. Can you provide more information about the training and testing process, specifically regarding the use of a held-out test set and the presence of overfitting? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review Summary: This paper describes a deep learning approach for predicting Hamiltonian systems. The original paper enforces conservation in the loss function. Several of the follow-up papers embed a symplectic integrator instead, but these couldn't handle non-separable systems. This paper can both handle non-separable systems and use a symplectic integrator to enforce conservation. They demonstrate their system on quite a few examples and show lower error (both in prediction and the deviation in the Hamiltonian) than a NeuralODE or the original HNN paper. The final example, in Figure 5, shows a compelling visual improvement. Strong points: The Greydanus, et al. NeurIPS 2019 paper on Hamiltonian Neural Networks was very successful and has already inspired many follow-up papers. This paper improves it in several ways (Table 1). Since the ICLR community is similar to the NeurIPS community, I think that this paper would be of interest. I like that the extension for non-separable systems directly builds off an approach for extending integrators to non-separable systems (Tao, 2016). Further, the Tao integrator is built into the network training. This suggests that it's a robust path to take. I appreciate that results are reported on quite a few examples to compare NeuralODE, HNN, and the new method (NSSNN). The results in Figure 5 are quite impressive! I also think it's cool that the training data only needed to be from two particles. Weak points/clarification questions: In the examples in this paper, the canonical coordinates need to be known ahead of time. The original HNN paper has an example where the coordinates can be learned from data (Pixel Pendulum), and I believe some of the follow-up papers have covered this case as well. Do you have any thoughts on if your method could learn the coordinates? The font size in the figures is sometimes hard to read. In Section 4.2 & Table 2, is there a held-out test set, or are these all training errors? I would like to see both training & test errors to see if there is overfitting. Minor points: I found it confusing that Figure 3 & Section 4.1 refer to an "ablation test." It seems like a test to choose a suitable training set. Disclaimer: I should mention that I can't vouch for the proof in the appendix.
ICLR
Title Nonseparable Symplectic Neural Networks Abstract Predicting the behaviors of Hamiltonian systems has been drawing increasing attention in scientific machine learning. However, the vast majority of the literature was focused on predicting separable Hamiltonian systems with their kinematic and potential energy terms being explicitly decoupled while building data-driven paradigms to predict nonseparable Hamiltonian systems that are ubiquitous in fluid dynamics and quantum mechanics were rarely explored. The main computational challenge lies in the effective embedding of symplectic priors to describe the inherently coupled evolution of position and momentum, which typically exhibits intricate dynamics. To solve the problem, we propose a novel neural network architecture, Nonseparable Symplectic Neural Networks (NSSNNs), to uncover and embed the symplectic structure of a nonseparable Hamiltonian system from limited observation data. The enabling mechanics of our approach is an augmented symplectic time integrator to decouple the position and momentum energy terms and facilitate their evolution. We demonstrated the efficacy and versatility of our method by predicting a wide range of Hamiltonian systems, both separable and nonseparable, including chaotic vortical flows. We showed the unique computational merits of our approach to yield long-term, accurate, and robust predictions for large-scale Hamiltonian systems by rigorously enforcing symplectomorphism. 1 INTRODUCTION A Hamiltonian dynamic system refers to a formalism for modeling a physical system exhibiting some specific form of energy conservation during its temporal evolution. A typical example is a pendulum whose total energy (referred to as the system’s Hamiltonian) is conserved as a temporally invariant sum of its kinematic energy and potential energy. Mathematically, such energy conservation indicates a specific geometric structure underpinning its time integration, named as a symplectic structure, which further spawns a wide range of numerical time integrators to model Hamiltonian systems. These symplectic time integrators have proven their effectiveness in simulating a variety of energy-conserving dynamics when Hamiltonian expressions are known as a prior. Examples encompass applications in plasma physics (Morrison, 2005), electromagnetics (Li et al., 2019), fluid mechanics (Salmon, 1988), and celestial mechanics (Saari & Xia, 1996), to name a few. On another front, the emergence of the various machine learning paradigms with their particular focus on uncovering the hidden invariant quantities and their evolutionary structures enable a faithful prediction of Hamiltonian dynamics without knowing its analytical energy expression beforehand. The key mechanics underpinning these learning models lie in a proper embedding of the strong mathematical inductive priors to ensure Hamiltonian conservation in a neural network data flow. Typically, such priors are realized in a variational way or a structured way. For example, in Greydanus et al. (2019), the Hamiltonian conservation is encoded in the loss function. This category of methods does not assume any combinatorial pattern of the energy term and therefore relies on the inherent expressiveness of neural networks to distill the Hamiltonian structure from abundant training datasets (Choudhary et al., 2019). 
Another category of Hamiltonian networks, which we refer to as structured approaches, implements the conservation law indirectly by embedding a symplectic time integrator (DiPietro et al., 2020; Tong et al., 2020; Chen et al., 2020) or composition of linear, activation, and gradient modules (Jin et al., 2020) into the network architecture. ∗shiying.xiong@dartmouth.edu . One of the main limitations of the current structured methods lies in the separable assumption of the Hamiltonian expression. Examples of separable Hamiltonian systems include the pendulum, the Lotka–Volterra (Zhu et al., 2016), the Kepler (Antohe & Gladwell, 2004), and the Hénon–Heiles systems (Zotos, 2015). However, beyond this scope, there exist various nonseparable systems whose Hamiltonian has no explicit expression to decouple the position and momentum energies. Examples include incompressible flows (Suzuki et al., 2007), quantum systems (Bonnabel et al., 2009), rigid body dynamics (Chadaj et al., 2017), charged particle dynamics (Zhang et al., 2016), and nonlinear Schrödinger equation (Brugnano et al., 2018). This nonseparability typically causes chaos and instability, which further complicates the systems’ dynamics. Although SympNet in Jin et al. (2020) can be used to learn and predict nonseparable Hamiltonian systems, multiple matrices of the same order with system dimension are needed in the training process of SympNet, resulting in difficulties in generalizing into high-dimensional large-scale N-body problems which are common in a series of nonseparable Hamiltonian systems, such as quantum multibody problems and vortexparticle dynamics problems. Such chaotic and large-scale nature jointly adds shear difficulties for a conventional machine learning model to deliver faithful predictions. In this paper, we propose an effective machine learning paradigm to predict nonseparable Hamiltonian systems. We build a novel neural network architecture, named nonseparable symplectic neural networks (NSSNNs), to enable accurate and robust predictions of long-term Hamiltonian dynamics based on short-term observation data. Our proposed method belongs to the category of structured network architectures: it intrinsically embeds the symplectomorphism into the network design to strictly preserve the symplectic evolution and further conserves the unknown, nonseparable Hamiltonian energy. The enabling techniques we adopted in our learning framework consist of an augmented symplectic time integrator to asymptotically “decouple” the position and momentum quantities that were nonseparable in their original form. We also introduce the Lagrangian multiplier in the augmented phase space to improve the system’s numerical stability. Our network design is motivated by ideas originated from physics (Tao, 2016) and optimization (Boyd et al., 2004). The combination of these mathematical observations and numerical paradigms enables a novel neural network architecture that can drastically enhance both the scale and scope of the current predictions. We show a motivational example in Figure 1 by comparing our approach with a traditional HNN method (Greydanus et al., 2019) regarding their structural designs and predicting abilities. We refer the readers to Section 6 for a detailed discussion. As shown in Figure 1, the vortices evolved using NSSNN are separated nicely as the ground truth, while the vortices merge together using HNN due to the failure of conserving the symplectic structure of a nonseparable system. 
The conservative capability of NSSNN springs from our design of the auxiliary variables (red x and y) which converts the original nonseparable system into a higher dimensional quasi-separable system where we can adopt a symplectic integrator. 2 RELATED WORKS Data-driven physical prediction. Data-driven approaches have been widely applied in physical systems including fluid mechanics (Brunton et al., 2020), wave physics (Hughes et al., 2019), quantum physics (Sellier et al., 2019), thermodynamics (Hernandez et al., 2020), and material science (Teicherta et al., 2019). Among these different physical systems, data-driven fluid receives increasing attention. We refer the readers to Brunton et al. (2020) for a thorough survey of the fundamental machine learning methodologies as well as their uses for understanding, modeling, optimizing, and controlling fluid flows in experiments and simulations based on training data. One of the motivations of our work is to design a versatile learning approach that can predict complex fluid motions. On another front, many pieces of research focus on incorporating physical priors into the learning framework, e.g., by enforcing incompressibility (Mohan et al., 2020), the Galilean invariance (Ling et al., 2016), quasistatic equilibrium (Geng et al., 2020), the Lagrangian invariance (Cranmer et al., 2020), and Hamiltonian conservation (Hernandez et al., 2020; Greydanus et al., 2019; Jin et al., 2020; Zhong et al., 2020). Here, inspired by the idea of embedding physics priors into neural networks, we aim to accelerate the learning process and improve the accuracy of our model. Neural networks for Hamiltonian systems. Greydanus et al. (2019) introduced Hamiltonian neural networks (HNNs) to conserve the Hamiltonian energy of the system by reformulating the loss function. Inspired by HNN, a series of methods intrinsically embedding a symplectic integrator into the recurrent neural network was proposed, such as SRNN (Chen et al., 2020), TaylorNet (Tong et al., 2020) and SSINN (DiPietro et al., 2020), to solve separable Hamiltonian systems. Combined with graph networks (Sanchez-Gonzalez et al., 2019; Battaglia et al., 2016), these methods were further generalized to large-scale N-body problems induced by interaction force between the particle pairs. Jin et al. (2020) proposed SympNet by directly constructing the symplectic mapping of system variables within neighboring time steps to handle both separable and nonseparable Hamiltonian systems. However, the scale of parameters in SympNet for training N dimensional Hamiltonian system is O(N2), which makes it hard to be generalized to the high dimensional N-body problems. Our NSSNN overcomes these limitations by devising a new Hamiltonian network architecture that is specifically suited for nonseparable systems (see details in Section 5). In addition, the Hamiltonianbased neural networks can be extended to further applications. Toth et al. (2020) developed the Hamiltonian Generative Network (HGN) to learn Hamiltonian dynamics from high-dimensional observations (such as images). Moreover, Zhong et al. (2020) introduced Symplectic ODE-Net (SymODEN), which adds an external control term to the standard Hamiltonian dynamics. 3 FRAMEWORK 3.1 AUGMENTED HAMILTONIAN EQUATION We start by considering a Hamiltonian system with N pairs of canonical coordinates (i.e. N generalized positions and N generalized momentum). 
The time evolution of canonical coordinates is governed by the symplectic gradient of the Hamiltonian (Hand & Finch, 2008). Specifically, the time evolution of the system is governed by Hamilton’s equations as dq dt = ∂H ∂p , dp dt = −∂H ∂q , (1) with the initial condition (q,p)|t=t0 = (q0,p0). In a general setting, q = (q1, q2, · · · , qN ) represents the positions and p = (p1, p2, ...pN ) denotes their momentum. Function H = H(q,p) is the Hamiltonian, which corresponds to the total energy of the system. An important feature of Hamilton’s equations is its symplectomorphism (see Appendix B for a detailed overview). The symplectic structure underpinning our proposed network architecture draws inspirations from the original research of Tao (2016) in computational physics. In Tao (2016), a generic, high-order, explicit and symplectic time integrator was proposed to solve (1) of an arbitrary separable and nonseparable HamiltonianH. This is implemented by considering an augmented Hamiltonian H(q,p,x,y) := HA +HB + ωHC (2) with HA = H(q,y), HB = H(x,p), HC = 1 2 ( ‖q − x‖22 + ‖p− y‖22 ) (3) in an extended phase space with symplectic two form dq ∧ dp+ dx ∧ dy, where ω is a constant that controls the binding of the original system and the artificial restraint. Notice that the Hamilton’s equations forH dq dt = ∂H ∂p = ∂H(x,p) ∂p + ω(p− y), dp dt = −∂H ∂q = −∂H(q,y) ∂q − ω(q − x), dx dt = ∂H ∂y = ∂H(q,y) ∂y − ω(p− y), dy dt = −∂H ∂x = −∂H(x,p) ∂x + ω(q − x), (4) with the initial condition (q,p,x,y)|t=t0 = (q0,p0, q0,p0) have the same exact solution as (1) in the sense that (q,p,x,y) = (q,p, q,p). Hence, we can get the solution of (1) by solving (4). Furthermore, it is possible to construct high-order symplectic integrators forH in (4) with explicit updates. Our model aims to learn the dynamical evolution of (q,p) in (1) by embedding (4) into the framework of NeuralODE (Chen et al., 2018). The coefficient ω acts as a regularizer, which stabilizes the numerical results (see Section 4). 3.2 NONSEPARABLE HAMILTONIAN NEURAL NETWORK We learn the nonseparable Hamiltonian dynamics (1) by constructing an augmented system (4), from which we can obtain the energy function H(q,p) by training the neural network Hθ(q,p) with parameter θ and calculate the gradient∇Hθ(q,p) by taking the in-graph gradient. For the constructed network Hθ(q,p), we integrate (4) by using the second-order symplectic integrator (Tao, 2016). Specifically, we will have an input layer (q,p,x,y) = (q0,p0, q0,p0) at t = t0 and an output layer (q,p,x,y) = (qn,pn,xn,yn) at t = t0 +ndt. Algorithm 1 Integrate (4) by using the secondorder symplectic integrator Input: q0,p0, t0, t, dt; φδ1, φδ2, and φδ3 in (5); Output: (q̂, p̂, x̂, ŷ) = (qn,pn,xn,yn) 1 (q0,p0,x0,y0) = (q0,p0, q0,p0) n = floor[(t− t0)/dt] for i = 1→ n do 2 (qi,pi,xi,yi) = φ dt/2 1 ◦φ dt/2 2 ◦φdt3 ◦φ dt/2 2 ◦ φ dt/2 1 ◦ (qi−1,pi−1,xi−1,yi−1); 3 end The recursive relations of (qi,pi,xi,yi), i = 1, 2, · · · , n, can be expressed by the algorithm 1 (also see Figure 8 in Appendix A). The input functions φδ1(q,p,x,y), φ δ 2(q,p,x,y), and φ δ 3(q,p,x,y) in algorithm 1 are qp− δ[∂Hθ(q,y)/∂q]x+ δ[∂Hθ(q,y)/∂p] y , q + δ[∂Hθ(x,p)/∂p]px y − δ[∂Hθ(x,p)/∂q] , and 1 2 ( q + x p+ y ) +Rδ ( q − x p− y ) ( q + x p+ y ) −Rδ ( q − x p− y ) , (5) respectively. Here Rδ := [ cos(2ωδ)I sin(2ωδ)I − sin(2ωδ)I cos(2ωδ)I ] , where I is a identity matrix. (6) We remark that x and y are just auxiliary variables, which are theoretically equal to q and p. 
Therefore, we can use the dataset of (q,p) to construct the dataset containing the variables (q,p,x,y). In addition, by constructing the network Hθ, we show that Theorem B.1 in Appendix B holds, so the maps $\phi_1^\delta$, $\phi_2^\delta$, and $\phi_3^\delta$ in (5) preserve the symplectic structure of the system. Suppose that Φ1 and Φ2 are two symplectomorphisms. Then it is easy to show that their composite map Φ2 ◦ Φ1 is also a symplectomorphism due to the chain rule. Thus, the symplectomorphism of Algorithm 1 is guaranteed by Theorem B.1.

4 TRAINING SETTINGS AND ABLATION TESTS

We use 6 linear layers with hidden size 64 to model Hθ, all of which are followed by a Sigmoid activation function except the last one. The derivatives ∂Hθ/∂p, ∂Hθ/∂q, ∂Hθ/∂x, and ∂Hθ/∂y are all obtained by automatic differentiation in PyTorch (Paszke et al., 2019). The weights of the linear layers are initialized by Xavier initialization (Glorot & Bengio, 2010). We generate the dataset for training and validation using a high-precision numerical solver (Tao, 2016), where the ratio of training to validation data is 9:1. We set $(q_0^j, p_0^j)$ as the start input and $(q^j, p^j)$ as the target, with $j = 1, 2, \cdots, N_s$, and the time span between $(q_0^j, p_0^j)$ and $(q^j, p^j)$ is Ttrain. Feeding $(q_0, p_0) = (q_0^j, p_0^j)$, $t_0 = 0$, $t = T_{train}$, and the time step dt into Algorithm 1 yields the predicted variables $(\hat{q}^j, \hat{p}^j, \hat{x}^j, \hat{y}^j)$. Accordingly, the loss function is defined as
$$\mathcal{L}_{NSSNN} = \frac{1}{N_b}\sum_{j=1}^{N_b} \|q^{(j)} - \hat{q}^{(j)}\|_1 + \|p^{(j)} - \hat{p}^{(j)}\|_1 + \|q^{(j)} - \hat{x}^{(j)}\|_1 + \|p^{(j)} - \hat{y}^{(j)}\|_1, \qquad (7)$$
where $N_b = 512$ is the batch size of the training samples. We use the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.05. The learning rate is multiplied by 0.8 every 10 epochs.

Taking the system $H(q,p) = 0.5(q^2 + 1)(p^2 + 1)$ as an example, we carry out a series of ablation tests based on our constructed networks. Normally, we set the time span, time step, and dataset size as $T_{train} = 0.01$, $dt = 0.01$, and $N_s = 1280$. The choice of ω in (4) is largely flexible since NSSNN is not sensitive to the parameter ω once it is larger than a certain threshold. Figure 2 shows the training and validation losses with different ω when the network is trained on clean and noisy datasets. Though the convergence rates differ slightly, the runs with various ω converge to training and validation losses of similar magnitude. Here we set ω = 2000, but ω can be smaller than 2000; the only requirement for picking ω is that it has to be larger than O(10), which is detailed in Appendix C. We pick the L1 loss function to train our network due to its better performance. Figure 3 compares the validation losses obtained with different training loss functions when the network is trained on clean and noisy datasets. Figure 3(a) shows that a network trained with either the L1 or the MSE loss on a clean dataset converges to a small validation loss, but the network trained with the L1 loss converges relatively faster. Figures 3(b) and 3(c) both show that the network trained with the L1 loss on a noisy dataset converges to a smaller validation loss. In addition, we already introduced a regularization term in the symplectic integrator embedded in the network; thus, there is no need to add a regularization term to the loss function. The integral time step in the symplectic integrator is a vital parameter, and the choice of dt largely depends on the time span Ttrain.
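As a concrete, hedged illustration of the setup just described, the sketch below builds the 6-layer Sigmoid MLP for Hθ with Xavier initialization and trains it with the L1 loss of (7), Adam at learning rate 0.05, and a 0.8 decay every 10 epochs. It reuses the `integrate` routine sketched after Algorithm 1; the data-loader format and all names are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class HNet(nn.Module):
    """Scalar energy H_theta(q, p): 6 linear layers, hidden width 64, Sigmoid activations."""
    def __init__(self, dim):
        super().__init__()
        sizes = [2 * dim] + [64] * 5 + [1]
        layers = []
        for i in range(len(sizes) - 1):
            lin = nn.Linear(sizes[i], sizes[i + 1])
            nn.init.xavier_uniform_(lin.weight)          # Xavier initialization
            layers.append(lin)
            if i < len(sizes) - 2:
                layers.append(nn.Sigmoid())              # no activation after the last layer
        self.net = nn.Sequential(*layers)

    def forward(self, q, p):
        return self.net(torch.cat([q, p], dim=-1)).squeeze(-1)

def train(model, loader, t_train=0.01, dt=0.01, omega=2000.0, epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.8)
    for _ in range(epochs):
        for q0, p0, q1, p1 in loader:                    # (q0, p0) -> (q1, p1) over t_train
            qh, ph, xh, yh = integrate(model, q0, p0, 0.0, t_train, dt, omega)
            # Eq. (7): L1 mismatch of both the main and the auxiliary variables.
            loss = ((q1 - qh).abs().sum(-1) + (p1 - ph).abs().sum(-1) +
                    (q1 - xh).abs().sum(-1) + (p1 - yh).abs().sum(-1)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
```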
Figure 4 compares the validation losses obtained with various integral time steps dt for fixed dataset time spans Ttrain = 0.01, 0.1, and 0.2, respectively, during training. The validation loss converges to a similar level for various dt with fixed Ttrain = 0.01 and Ttrain = 0.1 in Figures 4(a) and (b), while it increases significantly as dt increases with fixed Ttrain = 0.2 in Figure 4(c). Thus, a relatively small dt should be used for datasets with a larger time span Ttrain.

5 COMPARISONS WITH OTHER METHODS

5.1 METHODOLOGIES

We compare our method with other recently proposed methods, such as HNN (Greydanus et al., 2019), NeuralODE (Chen et al., 2018), TaylorNet (Tong et al., 2020), SSINN (DiPietro et al., 2020), SRNN (Chen et al., 2020), and SympNet (Jin et al., 2020). There are several features distinguishing our method from the others, as shown in Table 1. HNN first enforces the conservative features of a Hamiltonian system by reformulating its loss function, which incurs two main shortcomings. On the one hand, it requires the temporal derivatives of the momentum and the position of the system to calculate the loss function, which are difficult to obtain from real-world systems. On the other hand, HNN does not strictly preserve the symplectic structure, because its symplectomorphism is realized by its loss function rather than by its intrinsic network architecture. NeuralODE successfully bypasses the time derivatives of the datasets by incorporating an integration solver into the network architecture. Embedding the Hamiltonian prior into NeuralODE, a series of methods were proposed, such as SRNN, SSINN, and TaylorNet, to predict the continuous trajectory of the system variables; however, these methods are presently designed only for separable Hamiltonian systems. Instead of updating the continuous dynamics by integrating the neural networks as in NeuralODE, SympNet adopts a symplectomorphism composed of well-designed linear and non-linear modules to intrinsically map the system variables between neighboring time steps. However, the number of parameters in the matrix map of SympNet for training an N-dimensional Hamiltonian system scales as O(N^2), which makes it hard to generalize to high-dimensional N-body problems. For example, in Section 6, we predict the dynamic evolution of 6000 vortex particles, which is challenging for the training process of SympNet at the level of O(6000^2). NSSNN overcomes the weaknesses mentioned above. Under the framework of NeuralODE, NSSNN utilizes continuously-defined dynamics in the neural networks, which gives it the capability to learn the continuous-time evolution of dynamical systems. Based on Tao (2016), NSSNN embeds the symplectic prior into the nonseparable symplectic integrator to ensure strict symplectomorphism, thereby guaranteeing long-term predictability. In addition, unlike SympNet, NSSNN is highly flexible and can be generalized to high-dimensional N-body problems by involving interaction networks (Sanchez-Gonzalez et al., 2019), which will be further discussed in Section 6.

5.2 EXPERIMENTS

We compare five implementations that learn and predict Hamiltonian systems. The first one is NeuralODE, which trains the system by embedding the network fθ → (dq/dt, dp/dt) into a Runge-Kutta (RK) integrator. The other four achieve the goal by fitting the Hamiltonian Hθ → H based on (1).
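As an illustration (not the baseline's actual code), the NeuralODE-style update can be sketched as a learned vector field stepped by a second-order (midpoint) Runge-Kutta scheme; `f_theta` and all other names are our own placeholders.

```python
def rk2_step(f_theta, q, p, dt):
    # Midpoint (second-order) Runge-Kutta step for the NeuralODE baseline, where the
    # network f_theta(q, p) -> (dq/dt, dp/dt) is learned directly. For the variants that
    # fit H_theta instead, the vector field would be (dH/dp, -dH/dq) obtained by autograd.
    k1q, k1p = f_theta(q, p)
    k2q, k2p = f_theta(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
    return q + dt * k2q, p + dt * k2p
```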
Specifically, HNN trains the network with the constraint that the Hamiltonian symplectic gradient matches the time derivatives of the system variables and then embeds the well-trained Hθ into the RK integrator for predicting the system. The third and fourth implementations are ablation tests. One of them is improved HNN (IHNN), which embeds the well-trained Hθ into the nonseparable symplectic integrator (Tao's integrator) for prediction. The other directly embeds Hθ into the RK integrator for training, which we call HRK. The fifth method is NSSNN, which embeds Hθ into the nonseparable symplectic integrator for training. For a fair comparison, we adopt the same network structure (except that the dimension of the output layer in NeuralODE is two times larger than that in the other four), the same L1 loss function, and the same dataset size; the precision of all integration schemes is second order, and the other parameters are kept consistent with those in Section 4. The time derivatives in the dataset for training HNN and IHNN are obtained by the first-difference method
$$\frac{dq}{dt} \approx \frac{q(T_{train}) - q(0)}{T_{train}} \qquad \text{and} \qquad \frac{dp}{dt} \approx \frac{p(T_{train}) - p(0)}{T_{train}}. \qquad (8)$$
Figure 5 demonstrates the differences between the five methods using the spring system $H = 0.5(q^2 + p^2)$ with different time spans Ttrain = 0.4, 1 and the same time step dt = 0.2. We can see that, by introducing the nonseparable symplectic integrator into the prediction of the Hamiltonian system, NSSNN has a stronger long-term predicting ability than all the other methods. In addition, the predictions of HNN and IHNN rely on datasets with time derivatives; consequently, this leads to a larger error when the given time span Ttrain is large. Moreover, the datasets obtained by (8) in HNN and IHNN are sensitive to noise. Figure 6 compares the predictions of (q,p) for the system $H = 0.5(q^2 + 1)(p^2 + 1)$, where the network is trained on a dataset with noise ~ 0.05 U(−1, 1), with time span Ttrain = 0.2 and time step dt = 0.02. Under noisy conditions, NSSNN still performs well compared with the other methods. Also, we compare the convergence errors of a series of Hamiltonian systems with different H trained on noisy data in Appendix D, where NSSNN generally shows better robustness than HNN.

6 MODELING VORTEX DYNAMICS OF MULTI-PARTICLE SYSTEM

For two-dimensional vortex particle systems, the dynamical equations of the particle positions $(x_j, y_j)$, $j = 1, 2, \cdots, N_v$, with particle strengths $\Gamma_j$ can be written in the generalized Hamiltonian form
$$\Gamma_j \frac{dx_j}{dt} = -\frac{\partial H^p}{\partial y_j}, \qquad \Gamma_j \frac{dy_j}{dt} = \frac{\partial H^p}{\partial x_j}, \qquad \text{with} \qquad H^p = \frac{1}{4\pi}\sum_{j,k=1}^{N_v} \Gamma_j \Gamma_k \log(|x_j - x_k|). \qquad (9)$$
By including the given particle strengths $\Gamma_j$ in Algorithm 1, we can still adopt the method mentioned above to learn the Hamiltonian in (9) when there are few particles. However, considering a system with $N_v \gg 2$ particles, the cost of collecting training data from all $N_v$ particles might be high, and the training process can be time-consuming. Thus, instead of collecting information from all $N_v$ particles to train our model, we only use data collected from two bodies as training data to make predictions of the dynamics of $N_v$ particles. Specifically, we assume the interaction models between particle pairs with unit particle strengths $\Gamma_j = 1$ are the same, and their corresponding Hamiltonian can be represented by a network $\hat{H}_\theta(x_j, x_k)$, based on which the corresponding Hamiltonian of the $N_v$ particles can be written as (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2019)
$$H^p_\theta = \sum_{j,k=1}^{N_v} \Gamma_j \Gamma_k \hat{H}_\theta(x_j, x_k). \qquad (10)$$
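Before describing how (10) is embedded into the integrator, here is a minimal sketch of the pairwise aggregation in (10). `H_pair` stands for the two-body network Ĥθ, self-interaction pairs are skipped (consistent with the pairwise log kernel of (9)), and all names are illustrative assumptions rather than the authors' code.

```python
import torch

def vortex_hamiltonian(H_pair, pos, gamma):
    # pos: (Nv, 2) particle positions; gamma: (Nv,) particle strengths.
    # Eq. (10): H^p_theta = sum_{j != k} Gamma_j * Gamma_k * H_pair(x_j, x_k).
    Nv = pos.shape[0]
    j, k = torch.meshgrid(torch.arange(Nv), torch.arange(Nv), indexing="ij")
    mask = (j != k).reshape(-1)                     # drop self-interaction terms
    j, k = j.reshape(-1)[mask], k.reshape(-1)[mask]
    weights = gamma[j] * gamma[k]
    return (weights * H_pair(pos[j], pos[k]).reshape(-1)).sum()
```

Because the two-body network is shared across all pairs, the same model trained on two-body data can be evaluated on arbitrarily many particles.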
We embed (10) into the symplectic integrator that includes $\Gamma_j$ to obtain the final network architecture. The setup of the multi-particle problem is similar to the previous problems. The training time span is Ttrain = 0.01, while the prediction period can be up to Tpredict = 40. We use 2048 clean data samples to train our model. The training process takes about 100 epochs for the loss to converge. In Figure 7, we use our trained model to predict the dynamics of 6000-particle systems, including Taylor and Leapfrog vortices. We generate results for the Taylor vortex and the Leapfrog vortex using NSSNN and HNN and compare them with the ground truth. Vortex elements are used with the corresponding initial vorticity conditions of the Taylor vortex and the Leapfrog vortex (Qu et al., 2019). The difficulty of the numerical modeling of these two systems lies in keeping the different dynamical vortices separated instead of having them merge into a bigger structure. In both cases, the vortices evolved using NSSNN are separated nicely, as the ground truth shows, while the vortices merge together using HNN.

7 LIMITATIONS

The network with the embedded integrator is often more time-consuming to train than one trained on a dataset with time derivatives. For example, the ratio of training times of HNN and NSSNN is 1:3 when dt = Ttrain, and the training time of the recurrent networks further increases as dt decreases. Although a smaller dt often has higher discretization accuracy, there is a tradeoff between training cost and prediction accuracy. Additionally, a smaller dt may potentially cause gradient explosion; in this case, we may want to use the adjoint method instead. Another limitation lies in the assumption that the symplectic structure is conserved. In real-world systems, there could be dissipation that invalidates this assumption.

8 CONCLUSIONS

We incorporate a classic idea that maps a nonseparable system to a higher-dimensional space, making it quasi-separable, to construct symplectic networks. With its intrinsic symplectic structure, NSSNN possesses many benefits compared with other methods. In particular, NSSNN is the first method that can learn the vortex dynamical system and accurately predict the evolution of complex vortex structures, such as Taylor and Leapfrog vortices. NSSNN, based on first principles of learning complex systems, has potential applications in fields such as physics, astronomy, and weather forecasting. We will further explore the possibilities of neural networks with inherent structure-preserving ability in fields like 3D vortex dynamics and quantum turbulence. In addition, we will also work on general applications of NSSNN with datasets based on images or other real scenes by automatically identifying the coordinate variables of Hamiltonian systems with neural networks.

ACKNOWLEDGMENTS

This project is supported in part by the Neukom Institute CompX Faculty Grant, the Burke Research Initiation Award, and a ByteDance Gift Donation. Yunjin Tong is supported by the Dartmouth Women in Science Project (WISP), the Undergraduate Advising and Research Program (UGAR), and the Neukom Scholars Program.

A NETWORK ARCHITECTURE

Figure 8(a) shows that the forward pass of NSSNN is composed of a forward pass through a differentiable symplectic integrator as well as a backpropagation step through the model. Figure 8(b) plots the schematic diagram of NSSNN. For the constructed network Hθ(q,p), we integrate (4) using the second-order symplectic integrator (Tao, 2016).
Specifically, the input layer of the integrator is $(q,p,x,y) = (q_0, p_0, q_0, p_0)$ at $t = t_0$ and the output layer is $(q,p,x,y) = (q_n, p_n, x_n, y_n)$ at $t = t_0 + n\,dt$. The recursive relations of $(q_i, p_i, x_i, y_i)$, $i = 1, 2, \cdots, n$, are expressed by Algorithm 1.

B SYMPLECTOMORPHISMS

One of the most important features of the time evolution of Hamilton's equations is that it is a symplectomorphism, i.e., a transformation of phase space that is volume-preserving. In canonical coordinates, symplectomorphism means that the phase flow of a Hamiltonian system conserves the symplectic two-form
$$dq \wedge dp \equiv \sum_{j=1}^{N} (dq_j \wedge dp_j), \qquad (11)$$
where ∧ denotes the wedge product of two differential forms. The rules of wedge products can be found in Lee (2010). In the two-dimensional case, (11) can be understood as the area element of the surface, and the symplectomorphism can be interpreted as stating that this area element is constant. As proved below, our constructed network structure intrinsically preserves the Hamiltonian structure.

Theorem B.1. For a given δ, the mappings $\phi_1^\delta$, $\phi_2^\delta$, and $\phi_3^\delta$ in (5) are symplectomorphisms.

Proof. Let
$$(t^q_j, t^p_j, t^x_j, t^y_j) = \phi^\delta_j(q,p,x,y), \qquad j = 1, 2, 3. \qquad (12)$$
From the first equation of (5), we have
$$dt^q_1 \wedge dt^p_1 + dt^x_1 \wedge dt^y_1 = dq \wedge d\!\left[p - \delta\,\frac{\partial H_\theta(q,y)}{\partial q}\right] + d\!\left[x + \delta\,\frac{\partial H_\theta(q,y)}{\partial p}\right] \wedge dy = dq \wedge dp + dx \wedge dy + \delta\left[\frac{\partial^2 H_\theta(q,y)}{\partial q\,\partial y} - \frac{\partial^2 H_\theta(q,y)}{\partial y\,\partial q}\right] dq \wedge dy = dq \wedge dp + dx \wedge dy. \qquad (13)$$
Similarly, we can prove that $dt^q_2 \wedge dt^p_2 + dt^x_2 \wedge dt^y_2 = dq \wedge dp + dx \wedge dy$. In addition, from the third equation of (5), we can directly deduce that $dt^q_3 \wedge dt^p_3 + dt^x_3 \wedge dt^y_3 = dq \wedge dp + dx \wedge dy$.

Suppose that Φ1 and Φ2 are two symplectomorphisms. Then it is easy to show that their composite map Φ2 ◦ Φ1 is also a symplectomorphism due to the chain rule. Thus, the symplectomorphism of Algorithm 1 is guaranteed by Theorem B.1.

C DETERMINING COEFFICIENT ω

To further elucidate, the Hamiltonian $H_A + H_B$ without the binding, i.e., $\bar{H}$ with ω = 0, in the extended phase space (q,p,x,y) may not be integrable, even if H(q,p) is integrable in the original phase space (q,p). However, $H_C$ is integrable. Thus, as ω increases, a larger proportion of the phase space of $\bar{H}$ corresponds to regular behavior (Kolmogorov, 1954). For $H(q,p) = (q^2 + 1)(p^2 + 1)/2$, shown in Fig. 9, we compare the trajectories starting from $[q(0), p(0), x(0), y(0)] = (-3, 0, -3, 0)$ calculated by the symplectic integrator (Tao, 2016) with different ω, where the integrator is second-order accurate and the time interval is 0.001. As shown in Figs. 9(a)-(d), the chaotic region in phase space shrinks significantly until a stable limit cycle forms. We define $\epsilon = \|(q,p) - (x,y)\|_2$ as the calculation error of this system; Figs. 9(e)-(h) show that the error decreases as ω increases, which fits the quantitative results of the phase trajectories well.

D OTHER EXPERIMENTS

We consider the pendulum, the Lotka–Volterra, the spring, the Hénon–Heiles, Tao's example (Tao, 2016), the Fourier form of the nonlinear Schrödinger equation, and the vortex particle systems in our implementation. The Hamiltonian energies of these systems (except the vortex particle system) are summarized as follows. Pendulum system: $H(q,p) = 3(1 - \cos q) + p^2$. Lotka–Volterra system: $H(q,p) = p - e^p + 2q - e^q$. Spring system: $H(q,p) = q^2 + p^2$. Hénon–Heiles system: $H(q_1, q_2, p_1, p_2) = (p_1^2 + p_2^2)/2 + (q_1^2 + q_2^2) + (q_1^2 q_2 - q_2^3/3)/2$. Tao's example (Tao, 2016): $H(q,p) = (q^2 + 1)(p^2 + 1)/2$.
Fourier form of the nonlinear Schrödinger equation: $H(q_1, q_2, p_1, p_2) = \left[(q_1^2 + p_1^2)^2 + (q_2^2 + p_2^2)^2\right]/4 - (q_1^2 q_2^2 + p_1^2 p_2^2 - q_1^2 p_2^2 - p_1^2 q_2^2 + 4 q_1 q_2 p_1 p_2)$. The network is trained on a dataset with noise ~ 0.1 U(−1, 1). The training time span, integral time step, and validation time span are 0.01, 0.01, and 0.1, respectively. Table 2 compares the Hamiltonian deviation $\epsilon_H = \|H(q_{truth}, p_{truth}) - H(q_{predict}, p_{predict})\|_2 / \|H(q_{truth}, p_{truth})\|_2$ and the prediction error $\epsilon_p = \|q_{truth} - q_{predict}\|_1 + \|p_{truth} - p_{predict}\|_1$. It is clear from Table 2 that NSSNN either outperforms or performs similarly to NeuralODE and HNN.
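For completeness, the two evaluation metrics reported in Table 2 can be computed as in the short sketch below (a straightforward reading of the definitions above; tensor shapes and names are our own assumptions).

```python
import torch

def hamiltonian_deviation(H, q_true, p_true, q_pred, p_pred):
    # eps_H = ||H(q_true, p_true) - H(q_pred, p_pred)||_2 / ||H(q_true, p_true)||_2
    h_true, h_pred = H(q_true, p_true), H(q_pred, p_pred)
    return torch.linalg.norm(h_true - h_pred) / torch.linalg.norm(h_true)

def prediction_error(q_true, p_true, q_pred, p_pred):
    # eps_p = ||q_true - q_pred||_1 + ||p_true - p_pred||_1
    return (q_true - q_pred).abs().sum() + (p_true - p_pred).abs().sum()
```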
1. How does the proposed method compare to other approaches in solving non-separable Hamiltonian systems?
2. What are the weaknesses of variational methods in tackling non-separable systems?
3. How does the proposed method perform in terms of computational and memory complexity?
4. What are the relative contributions of the different components of the proposed method to its improved performance?
5. How does the proposed method compare to recent competing methods in terms of performance and tradeoffs?
6. What are some potential areas where the proposed method falls short?
7. How can the experimentation section be improved to provide a more convincing evaluation of the proposed method?
8. What are some specific unclear phrases or sentences in the paper that need improvement?
9. How can the conclusion be strengthened by discussing the potential tradeoffs involved in choosing the proposed method over others?
Review
The work proposes a novel method for solving non-separable Hamiltonian systems, using Tao's approach in which two copies of the phase space are tied together by an additional Hamiltonian. This appears to be a novel proposal, and certainly of interest.

Positioning w.r.t. related work. The proposed method specifically focuses on non-separable systems. The work by Jin et al. (2020) is mentioned in passing, but that work appears to address the problem of non-separability as well. It would thus make sense to dive deeper into how the proposed method compares to Jin's, both theoretically and empirically. Some arguments require citations, such as "variational methods are not well suited for tackling such challenges due to heir inherent weaknesses": which weaknesses?

Discussion of the method. Although the method appears to perform favourably in all provided benchmarks, it remains unclear how the method compares on computational and memory complexity. This should be added to the discussion, with a theoretical and/or empirical analysis. It would also be beneficial if areas were highlighted where the method falls short.

Experiment section. Although there is a section called 'Ablation test', an actual empirical ablation study is missing. As the proposed method introduces various moving parts, including a specific loss, phase-space parametrisation, and a specific neural architecture for H, it is not clear what the relative contribution of these parts is to the improved performance over the baselines. A closer study of this, as well as of the influence of the hyperparameters such as t, would shine more light on the characteristics of the proposed method. Moreover, the experimentation section does not convince that the baselines' hyperparameters have been fairly tuned, nor whether a potential increase of parameters in H might contribute to the stronger performance.

Clarity of writing. I found the paper difficult to follow. The meaning of some phrases is unclear, e.g. 'which contributes robust for wider datasets'.

Conclusion. The work is interesting, but the analysis of the method is incomplete due to the lack of comparison with a recent competing method, and a missing discussion of the potential tradeoffs involved in choosing this method over others.

A couple of nitpicks outside of the review: "Nonsep_e_rable" appears a couple of times; Line 197: 'systmes'.

Update: The authors have addressed the most pressing issues with the manuscript. I've increased my score and vote in favour of accept. The section on limitations was difficult to follow and would benefit from a more structured comparison with competing methods.
ICLR
1. What is the main contribution of the paper, and how does it extend the symplectic family of network architectures?
2. How does the proposed Nonseparable Symplectic NNs (NSSNNs) differ from vanilla HNNs and NeuralODEs when applied to nonseparable Hamiltonian systems?
3. What is the significance of implementing a symplectic integration schema (from Tao (2016)) for solving arbitrary nonseparable (and separable) Hamiltonian systems within a symplectic neural network architecture?
4. Why do the authors choose \omega = 2000 in their experiments, and what is the role of \omega as a regularization?
5. How do the results in Fig. 4 determine that NSSNNs can perform long-term predictions, but HNNs and NeuralODEs fail?
6. Can you provide more details on how SympNets (Jin et al., 2020) are related to NSSNNs and how they differ?
7. What are some minor comments and typos in the review, such as "TaylorNet" and "SSINN"?
Review
The paper extends the symplectic family of network architectures towards modeling nonseparable Hamiltonian dynamic systems. More specifically, the paper implements a symplectic integration schema (from Tao (2016)) for solving arbitrary nonseparable (and separable) Hamiltonian systems within a symplectic neural network architecture. The results from several modeling tasks show that the proposed Nonseparable Symplectic NNs (NSSNNs) are more robust and accurate than vanilla HNNs and NeuralODEs when applied to nonseparable Hamiltonian systems. Although the idea of modeling nonseparable Hamiltonian systems with symplectic NNs was already briefly outlined in the SRNN paper (Chen et al 2020), this paper implements it and further analyses various properties of this approach.

Overall, the paper is well structured and well written; however, there are still some inconsistencies that need to be addressed and clarified. Namely, the related work discussion is handled somewhat poorly: for instance, the authors state in only one sentence that NSSNNs are closely related to SympNets (Jin et al 2020), without discussing any further details on how they are related and, more importantly, how they differ. Moreover, from that point on, SympNets are never considered (in the experiments) nor mentioned, even though SympNets are indeed able to model nonseparable Hamiltonian systems. In Table 1, which compares the properties of NSSNNs w.r.t. some benchmarks, the authors discuss "TaylorNet" and "SSINN" - these two are never introduced before. I assume the former refers to Tong et al. 2020, but I have no idea about the latter.

Regarding the choice of \omega, the authors provide some evidence that \omega plays a role as a regularization, where larger values tend to restrain the system. The analyses given in Appendix B show that with \omega > 10 the system is already stable (which also supports the experiments presented in Tao 2016). But then \omega is set to 2000 in the experiments, which is orders of magnitude larger than in the analyses. How and why was this value chosen?

Lines 206-207 state that, from the results in Fig. 4, it is "clear that" NSSNNs can perform long-term predictions but HNNs and NeuralODEs (in the legend they are listed as ODE-nets; are these the same method?) fail. It is not clear how this was determined, since the results show that NSSNNs are more robust to noise than the other two, NeuralODEs are still able to perform long-term predictions (in a noiseless setting), and HNNs in both scenarios, w/o noise and w/ a moderate amount of noise.

Some typos and minor comments:
L1: Hamiltonian systems are not a "special" category of physical systems, but a formalism for modeling certain physical systems (e.g. a pendulum, besides within Hamiltonian mechanics, can be modeled within classical (Newtonian) mechanics and Lagrangian mechanics).
L42: "e.g. see Tong et al. 2020" -> "Tong et al. 2020"
L56: "degree of Freedoms" -> "degrees of freedom"
L206: "figure 4" -> "Figure 4"

#Update I thank the authors for addressing my questions and revising the manuscript, which clarified many of my concerns regarding this work.
ICLR
Title Nonseparable Symplectic Neural Networks Abstract Predicting the behaviors of Hamiltonian systems has been drawing increasing attention in scientific machine learning. However, the vast majority of the literature was focused on predicting separable Hamiltonian systems with their kinematic and potential energy terms being explicitly decoupled while building data-driven paradigms to predict nonseparable Hamiltonian systems that are ubiquitous in fluid dynamics and quantum mechanics were rarely explored. The main computational challenge lies in the effective embedding of symplectic priors to describe the inherently coupled evolution of position and momentum, which typically exhibits intricate dynamics. To solve the problem, we propose a novel neural network architecture, Nonseparable Symplectic Neural Networks (NSSNNs), to uncover and embed the symplectic structure of a nonseparable Hamiltonian system from limited observation data. The enabling mechanics of our approach is an augmented symplectic time integrator to decouple the position and momentum energy terms and facilitate their evolution. We demonstrated the efficacy and versatility of our method by predicting a wide range of Hamiltonian systems, both separable and nonseparable, including chaotic vortical flows. We showed the unique computational merits of our approach to yield long-term, accurate, and robust predictions for large-scale Hamiltonian systems by rigorously enforcing symplectomorphism. 1 INTRODUCTION A Hamiltonian dynamic system refers to a formalism for modeling a physical system exhibiting some specific form of energy conservation during its temporal evolution. A typical example is a pendulum whose total energy (referred to as the system’s Hamiltonian) is conserved as a temporally invariant sum of its kinematic energy and potential energy. Mathematically, such energy conservation indicates a specific geometric structure underpinning its time integration, named as a symplectic structure, which further spawns a wide range of numerical time integrators to model Hamiltonian systems. These symplectic time integrators have proven their effectiveness in simulating a variety of energy-conserving dynamics when Hamiltonian expressions are known as a prior. Examples encompass applications in plasma physics (Morrison, 2005), electromagnetics (Li et al., 2019), fluid mechanics (Salmon, 1988), and celestial mechanics (Saari & Xia, 1996), to name a few. On another front, the emergence of the various machine learning paradigms with their particular focus on uncovering the hidden invariant quantities and their evolutionary structures enable a faithful prediction of Hamiltonian dynamics without knowing its analytical energy expression beforehand. The key mechanics underpinning these learning models lie in a proper embedding of the strong mathematical inductive priors to ensure Hamiltonian conservation in a neural network data flow. Typically, such priors are realized in a variational way or a structured way. For example, in Greydanus et al. (2019), the Hamiltonian conservation is encoded in the loss function. This category of methods does not assume any combinatorial pattern of the energy term and therefore relies on the inherent expressiveness of neural networks to distill the Hamiltonian structure from abundant training datasets (Choudhary et al., 2019). 
Another category of Hamiltonian networks, which we refer to as structured approaches, implements the conservation law indirectly by embedding a symplectic time integrator (DiPietro et al., 2020; Tong et al., 2020; Chen et al., 2020) or composition of linear, activation, and gradient modules (Jin et al., 2020) into the network architecture. ∗shiying.xiong@dartmouth.edu . One of the main limitations of the current structured methods lies in the separable assumption of the Hamiltonian expression. Examples of separable Hamiltonian systems include the pendulum, the Lotka–Volterra (Zhu et al., 2016), the Kepler (Antohe & Gladwell, 2004), and the Hénon–Heiles systems (Zotos, 2015). However, beyond this scope, there exist various nonseparable systems whose Hamiltonian has no explicit expression to decouple the position and momentum energies. Examples include incompressible flows (Suzuki et al., 2007), quantum systems (Bonnabel et al., 2009), rigid body dynamics (Chadaj et al., 2017), charged particle dynamics (Zhang et al., 2016), and nonlinear Schrödinger equation (Brugnano et al., 2018). This nonseparability typically causes chaos and instability, which further complicates the systems’ dynamics. Although SympNet in Jin et al. (2020) can be used to learn and predict nonseparable Hamiltonian systems, multiple matrices of the same order with system dimension are needed in the training process of SympNet, resulting in difficulties in generalizing into high-dimensional large-scale N-body problems which are common in a series of nonseparable Hamiltonian systems, such as quantum multibody problems and vortexparticle dynamics problems. Such chaotic and large-scale nature jointly adds shear difficulties for a conventional machine learning model to deliver faithful predictions. In this paper, we propose an effective machine learning paradigm to predict nonseparable Hamiltonian systems. We build a novel neural network architecture, named nonseparable symplectic neural networks (NSSNNs), to enable accurate and robust predictions of long-term Hamiltonian dynamics based on short-term observation data. Our proposed method belongs to the category of structured network architectures: it intrinsically embeds the symplectomorphism into the network design to strictly preserve the symplectic evolution and further conserves the unknown, nonseparable Hamiltonian energy. The enabling techniques we adopted in our learning framework consist of an augmented symplectic time integrator to asymptotically “decouple” the position and momentum quantities that were nonseparable in their original form. We also introduce the Lagrangian multiplier in the augmented phase space to improve the system’s numerical stability. Our network design is motivated by ideas originated from physics (Tao, 2016) and optimization (Boyd et al., 2004). The combination of these mathematical observations and numerical paradigms enables a novel neural network architecture that can drastically enhance both the scale and scope of the current predictions. We show a motivational example in Figure 1 by comparing our approach with a traditional HNN method (Greydanus et al., 2019) regarding their structural designs and predicting abilities. We refer the readers to Section 6 for a detailed discussion. As shown in Figure 1, the vortices evolved using NSSNN are separated nicely as the ground truth, while the vortices merge together using HNN due to the failure of conserving the symplectic structure of a nonseparable system. 
The conservative capability of NSSNN springs from our design of the auxiliary variables (red x and y) which converts the original nonseparable system into a higher dimensional quasi-separable system where we can adopt a symplectic integrator. 2 RELATED WORKS Data-driven physical prediction. Data-driven approaches have been widely applied in physical systems including fluid mechanics (Brunton et al., 2020), wave physics (Hughes et al., 2019), quantum physics (Sellier et al., 2019), thermodynamics (Hernandez et al., 2020), and material science (Teicherta et al., 2019). Among these different physical systems, data-driven fluid receives increasing attention. We refer the readers to Brunton et al. (2020) for a thorough survey of the fundamental machine learning methodologies as well as their uses for understanding, modeling, optimizing, and controlling fluid flows in experiments and simulations based on training data. One of the motivations of our work is to design a versatile learning approach that can predict complex fluid motions. On another front, many pieces of research focus on incorporating physical priors into the learning framework, e.g., by enforcing incompressibility (Mohan et al., 2020), the Galilean invariance (Ling et al., 2016), quasistatic equilibrium (Geng et al., 2020), the Lagrangian invariance (Cranmer et al., 2020), and Hamiltonian conservation (Hernandez et al., 2020; Greydanus et al., 2019; Jin et al., 2020; Zhong et al., 2020). Here, inspired by the idea of embedding physics priors into neural networks, we aim to accelerate the learning process and improve the accuracy of our model. Neural networks for Hamiltonian systems. Greydanus et al. (2019) introduced Hamiltonian neural networks (HNNs) to conserve the Hamiltonian energy of the system by reformulating the loss function. Inspired by HNN, a series of methods intrinsically embedding a symplectic integrator into the recurrent neural network was proposed, such as SRNN (Chen et al., 2020), TaylorNet (Tong et al., 2020) and SSINN (DiPietro et al., 2020), to solve separable Hamiltonian systems. Combined with graph networks (Sanchez-Gonzalez et al., 2019; Battaglia et al., 2016), these methods were further generalized to large-scale N-body problems induced by interaction force between the particle pairs. Jin et al. (2020) proposed SympNet by directly constructing the symplectic mapping of system variables within neighboring time steps to handle both separable and nonseparable Hamiltonian systems. However, the scale of parameters in SympNet for training N dimensional Hamiltonian system is O(N2), which makes it hard to be generalized to the high dimensional N-body problems. Our NSSNN overcomes these limitations by devising a new Hamiltonian network architecture that is specifically suited for nonseparable systems (see details in Section 5). In addition, the Hamiltonianbased neural networks can be extended to further applications. Toth et al. (2020) developed the Hamiltonian Generative Network (HGN) to learn Hamiltonian dynamics from high-dimensional observations (such as images). Moreover, Zhong et al. (2020) introduced Symplectic ODE-Net (SymODEN), which adds an external control term to the standard Hamiltonian dynamics. 3 FRAMEWORK 3.1 AUGMENTED HAMILTONIAN EQUATION We start by considering a Hamiltonian system with N pairs of canonical coordinates (i.e. N generalized positions and N generalized momentum). 
The time evolution of canonical coordinates is governed by the symplectic gradient of the Hamiltonian (Hand & Finch, 2008). Specifically, the time evolution of the system is governed by Hamilton's equations,

dq/dt = ∂H/∂p,  dp/dt = −∂H/∂q,  (1)

with the initial condition (q, p)|_{t=t0} = (q0, p0). In a general setting, q = (q1, q2, · · · , qN) represents the positions and p = (p1, p2, · · · , pN) denotes their momenta. The function H = H(q, p) is the Hamiltonian, which corresponds to the total energy of the system. An important feature of Hamilton's equations is that their flow is a symplectomorphism (see Appendix B for a detailed overview). The symplectic structure underpinning our proposed network architecture draws inspiration from the original research of Tao (2016) in computational physics. In Tao (2016), a generic, high-order, explicit, and symplectic time integrator was proposed to solve (1) for an arbitrary separable or nonseparable Hamiltonian H. This is implemented by considering an augmented Hamiltonian

H̄(q, p, x, y) := H_A + H_B + ω H_C,  (2)

with

H_A = H(q, y),  H_B = H(x, p),  H_C = (1/2)(‖q − x‖₂² + ‖p − y‖₂²),  (3)

in an extended phase space with symplectic two-form dq ∧ dp + dx ∧ dy, where ω is a constant that controls the binding between the original system and the artificial restraint. Notice that Hamilton's equations for H̄,

dq/dt = ∂H̄/∂p = ∂H(x, p)/∂p + ω(p − y),
dp/dt = −∂H̄/∂q = −∂H(q, y)/∂q − ω(q − x),
dx/dt = ∂H̄/∂y = ∂H(q, y)/∂y − ω(p − y),
dy/dt = −∂H̄/∂x = −∂H(x, p)/∂x + ω(q − x),  (4)

with the initial condition (q, p, x, y)|_{t=t0} = (q0, p0, q0, p0), have the same exact solution as (1) in the sense that (q, p, x, y) = (q, p, q, p). Hence, we can obtain the solution of (1) by solving (4). Furthermore, it is possible to construct high-order symplectic integrators for H̄ in (4) with explicit updates. Our model aims to learn the dynamical evolution of (q, p) in (1) by embedding (4) into the framework of NeuralODE (Chen et al., 2018). The coefficient ω acts as a regularizer, which stabilizes the numerical results (see Section 4).

3.2 NONSEPARABLE HAMILTONIAN NEURAL NETWORK

We learn the nonseparable Hamiltonian dynamics (1) by constructing the augmented system (4), from which we obtain the energy function H(q, p) by training a neural network Hθ(q, p) with parameters θ and compute the gradient ∇Hθ(q, p) by taking the in-graph gradient. For the constructed network Hθ(q, p), we integrate (4) using the second-order symplectic integrator (Tao, 2016). Specifically, we have an input layer (q, p, x, y) = (q0, p0, q0, p0) at t = t0 and an output layer (q, p, x, y) = (qn, pn, xn, yn) at t = t0 + n dt.

Algorithm 1: Integrate (4) using the second-order symplectic integrator.
Input: q0, p0, t0, t, dt; φ1^δ, φ2^δ, and φ3^δ in (5). Output: (q̂, p̂, x̂, ŷ) = (qn, pn, xn, yn).
1: (q0, p0, x0, y0) = (q0, p0, q0, p0); n = floor[(t − t0)/dt]
2: for i = 1 → n do
3:   (qi, pi, xi, yi) = φ1^{dt/2} ∘ φ2^{dt/2} ∘ φ3^{dt} ∘ φ2^{dt/2} ∘ φ1^{dt/2} (q_{i−1}, p_{i−1}, x_{i−1}, y_{i−1})
4: end for

The recursive relations of (qi, pi, xi, yi), i = 1, 2, · · · , n, are expressed by Algorithm 1 (also see Figure 8 in Appendix A). The maps φ1^δ(q, p, x, y), φ2^δ(q, p, x, y), and φ3^δ(q, p, x, y) in Algorithm 1 are

φ1^δ : (q, p, x, y) ↦ ( q,  p − δ ∂Hθ(q, y)/∂q,  x + δ ∂Hθ(q, y)/∂p,  y ),
φ2^δ : (q, p, x, y) ↦ ( q + δ ∂Hθ(x, p)/∂p,  p,  x,  y − δ ∂Hθ(x, p)/∂q ),
φ3^δ : (q, p, x, y) ↦ the map whose updated (q, p) and (x, y) are
  (q; p) ← (1/2)[ (q + x; p + y) + R_δ (q − x; p − y) ],
  (x; y) ← (1/2)[ (q + x; p + y) − R_δ (q − x; p − y) ],  (5)

respectively, where the partial derivatives of Hθ are taken with respect to its first and second argument slots. Here

R_δ := [ cos(2ωδ) I   sin(2ωδ) I ; −sin(2ωδ) I   cos(2ωδ) I ],  (6)

where I is the identity matrix. We remark that x and y are just auxiliary variables, which are theoretically equal to q and p.
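To make the update maps in (5)-(6) concrete, the sketch below implements the second-order augmented step of Algorithm 1 in PyTorch. It is an illustration, not the authors' released code: `ham` stands in for the learned network Hθ and is instantiated here with Tao's analytic example H = 0.5(q² + 1)(p² + 1); during training, the autograd calls would additionally need create_graph=True so that gradients can flow back into θ.

```python
import math
import torch

def ham(q, p):
    # Placeholder Hamiltonian (Tao's example); a trained H_theta(q, p) would go here.
    return 0.5 * (q ** 2 + 1.0) * (p ** 2 + 1.0)

def slot_grads(a, b):
    # Partial derivatives of ham with respect to its first and second argument slots,
    # evaluated at (a, b). detach() is fine for pure rollout; training needs create_graph=True.
    a = a.detach().requires_grad_(True)
    b = b.detach().requires_grad_(True)
    dHda, dHdb = torch.autograd.grad(ham(a, b).sum(), (a, b))
    return dHda, dHdb

def phi1(q, p, x, y, d):
    # Flow of H_A = H(q, y): q and y frozen; p and x updated (first map in Eq. (5)).
    dHdq, dHdy = slot_grads(q, y)
    return q, p - d * dHdq, x + d * dHdy, y

def phi2(q, p, x, y, d):
    # Flow of H_B = H(x, p): x and p frozen; q and y updated (second map in Eq. (5)).
    dHdx, dHdp = slot_grads(x, p)
    return q + d * dHdp, p, x, y - d * dHdx

def phi3(q, p, x, y, d, omega):
    # Exact flow of omega * H_C: rotate (q - x, p - y) by R_delta in Eq. (6),
    # keeping (q + x, p + y) fixed.
    c, s = math.cos(2.0 * omega * d), math.sin(2.0 * omega * d)
    sq, sp = q + x, p + y
    dq, dp = q - x, p - y
    rq, rp = c * dq + s * dp, -s * dq + c * dp
    return 0.5 * (sq + rq), 0.5 * (sp + rp), 0.5 * (sq - rq), 0.5 * (sp - rp)

def nssnn_step(q, p, x, y, dt, omega):
    # Second-order composition phi1^{dt/2} o phi2^{dt/2} o phi3^{dt} o phi2^{dt/2} o phi1^{dt/2}.
    q, p, x, y = phi1(q, p, x, y, dt / 2)
    q, p, x, y = phi2(q, p, x, y, dt / 2)
    q, p, x, y = phi3(q, p, x, y, dt, omega)
    q, p, x, y = phi2(q, p, x, y, dt / 2)
    q, p, x, y = phi1(q, p, x, y, dt / 2)
    return q, p, x, y

# Roll the augmented system forward from (q0, p0, q0, p0), as in Algorithm 1.
q, p = torch.tensor([-3.0]), torch.tensor([0.0])
x, y = q.clone(), p.clone()
for _ in range(1000):
    q, p, x, y = nssnn_step(q, p, x, y, dt=0.001, omega=10.0)
```

In the sketch, x and y are simply initialized to copies of q and p, mirroring the remark above.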
Therefore, we can use the dataset of (q, p) to construct the dataset containing the variables (q, p, x, y). In addition, by constructing the network Hθ, we show that Theorem B.1 in Appendix B holds, so the networks φ1^δ, φ2^δ, and φ3^δ in (5) preserve the symplectic structure of the system. Suppose that Φ1 and Φ2 are two symplectomorphisms. Then, it is easy to show that their composite map Φ2 ∘ Φ1 is also a symplectomorphism due to the chain rule. Thus, the symplectomorphism of Algorithm 1 is guaranteed by Theorem B.1.

4 TRAINING SETTINGS AND ABLATION TESTS

We use 6 linear layers with hidden size 64 to model Hθ, all of which are followed by a Sigmoid activation function except the last one. The derivatives ∂Hθ/∂p, ∂Hθ/∂q, ∂Hθ/∂x, ∂Hθ/∂y are all obtained by automatic differentiation in PyTorch (Paszke et al., 2019). The weights of the linear layers are initialized by Xavier initialization (Glorot & Bengio, 2010). We generate the dataset for training and validation using a high-precision numerical solver (Tao, 2016), where the ratio of training to validation data is 9 : 1. We set (q0^j, p0^j) as the start input and (q^j, p^j) as the target, with j = 1, 2, · · · , Ns, where the time span between (q0^j, p0^j) and (q^j, p^j) is Ttrain. Feeding (q0, p0) = (q0^j, p0^j), t0 = 0, t = Ttrain, and time step dt into Algorithm 1 yields the predicted variables (q̂^j, p̂^j, x̂^j, ŷ^j). Accordingly, the loss function is defined as

L_NSSNN = (1/Nb) Σ_{j=1}^{Nb} ( ‖q^(j) − q̂^(j)‖₁ + ‖p^(j) − p̂^(j)‖₁ + ‖q^(j) − x̂^(j)‖₁ + ‖p^(j) − ŷ^(j)‖₁ ),  (7)

where Nb = 512 is the batch size of the training samples. We use the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.05. The learning rate is multiplied by 0.8 every 10 epochs. Taking the system H(q, p) = 0.5(q² + 1)(p² + 1) as an example, we carry out a series of ablation tests based on our constructed networks. Normally, we set the time span, time step, and dataset size as T = 0.01, dt = 0.01, and Ns = 1280. The choice of ω in (4) is largely flexible since NSSNN is not sensitive to the parameter ω once it is larger than a certain threshold. Figure 2 shows the training and validation losses with different ω in the network trained on clean and noisy datasets. Though the convergence rates differ slightly, the runs with various ω converge to training and validation losses of similar magnitude. Here, we set ω = 2000, but ω can be smaller than 2000. The only requirement for picking ω is that it has to be larger than O(10), which is detailed in Appendix C. We pick the L1 loss function to train our network due to its better performance. Figure 3 compares the validation losses obtained with different training loss functions for the network trained on clean and noisy datasets. Figure 3(a) shows that the network trained with either L1 or MSE on a clean dataset converges to a small validation loss, but the network trained with the L1 loss converges relatively faster. Figures 3(b) and 3(c) both show that the network trained with L1 on a noisy dataset converges to a smaller validation loss. In addition, we have already introduced a regularization term in the symplectic integrator embedded in the network; thus, there is no need to add a regularization term to the loss function. The integration time step of the symplectic integrator is a vital parameter, and the choice of dt largely depends on the time span Ttrain.
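For concreteness, the training configuration described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the network follows the stated 6-layer, width-64, Sigmoid design, the loss is Eq. (7), and the optimizer and schedule match the stated Adam settings. Here `integrate_fn` is an assumed callable (e.g., the `nssnn_step` composition sketched after Eq. (6), applied n = Ttrain/dt times with the computation graph kept for backpropagation).

```python
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    # 6 linear layers, hidden size 64, Sigmoid after every layer except the last.
    def __init__(self, dof=1, hidden=64, depth=6):
        super().__init__()
        layers, in_dim = [], 2 * dof          # input is the concatenated (q, p)
        for i in range(depth - 1):
            layers += [nn.Linear(in_dim if i == 0 else hidden, hidden), nn.Sigmoid()]
        layers += [nn.Linear(hidden, 1)]
        self.net = nn.Sequential(*layers)
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)

    def forward(self, q, p):
        return self.net(torch.cat([q, p], dim=-1)).squeeze(-1)

def nssnn_loss(pred, target):
    # Eq. (7): L1 penalty on (q, p) and on the auxiliary copies (x, y).
    q_hat, p_hat, x_hat, y_hat = pred
    q_tgt, p_tgt = target
    return ((q_tgt - q_hat).abs() + (p_tgt - p_hat).abs()
            + (q_tgt - x_hat).abs() + (p_tgt - y_hat).abs()).mean()

def train(model, integrate_fn, loader, epochs=100, lr=0.05):
    # integrate_fn(model, q0, p0) -> (q_hat, p_hat, x_hat, y_hat) after time T_train.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.8)
    for _ in range(epochs):
        for (q0, p0), (q1, p1) in loader:     # start/target pairs separated by T_train
            loss = nssnn_loss(integrate_fn(model, q0, p0), (q1, p1))
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
    return model
```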
Figure 4 compares the validation losses generated by various integral time steps dt based on fixed dataset time spans Ttrain = 0.01, 0.1 and 0.2 respectively in the training process. The validation loss converges to a similar degree with various dt based on fixed Ttrain = 0.01 and Ttrain = 0.1 in 4(a) and (b), while it increases significantly as dt increases based on fixed Ttrain = 0.02 in 4(c). Thus, we should take relatively small dt for the dataset with larger time span Ttrain 5 COMPARISONS WITH OTHER METHODS 5.1 METHODOLOGIES We compare our method with other recently proposed methods, such as HNN (Greydanus et al., 2019), NeuralODE (Chen et al., 2018), TaylorNet (Tong et al., 2020), SSINN (DiPietro et al., 2020), SRNN (Chen et al., 2020), and SympNet (Jin et al., 2020). There are several features distinguishing our method from others, as shown in Table 1. HNN first enforces conservative features of a Hamiltonian system by reformulating its loss function, which incurs two main shortcomings. On the one hand, it requires the temporal derivatives of the momentum and the position of the systems to calculate the loss function, which is difficult to obtain from real-world systems. On the other hand, HNN doesn’t strictly preserve the symplectic structure, because its symplectomorphism is realized by its loss function rather than its intrinsic network architecture. NeuralODE successfully bypasses the time derivatives of the datasets by incorporating an integrator solver into the network architecture. Embedding the Hamiltonian prior into the NeuralODE, a series of methods are proposed, such as SRNN, SSINN, and TaylorNet, to predict the continuous trajectory of system variables; however, presently these methods are only designed to solve separable Hamiltonian systems. Instead of updating the continuous dynamics by integrating the neural networks in NeuralODE, SympNet adopts a symplectomorphism composed of well-designed both linear and non-linear matrices to intrinsically map the system variables within neighboring time steps. However, the parameters scale in the matrix map for training N dimensional Hamiltonian system in SympNet is O(N2), which makes it hard to generalize to the high dimensional N-body problems. For example, in Section 6, we predict the dynamic evolution of 6000 vortex particles, which is challenging for the training process of the SympNet on the level of O(60002). NSSNN overcomes the weaknesses mentioned above. Under the framework of NeuralODE, NSSNN utilizes continuously-defined dynamics in the neural networks, which gives it the capability to learn the continuous-time evolution of dynamical systems. Based on Tao (2016), NSSNN embeds the symplectic prior into the nonseparable symplectic integrator to ensure the strict symplectomorphism, thereby guaranteeing the property of long-term predictability. In addition, unlike SympNet, NSSNN is highly flexible and can be generalized to high dimensional N-body problems by involving the interaction networks (Sanchez-Gonzalez et al., 2019), which will be further discussed in Section 6. 5.2 EXPERIMENTS We compare five implementations that learn and predict Hamiltonian systems. The first one is NeuralODE, which trains the system by embedding the network fθ → (dq/dt, dp/dt) into the Runge-Kutta (RK) integrator. The other four, however, achieve the goal by fitting the Hamiltonian Hθ → H based on (1). 
Specifically, HNN trains the network with the constraint that the Hamiltonian symplectic gradient matches the time derivatives of the system variables, and then embeds the well-trained Hθ into the RK integrator for predicting the system. The third and fourth implementations are ablation tests. One of them is the improved HNN (IHNN), which embeds the well-trained Hθ into the nonseparable symplectic integrator (Tao's integrator) for prediction. The other directly embeds Hθ into the RK integrator for training, which we call HRK. The fifth method is NSSNN, which embeds Hθ into the nonseparable symplectic integrator for training. For a fair comparison, we adopt the same network structure (except that the dimension of the output layer in NeuralODE is twice that of the other four), the same L1 loss function, and the same dataset size; the precision of all integration schemes is second order, and the other parameters are kept consistent with those in Section 4. The time derivatives in the dataset for training HNN and IHNN are obtained by the first-order finite difference

dq/dt ≈ (q(Ttrain) − q(0)) / Ttrain  and  dp/dt ≈ (p(Ttrain) − p(0)) / Ttrain.  (8)

Figure 5 demonstrates the differences between the five methods using the spring system H = 0.5(q² + p²) with different time spans Ttrain = 0.4, 1 and the same time step dt = 0.2. We can see that, by introducing the nonseparable symplectic integrator into the prediction of the Hamiltonian system, NSSNN has a stronger long-term predicting ability than all the other methods. In addition, the predictions of HNN and IHNN rely on the time derivatives estimated from the dataset; consequently, this leads to a larger error when the given time span Ttrain is large. Moreover, the datasets obtained by (8) in HNN and IHNN are sensitive to noise. Figure 6 compares the predictions of (q, p) for the system H = 0.5(q² + 1)(p² + 1), where the network is trained on a dataset with noise ∼ 0.05 U(−1, 1), with time span Ttrain = 0.2 and time step dt = 0.02. In the presence of noise, NSSNN still performs well compared with the other methods. We also compare the convergence error for a series of Hamiltonian systems with different H trained with noisy data in Appendix D; NSSNN generally shows better robustness than HNN.

6 MODELING VORTEX DYNAMICS OF A MULTI-PARTICLE SYSTEM

For two-dimensional vortex particle systems, the dynamical equations of the particle positions (xj, yj), j = 1, 2, · · · , Nv, with particle strengths Γj, can be written in the generalized Hamiltonian form as

Γj dxj/dt = −∂H^p/∂yj,  Γj dyj/dt = ∂H^p/∂xj,  with  H^p = (1/4π) Σ_{j,k=1}^{Nv} Γj Γk log(|xj − xk|).  (9)

By including the given particle strengths Γj in Algorithm 1, we can still adopt the method mentioned above to learn the Hamiltonian in (9) when there are few particles. However, for a system with Nv ≫ 2 particles, the cost of collecting training data from all Nv particles might be high, and the training process can be time-consuming. Thus, instead of collecting information from all Nv particles to train our model, we only use data collected from two bodies as training data to make predictions of the dynamics of Nv particles. Specifically, we assume the interaction models between particle pairs with unit particle strengths Γj = 1 are the same, and their corresponding Hamiltonian can be represented by a network Ĥθ(xj, xk), based on which the corresponding Hamiltonian of Nv particles can be written as (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2019)

H^p_θ = Σ_{j,k=1}^{Nv} Γj Γk Ĥθ(xj, xk).  (10)
We embed (10) into the symplectic integrator that includes Γj to obtain the final network architecture (a code sketch of this pairwise construction appears after Appendix D below). The setup of the multi-particle problem is similar to the previous problems. The training time span is Ttrain = 0.01, while the prediction period can be up to Tpredict = 40. We use 2048 clean data samples to train our model. The training process takes about 100 epochs for the loss to converge. In Figure 7, we use our trained model to predict the dynamics of 6000-particle systems, including Taylor and Leapfrog vortices. We generate results for the Taylor vortex and the Leapfrog vortex using NSSNN and HNN and compare them with the ground truth. Vortex elements are used with the corresponding initial vorticity conditions of the Taylor vortex and the Leapfrog vortex (Qu et al., 2019). The difficulty in the numerical modeling of these two systems lies in keeping the different dynamical vortices separated instead of having them merge into a bigger structure. In both cases, the vortices evolved using NSSNN separate nicely as the ground truth shows, while the vortices merge together using HNN.

7 LIMITATIONS

The network with the embedded integrator is often more time-consuming to train than one trained on a dataset with time derivatives. For example, the ratio of training times of HNN and NSSNN is 1 : 3 when dt = Ttrain, and the training time of the recurrent networks further increases as dt decreases. Although a smaller dt often has higher discretization accuracy, there is a tradeoff between training cost and prediction accuracy. Additionally, a smaller dt may potentially cause gradient explosion. In this case, we may want to use the adjoint method instead. Another limitation lies in the assumption that the symplectic structure is conserved. In real-world systems, there could be dissipation that makes this assumption unsatisfied.

8 CONCLUSIONS

We incorporate a classic idea, mapping a nonseparable system to a higher-dimensional space where it becomes quasi-separable, to construct symplectic networks. With its intrinsic symplectic structure, NSSNN possesses many benefits compared with other methods. In particular, NSSNN is the first method that can learn the vortex dynamical system and accurately predict the evolution of complex vortex structures, such as Taylor and Leapfrog vortices. NSSNN, based on first principles of learning complex systems, has potential applications in the fields of physics, astronomy, and weather forecasting. We will further explore the possibilities of neural networks with inherent structure-preserving ability in fields like 3D vortex dynamics and quantum turbulence. In addition, we will also work on general applications of NSSNN with datasets based on images or other real scenes by automatically identifying the coordinate variables of Hamiltonian systems with neural networks.

ACKNOWLEDGMENTS This project is supported in part by Neukom Institute CompX Faculty Grant, Burke Research Initiation Award, and ByteDance Gift Donation. Yunjin Tong is supported by the Dartmouth Women in Science Project (WISP), Undergraduate Advising and Research Program (UGAR), and Neukom Scholars Program.

A NETWORK ARCHITECTURE

Figure 8(a) shows that the forward pass of NSSNN is composed of a forward pass through a differentiable symplectic integrator as well as a backpropagation step through the model. Figure 8(b) plots the schematic diagram of NSSNN. For the constructed network Hθ(q, p), we integrate (4) using the second-order symplectic integrator (Tao, 2016).
Specifically, The input layer of the integrator is (q,p,x,y) = (q0,p0, q0,p0) at t = t0 and the output layer is (q,p,x,y) = (qn,pn,xn,yn) at t = t0 + ndt. The recursive relations of (qi,pi,xi,yi), i = 1, 2, · · · , n, are expressed by the algorithm 1. B SYMPLECTOMORPHISMS One of the most important features of the time evolution of Hamilton’s equations is that it is a symplectomorphism, representing a transformation of phase space that is volume-preserving. In the setting of canonical coordinates, symplectomorphism means the transformation of the phase flow of a Hamiltonian system conserves the symplectic two-form dq ∧ dp ≡ N∑ j=1 (dqj ∧ dpj) , (11) where ∧ denotes the wedge product of two differential forms. The rules of wedge products can be found in Lee (2010). In the two-dimensional case, (11) can be understood as the area element of the surface. In this case, the symplectomorphism can be interpreted as the area element of the surface is constant. As proved below, our constructed network structure intrinsically preserves Hamiltonian structure. Theorem B.1. For a given δ, the mapping φδ1, φδ2, and φδ3 in (5) are symplectomorphisms. Proof. Let (tqj , t p j , t x j , t y j ) = φ δ j(q,p,x,y), j = 1, 2, 3. (12) . From the first equation of (5), we have dtq1 ∧ dt p 1 + dt x 1 ∧ dt y 1 =dq ∧ d [ p− δ ∂Hθ(q,y) ∂q ] + d [ x+ δ ∂Hθ(q,y) ∂p ] ∧ dy =dq ∧ dp+ dx ∧ dy + δ [ ∂Hθ(q,y) ∂q∂y − ∂Hθ(q,y) ∂y∂q ] dq ∧ dy =dq ∧ dp+ dx ∧ dy. (13) Similarly, we can prove that dtq2 ∧ dt p 2 + dt x 2 ∧ dt y 2 = dq ∧ dp+ dx∧ dy. In addition, from the third equation of (5), we can directly deduce that dtq3 ∧ dt p 3 + dt x 3 ∧ dt y 3 = dq ∧ dp+ dx ∧ dy. Suppose that Φ1 and Φ2 are two symplectomorphisms. Then, it is easy to show that their composite map Φ2 ◦ Φ1 is also symplectomorphism due to the chain rule. Thus, the symplectomorphism of algorithm 1 can be guaranteed by the theorem B.1. C DETERMINING COEFFICIENT ω To further elucidation, the HamiltonianHA+HB without the binding, i.e.,H with ω = 0, in extended phase space (q,p,x,y) may not be integrable, even if H(q,p) is integrable in the original phase space (q,p). However,HC is integrable. Thus, as ω increases, a larger proportion in the phase space for H corresponds to regular behaviors (Kolmogorov, 1954). For H(q, p) = (q2 + 1)(p2 + 1)/2, shown in Fig. 9, we compare the trajectories starting from [q(0), p(0), x(0), y(0)] = (−3, 0,−3, 0) calculated by the symplectic integrator (Tao, 2016) with different ω, where the calculation accuracy is second order accuracy and the time interval is 0.001. As Figs. 9(a), (b), (c), and (d) shown, the chaotic region in phase space is significantly decreasing until forming a stable limit cycle. We define = ‖(q, p)− (x, y)‖2 as the calculation error of this system, shown in Figs. (e), (f), (g), and (h) that the error is decreasing with ω increasing, which fits the quantitative results of phase trajectory well. D OTHER EXPERIMENTS We consider the pendulum, the Lotka–Volterra, the Spring, the Hénon–Heiles, the Tao’s example (Tao, 2016), the Fourier form of nonlinear Schrödinger and the vortex particle systems in our implementation. The Hamiltonian energies of these systems (except vortex particle system) are summarized as follows: Pendulum system: H(q, p) = 3(1−cos(q))+p2. Lotka–Volterra system: H(q, p) = p−ep+2q−eq . Spring system: H(q, p) = q2 + p2. Hénon–Heiles system: H(q1, q2, p1, p2) = (p21 + p22)/2 + (q21 + q22) + (q 2 1q2 − q32/3)/2. Tao’s example (Tao, 2016): H(q, p) = (q2 + 1)(p2 + 1)/2. 
Fourier form of nonlinear Schrödinger equation: H(q1, q2, p1, p2) = [ (q21 + p 2 1) 2 + (q22 + p 2 2) 2 ] /4−(q21q22 +p21p22− q21p 2 2 − p21q22 + 4q1q2p1p2). The network is trained by the dataset with noise∼ 0.1U(−1, 1). The training time span, integral time step, and validation time span are 0.01, 0.01, and 0.1, respectively. Table 2 compares the Hamiltonian deviation H = ‖H(qtruth,ptruth)−H(qpredict,ppredict)‖2/‖H(qtruth,ptruth)‖2 and the prediction error p = ‖qtruth − qpredict‖1 + ‖ptruth − ppredict‖1. It is clearly from the Table 2 that NSSNN either outperforms or has similar performances as NeuralODE and HNN do.
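As promised in Section 6, here is a minimal sketch of the pairwise construction behind Eqs. (9)-(10): the N-body Hamiltonian is assembled from a single two-body term, and the generalized Hamilton's equations are recovered by automatic differentiation. To keep the example self-contained and runnable, `pair_term` is the analytic kernel from Eq. (9); in NSSNN this function would be replaced by the learned two-body network Ĥθ.

```python
import torch

def pair_term(xj, xk):
    # Analytic two-body term from Eq. (9); NSSNN learns this function instead.
    return (1.0 / (4.0 * torch.pi)) * torch.log(torch.linalg.norm(xj - xk))

def multi_particle_hamiltonian(pos, gamma):
    # Eq. (10): H = sum over pairs of Gamma_j * Gamma_k * pair_term(x_j, x_k).
    # pos: (N, 2) particle positions, gamma: (N,) particle strengths.
    N = pos.shape[0]
    H = pos.new_zeros(())
    for j in range(N):
        for k in range(N):
            if j != k:
                H = H + gamma[j] * gamma[k] * pair_term(pos[j], pos[k])
    return H

def vortex_velocity(pos, gamma):
    # Generalized Hamilton's equations (9):
    #   Gamma_j dx_j/dt = -dH/dy_j,  Gamma_j dy_j/dt = +dH/dx_j.
    pos = pos.detach().requires_grad_(True)
    grad = torch.autograd.grad(multi_particle_hamiltonian(pos, gamma), pos)[0]
    vx = -grad[:, 1] / gamma
    vy = grad[:, 0] / gamma
    return torch.stack([vx, vy], dim=-1)

# Example: velocities of a co-rotating pair of unit-strength vortices.
pos = torch.tensor([[0.0, 0.5], [0.0, -0.5]])
gamma = torch.ones(2)
print(vortex_velocity(pos, gamma))
```

The double loop is O(Nv²) and is written for clarity only; a practical implementation would batch the pairwise evaluations.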
1. What are the strengths and weaknesses of the proposed approach in the paper regarding symplectic integrators? 2. How does the reviewer assess the novelty and significance of the work compared to prior arts in Hamiltonian Neural Networks? 3. What are some missing or unclear aspects in the experimental sections of the paper? 4. Are there any ablation studies or comparisons with other works that could enhance the understanding of the model's performance? 5. How does the reviewer evaluate the quality and clarity of the writing in different parts of the paper?
Review
Review The authors propose a variation of Hamiltonian Neural Networks (HNNs) with a built-in symplectic integrator. While built-in symplectic integrators have already been used in HNNs, the most popular symplectic integrators only work for separable Hamiltonians. The authors use Tao’s integrator (2016) at the core of their model (HSSNN) to achieve better conservation of energy in systems with non-separable Hamiltonians. Many works have experimented with training with different integrators, and in that sense this is a very incremental contribution, however, Tao’s integrator is a state of the art integrator, and I think the community will benefit from reading about it, and seeing it implemented. The introduction and model description part of the paper is clear in general and reasonably easy to follow, and does a very decent job at introducing Tao’s integrators in a very compact way. Many details for reproducibility are missing from the experimental sections of the paper (specially with respect to the implementation of the baselines), but this should be alleviated by the source code provided by the authors. The final section on the multi-particle vortices is probably the most unclear, and should be improved. Considering that this is a very empirical paper, with an incremental contributions that only make use of toy environments, it should probably do a bit more of a thorough job at really trying to ablate the model in more sophisticated ways. I think there are two important baselines missing: Performance of their model e.g. using a RK4 integrator, instead of Tao's. This would really tease apart the the effect of the symplecticity. Note this is an important ablation that is between this model and HNN (which is trained by supervising the gradients, rather than through an integrator). Training HNN using the gradients, but then integrate the learned Hamiltonian using Tao’s integrator at test time (or even better: a time adaptive version of Tao’s integrator). I would be surprised if this does not perform as well as HSSNN (Although as the authors say, requiring the gradients to train HNN is a very rigid constraint, so this would not take away from the importance of their work). Also in the multi-particle system, I am surprised the authors did not choose a Hamiltonian Graph Network (https://arxiv.org/pdf/1909.12790.pdf). I do not think at this stage I would need to see this included, but probably a mention to Hamiltonian Graph Networks in the context of multi-particle systems would be appropriate. Also, Figure 3 b) and c), seem to have weird patterns that would be interesting to try to gain more insights into because I cannot tell if, for example, the variance in Fig 3c is just noise, or something more intrinsic to the experiment. So I really don’t know what to make of plots Figure 3 b) and c). Maybe plotting them for more environments or adding some form of errors bars would help understand them better. Similarly, in Table 3, I would have expected larger differences between HNN, and HSSNN for the Hamiltonian Deviation column. Maybe it would be possible to use longer trajectories and amplify the differences more? In fact the differences in terms of conservation of energy for Tao’s environment seem to be much smaller quantitatively in Table 3 than qualitatively in Figure 4, was the example from Figure 4 cherrypicked? 
I think some additional comparison ablations, like some of those provided in https://arxiv.org/pdf/1909.12790.pdf, would really help gain more insight on the model: Try different integrators at train and test time Train on a range of time steps during training, instead of a single values. I think the paper is a bit borderline, and in its current form a lean on the side of "Weak Reject", but would be happy to raise the score, if the authors can make improvements in the axis mentioned above. Some questions I am curious about (have not affected my decision): Any particular reason to use an l1 loss, instead of l2? I wonder if this can also cause differences with the baselines which use l2. The prediction error seems to be defined also using l1. Is it not unfair to compare l1 to the baselines, when baselines are trained with l2 loss (or at least they were in their original papers)? Table 1 says that HNN requires the gradients, but if I remember correctly the HNN paper model does mention the possibility of estimating the derivatives with finite differences, which is equivalent to training through an Euler integrator. Some minor comments (have not affected my decision): The text in the experimental and results sections feels a bit rough at times, would recommend rewriting most of it, making sure that the message of each sentence is clear and unambiguous. these nonseparable systems exhibit plateau of degrees of freedom, demonstrating complexities that are orders-of-magnitude higher than separable systems (whose degree of Freedoms are typically below 10). Are not many n-body systems highly separable, yet they have much much higher numbers of degrees of freedom? Figure 1 could be clearer, the intended message of the crossed-out notation in red is not obvious. Figure 2b bit unclear. The plot implies n iteration steps, but this is not really conveyed really well. Missing “/” in line 169 Misspelled “systmes” (197) Notation in equation (7): Vectors (bold) or scalars (italics) What about the subindex, should I assume you train on one step data, and not on sequences? If the formula refers to the training loss, the limit of the sum should probably be the batch size, and not the number of training samples? The term “Strong stability” in Table 1 seems not very scientific. Maybe something like “symplectic stability” or something like that would be more specific. Table 2: Hamiltonian Deviation (Row “spring”) HNN and NSSNN seem to have the same error, so maybe both should be in bold. Equation 2016 wrong left side, H(p, q)_predict, should probably be H(p_predict, q_predict) since I assume the analytical Hamiltonian formula is used in all cases, to estimate the energy of the state. MODELING VORTEX DYNAMICS OF MULTI-PARTICLE SYSTEM section needs more clarity. For example notation in equations 8 and 9 seems inconsistent. Also lines 229 to 232 lack sufficient detail of how the generalization from 2 particles to N particles is achieved. My guess is that the authors just add up all possible pairwise interactions when moving towards systems with higher number of particles. EDIT: Updated rating after author revisions.
ICLR
Title VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation Abstract We propose a novel algorithm for offline reinforcement learning called Value Iteration with Perturbed Rewards (VIPeR), which amalgamates the pessimism principle with random perturbations of the value function. Most current offline RL algorithms explicitly construct statistical confidence regions to obtain pessimism via lower confidence bounds (LCB), which cannot easily scale to complex problems where a neural network is used to estimate the value functions. Instead, VIPeR implicitly obtains pessimism by simply perturbing the offline data multiple times with carefully-designed i.i.d. Gaussian noises to learn an ensemble of estimated state-action value functions and acting greedily with respect to the minimum of the ensemble. The estimated state-action values are obtained by fitting a parametric model (e.g., neural networks) to the perturbed datasets using gradient descent. As a result, VIPeR only needsO(1) time complexity for action selection, while LCB-based algorithms require at least Ω(K), where K is the total number of trajectories in the offline data. We also propose a novel data-splitting technique that helps remove a factor involving the log of the covering number in our bound. We prove that VIPeR yields a provable uncertainty quantifier with overparameterized neural networks and enjoys a bound on sub-optimality of Õ(κHd̃/ √ K), where d̃ is the effective dimension, H is the horizon length and κ measures the distributional shift. We corroborate the statistical and computational efficiency of VIPeR with an empirical evaluation on a wide set of synthetic and real-world datasets. To the best of our knowledge, VIPeR is the first algorithm for offline RL that is provably efficient for general Markov decision processes (MDPs) with neural network function approximation. N/A √ K), where d̃ is the effective dimension, H is the horizon length and κ measures the distributional shift. We corroborate the statistical and computational efficiency of VIPeR with an empirical evaluation on a wide set of synthetic and real-world datasets. To the best of our knowledge, VIPeR is the first algorithm for offline RL that is provably efficient for general Markov decision processes (MDPs) with neural network function approximation. 1 INTRODUCTION Offline reinforcement learning (offline RL) (Lange et al., 2012; Levine et al., 2020) is a practical paradigm of RL for domains where active exploration is not permissible. Instead, the learner can access a fixed dataset of previous experiences available a priori. Offline RL finds applications in several critical domains where exploration is prohibitively expensive or even implausible, including healthcare (Gottesman et al., 2019; Nie et al., 2021), recommendation systems (Strehl et al., 2010; Thomas et al., 2017), and econometrics (Kitagawa & Tetenov, 2018; Athey & Wager, 2021), among others. The recent surge of interest in this area and renewed research efforts have yielded several important empirical successes (Chen et al., 2021; Wang et al., 2023; 2022; Meng et al., 2021). A key challenge in offline RL is to efficiently exploit the given offline dataset to learn an optimal policy in the absence of any further exploration. 
The dominant approaches to offline RL address this challenge by incorporating uncertainty from the offline dataset into decision-making (Buckman et al., 2021; Jin et al., 2021; Xiao et al., 2021; Nguyen-Tang et al., 2022a; Ghasemipour et al., 2022; An et al., 2021; Bai et al., 2022). The main component of these uncertainty-aware approaches to offline RL is the pessimism principle, which constrains the learned policy to the offline data and leads to various lower confidence bound (LCB)-based algorithms. However, these methods are not easily extended or scaled to complex problems where neural function approximation is used to estimate the value functions. In particular, it is costly to explicitly compute the statistical confidence regions of the model or value functions if the class of function approximator is given by overparameterized neural networks. For example, constructing the LCB for neural offline contextual bandits (NguyenTang et al., 2022a) and RL (Xu & Liang, 2022) requires computing the inverse of a large covariance matrix whose size scales with the number of parameters in the neural network. This computational cost hinders the practical application of these provably efficient offline RL algorithms. Therefore, a largely open question is how to design provably computationally efficient algorithms for offline RL with neural network function approximation. In this work, we present a solution based on a computational approach that combines the pessimism principle with randomizing the value function (Osband et al., 2016; Ishfaq et al., 2021). The algorithm is strikingly simple: we randomly perturb the offline rewards several times and act greedily with respect to the minimum of the estimated state-action values. The intuition is that taking the minimum from an ensemble of randomized state-action values can efficiently achieve pessimism with high probability while avoiding explicit computation of statistical confidence regions. We learn the state-action value function by training a neural network using gradient descent (GD). Further, we consider a novel data-splitting technique that helps remove the dependence on the potentially large log covering number in the learning bound. We show that the proposed algorithm yields a provable uncertainty quantifier with overparameterized neural network function approximation and achieves a sub-optimality bound of Õ(κH5/2d̃/ √ K), where K is the total number of episodes in the offline data, d̃ is the effective dimension, H is the horizon length, and κ measures the distributional shift. We achieve computational efficiency since the proposed algorithm only needsO(1) time complexity for action selection, while LCB-based algorithms require O(K2) time complexity. We empirically corroborate the statistical and computational efficiency of our proposed algorithm on a wide set of synthetic and real-world datasets. The experimental results show that the proposed algorithm has a strong advantage in computational efficiency while outperforming LCB-based neural algorithms. To the best of our knowledge, ours is the first offline RL algorithm that is both provably and computationally efficient in general MDPs with neural network function approximation. 2 RELATED WORK Randomized value functions for RL. For online RL, Osband et al. (2016; 2019) were the first to explore randomization of estimates of the value function for exploration. 
Their approach was inspired by posterior sampling for RL (Osband et al., 2013), which samples a value function from a posterior distribution and acts greedily with respect to the sampled function. Concretely, Osband et al. (2016; 2019) generate randomized value functions by injecting Gaussian noise into the training data and fitting a model on the perturbed data. Jia et al. (2022) extended the idea of perturbing rewards to online contextual bandits with neural function approximation. Ishfaq et al. (2021) obtained a provably efficient method for online RL with general function approximation using the perturbed rewards. While randomizing the value function is an intuitive approach to obtaining optimism in online RL, obtaining pessimism from the randomized value functions can be tricky in offline RL. Indeed, Ghasemipour et al. (2022) point out a critical flaw in several popular existing methods for offline RL that update an ensemble of randomized Q-networks toward a shared pessimistic temporal difference target. In this paper, we propose a simple fix to obtain pessimism properly by updating each randomized value function independently and taking the minimum over an ensemble of randomized value functions to form a pessimistic value function. Offline RL with function approximation. Provably efficient offline RL has been studied extensively for linear function approximation. Jin et al. (2021) were the first to show that pessimistic value iteration is provably efficient for offline linear MDPs. Xiong et al. (2023); Yin et al. (2022) improved upon Jin et al. (2021) by leveraging variance reduction. Xie et al. (2021) proposed a Bellman-consistency assumption with general function approximation, which improves the bound of Jin et al. (2021) by a factor of √ d when realized to finite action space and linear MDPs. Wang et al. (2021); Zanette (2021) studied the statistical hardness of offline RL with linear function approximation via exponential lower bound, and Foster et al. (2021) suggested that only realizability and strong uniform data coverage are not sufficient for sample-efficient offline RL. Beyond linearity, some works study offline RL for general function approximation, both parametric and nonparametric. These approaches are either based on Fitted-Q Iteration (FQI) (Munos & Szepesvári, 2008; Le et al., 2019; Chen & Jiang, 2019; Duan et al., 2021a;b; Hu et al., 2021; Nguyen-Tang et al., 2022b) or the pessimism principle (Uehara & Sun, 2022; Nguyen-Tang et al., 2022a; Jin et al., 2021). While pessimism-based algorithms avoid the strong assumptions of data coverage used by FQI-based algorithms, they require an explicit computation of valid confidence regions and possibly the inverse of a large covariance matrix which is computationally prohibitive and does not scale to complex function approximation setting. This limits the applicability of pessimism-based, provably efficient offline RL to practical settings. A very recent work Bai et al. (2022) estimates the uncertainty for constructing LCB via the disagreement of bootstrapped Q-functions. However, the uncertainty quantifier is only guaranteed in linear MDPs and must be computed explicitly. We provide a more detailed discussion of our technical contribution in the context of existing literature in Section C.1. 3 PRELIMINARIES In this section, we provide basic background on offline RL and overparameterized neural networks. 
3.1 EPISODIC TIME-INHOMOGENOUS MARKOV DECISION PROCESSES (MDPS) A finite-horizon Markov decision process (MDP) is denoted as the tupleM = (S,A,P, r,H, d1), where S is an arbitrary state space, A an arbitrary action space, H the episode length, and d1 the initial state distribution. We assume that SA := |S||A| is finite but arbitrarily large, e.g., it can be as large as the total number of atoms in the observable universe ≈ 1082. Let P(S) denote the set of probability measures over S. A time-inhomogeneous transition kernel P = {Ph}Hh=1, where Ph : S × A → P(S) maps each state-action pair (sh, ah) to a probability distribution Ph(·|sh, ah). Let r = {rh}Hh=1 where rh : S × A → [0, 1] is the mean reward function at step h. A policy π = {πh}Hh=1 assigns each state sh ∈ S to a probability distribution, πh(·|sh), over the action space and induces a random trajectory s1, a1, r1, . . . , sH , aH , rH , sH+1 where s1 ∼ d1, ah ∼ πh(·|sh), sh+1 ∼ Ph(·|sh, ah). We define the state value function V πh ∈ RS and the actionstate value function Qπh ∈ RS×A at each timestep h as Qπh(s, a) = Eπ[ ∑H t=h rt|sh = s, ah = a], and V πh (s) = Ea∼π(·|s) [Qπh(s, a)], where the expectation Eπ is taken with respect to the randomness of the trajectory induced by π. Let Ph denote the transition operator defined as (PhV )(s, a) := Es′∼Ph(·|s,a)[V (s′)]. For any V : S → R, we define the Bellman operator at timestep h as (BhV )(s, a) := rh(s, a) + (PhV )(s, a). The Bellman equations are given as follows. For any (s, a, h) ∈ S ×A× [H], Qπh(s, a) = (BhV πh+1)(s, a), V πh (s) = ⟨Qπh(s, ·), πh(·|s)⟩A, V πH+1(s) = 0, where [H] := {1, 2, . . . ,H}, and ⟨·, ·⟩A denotes the summation over all a ∈ A. We define an optimal policy π∗ as any policy that yields the optimal value function, i.e. V π ∗ h (s) = supπ V π h (s) for any (s, h) ∈ S × [H]. For simplicity, we denote V π∗h and Qπ ∗ h as V ∗ h and Q ∗ h, respectively. The Bellman optimality equation can be written as Q∗h(s, a) = (BhV ∗h+1)(s, a), V ∗h (s) = max a∈A Q∗h(s, a), V ∗ H+1(s) = 0. Define the occupancy density as dπh(s, a) := P((sh, ah) = (s, a)|π) which is the probability that we visit state s and take action a at timestep h if we follow the policy π. We denote dπ ∗ h by d ∗ h. Offline regime. In the offline regime, the learner has access to a fixed dataset D = {(sth, ath, rth, sth+1)} t∈[K] h∈[H] generated a priori by some unknown behaviour policy µ = {µh}h∈[H]. Here, K is the total number of trajectories, and ath ∼ µh(·|sth), sth+1 ∼ Ph(·|sth, ath) for any (t, h) ∈ [K] × [H]. Note that we allow the trajectory at any time t ∈ [K] to depend on the trajectories at previous times. The goal of offline RL is to learn a policy π̂, based on (historical data) D, such that π̂ achieves small sub-optimality, which we define as SubOpt(π̂) := Es1∼d1 [SubOpt(π̂; s1)] , where SubOpt(π̂; s1) := V π ∗ 1 (s1)− V π̂1 (s1). Algorithm 1 Value Iteration with Perturbed Rewards (VIPeR) 1: Input: Offline data D = {(skh, akh, rkh)} k∈[K] h∈[H], a parametric function family F = {f(·, ·;W ) : W ∈ W} ⊂ {X → R} (e.g. neural networks), perturbed variances {σh}h∈[H], number of bootstraps M , regularization parameter λ, step size η, number of gradient descent steps J , and cutoff margin ψ, split indices {Ih}h∈[H] where Ih := [(H − h)K ′ + 1, . . . , (H − h+ 1)K ′] 2: Initialize ṼH+1(·)← 0 and initialize f(·, ·;W ) with initial parameter W0 3: for h = H, . . . , 1 do 4: for i = 1, . . . 
,M do 5: Sample {ξk,ih }k∈Ih ∼ N (0, σ2h) and ζih = {ζ j,i h }j∈[d] ∼ N (0, σ2hId) 6: Perturb the dataset D̃ih ← {skh, akh, rkh + Ṽh+1(skh+1) + ξ k,i h }k∈Ih ▷ Perturbation 7: Let W̃ ih ← GradientDescent(λ, η, J, D̃ih, ζih,W0) (Algorithm 2) ▷ Optimization 8: end for 9: Compute Q̃h(·, ·)← min{mini∈[M ]f(·, ·; W̃ ih), (H − h+ 1)(1 + ψ)}+ ▷ Pessimism 10: π̃h ← argmaxπh⟨Q̃h, πh⟩ and Ṽh ← ⟨Q̃h, π̃h⟩ ▷ Greedy 11: end for 12: Output: π̃ = {π̃h}h∈[H]. Notation. For simplicity, we write xth = (sth, ath) and x = (s, a). We write Õ(·) to hide logarithmic factors of the problem parameters (d,H,K,m, 1/δ) in the standard Big-Oh notation. We use Ω(·) as the standard Omega notation. We write u ≲ v if u = O(v) and write u ≳ v if v ≲ u. We write A ⪯ B iff B −A is a positive definite matrix. Id denotes the d× d identity matrix. 3.2 OVERPARAMETERIZED NEURAL NETWORKS In this paper, we consider neural function approximation setting where the state-action value function is approximated by a two-layer neural network. For simplicity, we denoteX := S×A and view it as a subset of Rd. Without loss of generality, we assume X ⊂ Sd−1 := {x ∈ Rd : ∥x∥2 = 1}. We consider a standard two-layer neural network: f(x;W, b) = 1√ m ∑m i=1 biσ(w T i x), where m is an even number, σ(·) = max{·, 0} is the ReLU activation function (Arora et al., 2018), and W = (wT1 , . . . , w T m) T ∈ Rmd. During the training, we initialize (W, b) via the symmetric initialization scheme (Gao et al., 2019) as follows: For any i ≤ m2 , wi = wm2 +i ∼ N (0, Id/d), and bm 2 +i = −bi ∼ Unif({−1, 1}).1 During the training, we optimize over W while the bi are kept fixed, thus we write f(x;W, b) as f(x;W ). Denote g(x;W ) = ∇W f(x;W ) ∈ Rmd, and let W0 be the initial parameters of W . We assume that the neural network is overparameterized, i.e, the width m is sufficiently larger than the number of samples K. Overparameterization has been shown to be effective in studying the convergence and the interpolation behaviour of neural networks (Arora et al., 2019; Allen-Zhu et al., 2019; Hanin & Nica, 2020; Cao & Gu, 2019; Belkin, 2021). Under such an overparameterization regime, the dynamics of the training of the neural network can be captured using the framework of the neural tangent kernel (NTK) (Jacot et al., 2018). 4 ALGORITHM In this section, we present the proposed algorithm called Value Iteration with Perturbed Rewards, or VIPeR; see Algorithm 1 for the pseudocode. The key idea underlying VIPeR is to train a parametric model (e.g., a neural network) on a perturbed-reward dataset several times and act pessimistically by picking the minimum over an ensemble of estimated state-action value functions. In particular, at each timestep h ∈ [H], we drawM independent samples of zero-mean Gaussian noise with variance σh. We use these samples to perturb the sum of the observed rewards, rkh, and the estimated value function with a one-step lookahead, i.e., Ṽh+1(skh+1) (see Line 6 of Algorithm 1). The weights W̃ i h are then updated by minimizing the perturbed regularized squared loss on {D̃ih}i∈[M ] using gradient descent (Line 7). We pick the value function pessimistically by selecting the minimum over the finite ensemble. The chosen value function is truncated at (H − h+ 1)(1 + ψ) (see Line 9), where 1This symmetric initialization scheme makes f(x;W0) = 0 and ⟨g(x;W0),W0⟩ = 0 for any x. ψ ≥ 0 is a small cutoff margin (more on this when we discuss the theoretical analysis). 
The returned policy is greedy with respect to the truncated pessimistic value function (see Line 10). Algorithm 2 GradientDescent(λ, η, J, D̃ih, ζih,W0) 1: Input: Regularization parameter λ, step size η, number of gradient descent steps J , perturbed dataset D̃ih = {skh, akh, rkh + Ṽh+1(s k h+1) + ξ t,i h }k∈Ih , regularization per- turber ζih, initial parameter W0 2: L(W ) := 12 ∑ k∈Ih(f(s k h, a k h;W ) − (rkh + Ṽh+1(s k h+1) + ξ k,i h )) 2 + λ2 ∥W + ζ i h −W0∥22 3: for j = 0, . . . , J − 1 do 4: Wj+1 ←Wj − η∇L(Wj) 5: end for 6: Output: WJ . It is important to note that we split the trajectory indices [K] evenly into H disjoint buckets [K] = ∪h∈[H]Ih, where Ih = [(H − h)K ′ + 1, . . . , (H − h + 1)K ′] for K ′ := ⌊K/H⌋2, as illustrated in Figure 1. The estimated Q̃h is thus obtained only from the offline data with (trajectory) indices from Ih along with Ṽh+1. This novel design removes the data dependence structure in offline RL with function approximation (Nguyen-Tang et al., 2022b) and avoids a factor involving the log of the covering number in the bound on the sub-optimality of Algorithm 1, as we show in Section D.1. To deal with the non-linearity of the underlying MDP, we use a two-layer fully connected neural network as the parametric function family F in Algorithm 1. In other words, we approximate the state-action values: f(x;W ) = 1√ m ∑m i=1 biσ(w T i x), as described in Section 3.2. We use two-layer neural networks to simplify the computational analysis. We utilize gradient descent to train the state-action value functions {f(·, ·; W̃ ih)}i∈[M ], on perturbed rewards. The use of gradient descent is for the convenience of computational analysis, and our results can be extended to stochastic gradient descent by leveraging recent advances in the theory of deep learning (Allen-Zhu et al., 2019; Cao & Gu, 2019), albeit with a more involved analysis. Existing offline RL algorithms utilize estimates of statistical confidence regions to achieve pessimism in the offline setting. Explicitly constructing these confidence bounds is computationally expensive in complex problems where a neural network is used for function approximation. For example, the lower-confidence-bound-based algorithms in neural offline contextual bandits (NguyenTang et al., 2022a) and RL (Xu & Liang, 2022) require computing the inverse of a large covariance matrix with the size scaling with the number of network parameters. This is computationally prohibitive in most practical settings. Algorithm 1 (VIPeR) avoids such expensive computations while still obtaining provable pessimism and guaranteeing a rate of Õ( 1√ K ) on the sub-optimality, as we show in the next section. 5 SUB-OPTIMALITY ANALYSIS Next, we provide a theoretical guarantee on the sub-optimality of VIPeR for the function approximation class, F , represented by (overparameterized) neural networks. Our analysis builds on the recent advances in generalization and optimization of deep neural networks (Arora et al., 2019; Allen-Zhu et al., 2019; Hanin & Nica, 2020; Cao & Gu, 2019; Belkin, 2021) that leverage the observation that the dynamics of the neural parameters learned by (stochastic) gradient descent can be captured by the corresponding neural tangent kernel (NTK) space (Jacot et al., 2018) when the network is overparameterized. Next, we recall some definitions and state our key assumptions, formally. Definition 1 (NTK (Jacot et al., 2018)). 
The NTK kernel Kntk : X × X → R is defined as Kntk(x, x ′) = Ew∼N (0,Id/d)⟨xσ ′(wTx), x′σ′(wTx′)⟩, where σ′(u) = 1{u ≥ 0}. 2Without loss of generality, we assume K/H ∈ N. Let Hntk denote the reproducing kernel Hilbert space (RKHS) induced by the NTK, Kntk. SinceKntk is a universal kernel (Ji et al., 2020), we have that Hntk is dense in the space of continuous functions on (a compact set) X = S ×A (Rahimi & Recht, 2008). Definition 2 (Effective dimension). For any h ∈ [H], the effective dimension of the NTK matrix on data {xkh}k∈Ih is defined as d̃h := logdet(IK′ +Kh/λ) log(1 +K ′/λ) , where Kh := [Kntk(xih, x j h)]i,j∈Ih is the Gram matrix of Kntk on the data {xkh}k∈Ih . We further define d̃ := maxh∈[H] d̃h. Remark 1. Intuitively, the effective dimension d̃h measures the number of principal dimensions over which the projection of the data {xkh}k∈Ih in the RKHSHntk is spread. It was first introduced by Valko et al. (2013) for kernelized contextual bandits and was subsequently adopted by Yang & Wang (2020) and Zhou et al. (2020) for kernelized RL and neural contextual bandits, respectively. The effective dimension is data-dependent and can be bounded by d̃ ≲ K ′(d+1)/(2d) in the worst case (see Section B for more details).3 Definition 3 (RKHS of the infinite-width NTK). Define Q∗ := {f(x) = ∫ Rd c(w) Txσ′(wTx)dw : supw ∥c(w)∥2 p0(w) < B}, where c : Rd → Rd is any function, p0 is the probability density function of N (0, Id/d), and B is some positive constant. We make the following assumption about the regularity of the underlying MDP under function approximation. Assumption 5.1 (Completeness). For any V : S → [0, H + 1] and any h ∈ [H], BhV ∈ Q∗.4 Assumption 5.1 ensures that the Bellman operator Bh can be captured by an infinite-width neural network. This assumption is mild as Q∗ is a dense subset of Hntk (Gao et al., 2019, Lemma C.1) when B = ∞, thus Q∗ is an expressive function class when B is sufficiently large. Moreover, similar assumptions have been used in many prior works on provably efficient RL with function approximation (Cai et al., 2019; Wang et al., 2020; Yang et al., 2020; Nguyen-Tang et al., 2022b). Next, we present a bound on the suboptimality of the policy π̃ returned by Algorithm 1. Recall that we use the initialization scheme described in Section 3.2. Fix any δ ∈ (0, 1). Theorem 1. Let σh = σ := 1 + λ 1 2B + (H + 1) [ d̃ log(1 +K ′/λ) + 2 + 2 log(3H/δ) ] 1 2 . Let m = poly(K ′, H, d,B, d̃, λ, δ) be some high-order polynomial of the problem parameters, λ = 1 + HK , η ≲ (λ +K ′)−1, J ≳ K ′ log(K ′(H √ d̃ + B)), ψ = 1, and M = log HSAδ / log 1 1−Φ(−1) , where Φ(·) is the cumulative distribution function of the standard normal distribution. Then, under Assumption 5.1, with probability at least 1−MHm−2 − 2δ, for any s1 ∈ S, we have that SubOpt(π̃; s1) ≤ σ(1 + √ 2 log(MSAH/δ)) · Eπ∗ [ H∑ h=1 ∥g(sh, ah;W0)∥Λ−1h ] + Õ( 1 K ′ ) where Λh := λImd + ∑ k∈Ih g(s k h, a k h;W0)g(s k h, a k h;W0) T ∈ Rmd×md. Remark 2. Theorem 1 shows that the randomized design in our proposed algorithm yields a provable uncertainty quantifier even though we do not explicitly maintain any confidence regions in the algorithm. The implicit pessimism via perturbed rewards introduces an extra factor of 1 + √ 2 log(MSAH/δ) into the confidence parameter β. We build upon Theorem 1 to obtain an explicit bound using the following data coverage assumption. Assumption 5.2 (Optimal-Policy Concentrability). ∃κ <∞, sup(h,sh,ah) d∗h(sh,ah) dµh(sh,ah) ≤ κ. 
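To ground the effective dimension d̃ that enters the noise scale σ in Theorem 1, the sketch below computes the empirical NTK Gram matrix K_h and d̃_h of Definition 2 for unit-norm inputs. It uses the standard closed form K_ntk(x, x') = ⟨x, x'⟩(π − arccos⟨x, x'⟩)/(2π) for the expectation in Definition 1; that identity, and the synthetic data, are included only to keep the snippet self-contained.

```python
import numpy as np


def ntk_gram(X: np.ndarray) -> np.ndarray:
    """X: (K', d) array of unit-norm state-action features x_h^k."""
    S = np.clip(X @ X.T, -1.0, 1.0)                    # pairwise inner products
    return S * (np.pi - np.arccos(S)) / (2.0 * np.pi)  # K_ntk(x, x') for x, x' on the sphere


def effective_dimension(X: np.ndarray, lam: float = 1.0) -> float:
    """d_tilde_h = logdet(I + K_h / lam) / log(1 + K' / lam) (Definition 2)."""
    Kp = X.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(Kp) + ntk_gram(X) / lam)
    return logdet / np.log(1.0 + Kp / lam)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))
    X /= np.linalg.norm(X, axis=1, keepdims=True)      # project features onto S^{d-1}
    print(f"d_tilde_h ~ {effective_dimension(X):.2f}")
```

Because d̃_h depends only on the Gram matrix of the offline inputs, it can be evaluated directly from data, which is what makes the bound in Theorem 1 data-dependent.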
3Note that this is the worst-case bound, and the effective dimension can be significantly smaller in practice. 4We consider V : S → [0, H + 1] instead of V : S → [0, H] due to the cutoff margin ψ in Algorithm 1. Assumption 5.2 requires any positive-probability trajectory induced by the optimal policy to be covered by the behavior policy. This data coverage assumption is significantly milder than the uniform coverage assumptions in many FQI-based offline RL algorithms (Munos & Szepesvári, 2008; Chen & Jiang, 2019; Nguyen-Tang et al., 2022b) and is common in pessimism-based algorithms (Rashidinejad et al., 2021; Nguyen-Tang et al., 2022a; Chen & Jiang, 2022; Zhan et al., 2022). Theorem 2. For the same parameter settings and the same assumption as in Theorem 1, we have that with probability at least 1−MHm−2 − 5δ, SubOpt(π̃) ≤ 2σ̃κH√ K ′ √2d̃ log(1 +K ′/λ) + 1 + √ log Hδ λ + 16H 3K ′ log log2(K ′H) δ + Õ( 1 K ′ ), where σ̃ := σ(1 + √ 2 log(SAH/δ)). Remark 3. Theorem 2 shows that with appropriate parameter choice, VIPeR achieves a suboptimality of Õ ( κH3/2 √ d̃·max{B,H √ d̃}√ K ) . Compared to Yang et al. (2020), we improve by a factor of K 2 dγ−1 for some γ ∈ (0, 1) at the expense of √ H . When realized to a linear MDP in Rdlin , d̃ = dlin and our bound reduces into Õ ( κH5/2dlin√ K ) which improves the bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) by a factor of √ dlin. We provide the result summary and comparison in Table 1 and give a more detailed discussion in Subsection B.1. 6 EXPERIMENTS In this section, we empirically evaluate the proposed algorithm VIPeR against several state-of-the-art baselines, including (a) PEVI (Jin et al., 2021), which explicitly constructs lower confidence bound (LCB) for pessimism in a linear model (thus, we rename this algorithm as LinLCB for convenience in our experiments); (b) NeuraLCB (Nguyen-Tang et al., 2022a) which explicitly constructs an LCB using neural network gradients; (c) NeuraLCB (Diag), which is NeuraLCB with a diagonal approximation for estimating the confidence set as suggested in NeuraLCB (Nguyen-Tang et al., 2022a); (d) Lin-VIPeR which is VIPeR realized to the linear function approximation instead of neural network function approximation; (e) NeuralGreedy (LinGreedy, respectively) which uses neural networks (linear models, respectively) to fit the offline data and act greedily with respect to the estimated state-action value functions without any pessimism. Note that when the parametric class, F , in Algorithm 1 is that of neural networks, we refer to VIPeR as Neural-VIPeR. We do not utilize data splitting in the experiments. We provide further algorithmic details of the baselines in Section H. We evaluate all algorithms in two problem settings: (1) the underlying MDP is a linear MDP whose reward functions and transition kernels are linear in some known feature map (Jin et al., 2020), and (2) the underlying MDP is non-linear with horizon length H = 1 (i.e., non-linear contextual bandits) (Zhou et al., 2020), where the reward function is either synthetic or constructed from MNIST dataset (LeCun et al., 1998). We also evaluate (a variant of) our algorithm and show its strong performance advantage in the D4RL benchmark (Fu et al., 2020) in Section A.3. 
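Since Lin-VIPeR is simply Algorithm 1 instantiated with a linear model f(x; W) = ⟨ϕ(x), W⟩, each perturbed regression in Algorithm 2 can be solved in closed form instead of by gradient descent. The sketch below illustrates this; taking W0 = 0 and assuming a precomputed feature matrix `Phi` are simplifications on our part, and all names are placeholders.

```python
import numpy as np


def lin_viper_step(Phi, rewards, v_next, M=10, sigma=1.0, lam=0.01, seed=0):
    """One timestep of Lin-VIPeR: an ensemble of perturbed ridge regressions."""
    rng = np.random.default_rng(seed)
    Kp, d = Phi.shape
    A = Phi.T @ Phi + lam * np.eye(d)
    weights = []
    for _ in range(M):
        xi = sigma * rng.normal(size=Kp)          # reward perturbation
        zeta = sigma * rng.normal(size=d)         # regularization perturbation
        y = rewards + v_next + xi
        # argmin_W 0.5 * ||Phi W - y||^2 + 0.5 * lam * ||W + zeta||^2
        weights.append(np.linalg.solve(A, Phi.T @ y - lam * zeta))
    W = np.stack(weights)                         # (M, d) ensemble of weight vectors

    def q_tilde(phi_x, cap):
        """Pessimistic, truncated value at feature vector(s) phi_x."""
        return np.clip((phi_x @ W.T).min(axis=-1), 0.0, cap)

    return q_tilde
```

The only per-member training cost is a d x d linear solve, and action selection reduces to a dot product followed by a minimum over M scalars.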
We implemented all algorithms in Pytorch (Paszke et al., 2019) on a server with Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz, 755G RAM, and one NVIDIA Tesla V100 Volta GPU Accelerator 32GB Graphics Card.5 6.1 LINEAR MDPS We first test the effectiveness of pessimism implicit in VIPeR (Algorithm 1). To that end, we construct a hard instance of linear MDPs (Yin et al., 2022; Min et al., 2021); due to page limitation, we defer the details of our construction to Section A.1. We test for different values of H ∈ {20, 30, 50, 80} and report the sub-optimality of LinLCB, Lin-VIPeR, and LinGreedy, averaged over 30 runs, in Figure 2. We find that LinGreedy, which is uncertainty-agnostic, fails to learn from offline data and has poor performance in terms of sub-optimality when compared to pessimism-based algorithms LinLCB and Lin-VIPeR. Further, LinLCB outperforms Lin-VIPeR when K is smaller than 400, but the performance of the two algorithms matches for larger sample sizes. Unlike LinLCB, Lin-VIPeR does not construct any confidence regions or require computing and inverting large (covariance) matrices. The Y-axis is in log scale; thus, Lin-VIPeR already has small sub-optimality in the first K ≈ 400 samples. These show the effectiveness of the randomized design for pessimism implicit in Algorithm 1. 6.2 NEURAL CONTEXTUAL BANDITS Next, we compare the performance and computational efficiency of various algorithms against VIPeR when neural networks are employed. For simplicity, we consider contextual bandits, a special case of MDPs with horizon H = 1. Following Zhou et al. (2020); Nguyen-Tang et al. (2022a), we use the bandit problems specified by the following reward functions: (a) r(s, a) = cos(3sT θa); (b) r(s, a) = exp(−10(sT θa)2), where s and θa are generated uniformly at random from the unit sphere Sd−1 with d = 16 and A = 10; (c) MNIST, where r(s, a) = 1 if a is the true label of the input image s and r(s, a) = 0, otherwise. To predict the value of different actions from the same state s using neural networks, we transform a state s ∈ Rd into dA-dimensional vectors s(1) = (s, 0, . . . , 0), s(2) = (0, s, 0, . . . , 0), . . . , s(A) = (0, . . . , 0, s) and train the network to map s(a) to r(s, a) given a pair of data (s, a). For Neural-VIPeR, NeuralGreedy, NeuraLCB, and NeuraLCB (Diag), we use the same neural network architecture with two hidden layers of width m = 64 and train the network with Adam optimizer (Kingma & Ba, 2015). Due to page limitations, we defer other experimental details and hyperparameter setting to Section A.2. We report the 5Our code is available here: https://github.com/thanhnguyentang/neural-offline-rl. sub-optimality averaged over 5 runs in Figure 3. We see that algorithms that use a linear model, i.e., LinLCB and Lin-VIPeR significantly underperform neural-based algorithms, i.e., NeuralGreedy, NeuraLCB, NeuraLCB (Diag) and Neural-VIPeR, attesting to the crucial role neural representations play in RL for non-linear problems. It is also interesting to observe from the experimental results that NeuraLCB does not always outperform its diagonal approximation, NeuraLCB (Diag) (e.g., in Figure 3(b)), putting a question mark on the empirical effectiveness of NTK-based uncertainty for offline RL. Finally, Neural-VIPeR outperforms all algorithms in the tested benchmarks, suggesting the effectiveness of our randomized design with neural function approximation. Figure 4 shows the average runtime for action selection of neural-based algorithms NeuraLCB, NeuraLCB (Diag), and Neural-VIPeR. 
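Before turning to the runtime comparison in Figure 4, we note that the synthetic bandit instances and the block features s^(a) described above can be generated in a few lines; the sketch below is illustrative and the names are placeholders.

```python
import numpy as np

d, A = 16, 10
rng = np.random.default_rng(0)


def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)


theta = unit(rng.normal(size=(A, d)))             # one theta_a per action, on the unit sphere


def reward(s, a, kind="cosine"):
    z = s @ theta[a]
    return np.cos(3.0 * z) if kind == "cosine" else np.exp(-10.0 * z ** 2)


def disjoint_features(s, a):
    """s^(a) = (0, ..., 0, s, 0, ..., 0) in R^{dA}, so one network scores all actions."""
    x = np.zeros(d * A)
    x[a * d:(a + 1) * d] = s
    return x


s = unit(rng.normal(size=d))                      # a random context
print(reward(s, a=3), disjoint_features(s, a=3).shape)
```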
We observe that algorithms that use explicit confidence regions, i.e., NeuraLCB and NeuraLCB (Diag), take significant time selecting an action when either the number of offline samples K or the network width m increases. This is perhaps not surprising because NeuraLCB and NeuraLCB (Diag) need to compute the inverse of a large covariance matrix to sample an action and maintain the confidence region for each action per state. The diagonal approximation significantly reduces the runtime of NeuraLCB, but the runtime still scales with the number of samples and the network width. In comparison, the runtime for action selection for Neural-VIPeR is constant. Since NeuraLCB, NeuraLCB (Diag), and Neural-VIPeR use the same neural network architecture, the runtime spent training one model is similar. The only difference is that Neural-VIPeR trains M models while NeuraLCB and NeuraLCB (Diag) train a single model. However, as the perturbed data in Algorithm 1 are independent, trainingM models in Neural-VIPeR is embarrassingly parallelizable. Finally, in Figure 5, we study the effect of the ensemble size on the performance of Neural-VIPeR. We use different values of M ∈ {1, 2, 5, 10, 20, 30, 50, 100, 200} for sample size K = 1000. We find that the sub-optimality of Neural-VIPeR decreases graciously as M increases. Indeed, the grid search from the previous experiment in Figure 3 also yields M = 10 and M = 20 from the search space M ∈ {1, 10, 20} as the best result. This suggests that the ensemble size can also play an important role as a hyperparameter that can determine the amount of pessimism needed in a practical setting. 7 CONCLUSION We propose a novel algorithmic approach for offline RL that involves randomly perturbing value functions and pessimism. Our algorithm eliminates the computational overhead of explicitly maintaining a valid confidence region and computing the inverse of a large covariance matrix for pessimism. We bound the suboptimality of the proposed algorithm as Õ ( κH5/2d̃/ √ K ) . We support our theoretical claims of computational efficiency and the effectiveness of our algorithm with extensive experiments. ACKNOWLEDGEMENTS This research was supported, in part, by DARPA GARD award HR00112020004, NSF CAREER award IIS-1943251, an award from the Institute of Assured Autonomy, and Spring 2022 workshop on “Learning and Games” at the Simons Institute for the Theory of Computing. A EXPERIMENT DETAILS A.1 LINEAR MDPS In this subsection, we provide further details to the experiment setup used in Subsection 6.1. We describe in detail a variant of the hard instance of linear MDPs (Yin et al., 2022) used in our experiment. The linear MDP has S = {0, 1},A = {0, 1, · · · , 99}, and the feature dimension d = 10. Each action a ∈ [99] = {1, . . . , 99} is represented by its binary encoding vector ua ∈ R8 with entry being either −1 or 1. The feature mapping ϕ(s, a) is given by ϕ(s, a) = [uTa , δ(s, a), 1− δ(s, a)]T ∈ R10, where δ(s, a) = 1 if (s, a) = (0, 0) and δ(s, a) = 0 otherwise. The true measure νh(s) is given by νh(s) = [0, · · · , 0, (1 − s) ⊕ αh, s ⊕ αh] where {αh}h∈[H] ∈ {0, 1}H are generated uniformly at random and ⊕ is the XOR operator. We define θh = [0, · · · , 0, r, 1 − r]T ∈ R10 where r = 0.99. Recall that the transition follows Ph(s′|s, a) = ⟨ϕ(s, a), νh(s′)⟩ and the mean reward rh(s, a) = ⟨ϕ(s, a), θh⟩. We generated a priori K ∈ {1, . . . 
, 1000} trajectories using the behavior policy µ, where for any h ∈ [H] we set µh(0|0) = p, µh(1|0) = 1 − p, µh(a|0) = 0,∀a > 1;µh(0|1) = p, µh(a|1) = (1− p)/99,∀a > 0, where we set p = 0.6. We run over K ∈ {1, . . . , 1000} and H ∈ {20, 30, 50, 80}. We set λ = 0.01 for all algorithms. For Lin-VIPeR, we grid searched σh = σ ∈ {0.0, 0.1, 0.5, 1.0, 2.0} and M ∈ {1, 2, 10, 20}. For LinLCB, we grid searched its uncertainty multiplier β ∈ {0.1, 0.5, 1, 2}. The sub-optimality metric is used to compare algorithms. For each H ∈ {20, 30, 50, 80}, each algorithm was executed for 30 times and the averaged results (with std) are reported in Figure 2. A.2 NEURAL CONTEXTUAL BANDITS In this subsection, we provide in detail the experimental and hyperparameter setup in our experiment in Subsection 6.2. For Neural-VIPeR, NeuralGreedy, NeuraLCB and NeuraLCB (Diag), we use the same neural network architecture with two hidden layers whose width m = 64, train the network with Adam optimizer (Kingma & Ba, 2015) with learning rate being grid-searched over {0.0001, 0.001, 0.01} and batch size of 64. For NeuraLCB, NeuraLCB (Diag), and LinLCB, we grid-searched β over {0.001, 0.01, 0.1, 1, 5, 10}. For Neural-VIPeR and Lin-VIPeR, we gridsearched σh = σ over {0.001, 0.01, 0.1, 1, 5, 10} andM over {1, 10, 20}. We did not run NeuraLCB in MNIST as the inverse of a full covariance matrix in this case is extremely expensive. We fixed the regularization parameter λ = 0.01 for all algorithms. Offline data is generated by the (1−ϵ)-optimal policy which generates non-optimal actions with probability ϵ and optimal actions with probability 1 − ϵ. We set ϵ = 0.5 in our experiments. To estimate the expected sub-optimality, we randomly obtain 1, 000 novel samples (i.e. not used in training) to compute the average sub-optimality and keep these same samples for all algorithms. A.3 EXPERIMENT IN D4RL BENCHMARK In this subsection, we evaluate the effectiveness of the reward perturbing design of VIPeR in the Gym domain in the D4RL benchmark (Fu et al., 2020). The Gym domain has three environments (HalfCheetah, Hopper, and Walker2d) with five datasets (random, medium, medium-replay, medium-expert, and expert), making up 15 different settings. Design. To adapt the design of VIPeR to continuous control, we use the actor-critic framework. Specifically, we have M critics {Qθi}i∈[M ] and one actor πϕ, where {θi}i∈[M ] and ϕ are the learnable parameters for the critics and actor, respectively. Note that in the continuous domain, we consider discounted MDP with discount factor γ, instead of finite-time episode MDP as we initially considered in our setting in the main paper. In the presence of the actor πϕ, there are two modifications to Algorithm 1. The first modification is that when training the critics {Qiθ}i∈[M ], we augment the training loss in Algorithm 2 with a new penalization term. Specifically, the critic loss for Qθi on a training sample τ := (s, a, r, s′) (sampled from the offline data D) is L(θi; τ) = (Qθi(s, a)− (r + γQθ̄i(s′) + ξ)) 2 + β Ea′∼πϕ(·|s) [ (Qθi(s, a ′)− Q̄(s, a′))2 ]︸ ︷︷ ︸ penalization term R(θi;s,ϕ) , (1) where θ̄i has the same value of the current θi but is kept fixed, Q̄ = 1M ∑M i=1Qθi and ξ ∼ N (0, σ2) is Gaussian noise, and β is a penalization parameter (note that β here is totally different from the β in Theorem 1). The penalization term R(θi; s, ϕ) discourages overestimation in the value function estimate Qθi for out-of-distribution (OOD) actions a′ ∼ πϕ(·|s). 
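To make Eq. (1) concrete, the following is a minimal PyTorch sketch of the per-batch critic losses for the ensemble. The callables `critics`, `target_critics` (frozen copies playing the role of θ̄_i), and `actor` (returning a sampled action) are placeholders, and using an actor-sampled action inside the bootstrapped target is our assumption about an implementation detail not spelled out above.

```python
import torch


def critic_losses(critics, target_critics, actor, s, a, r, s_next,
                  gamma=0.99, sigma=0.01, beta=0.5):
    """One loss per ensemble member, following Eq. (1)."""
    with torch.no_grad():
        a_next = actor(s_next)                            # action for the bootstrapped target
        a_ood = actor(s)                                  # a' ~ pi_phi(.|s): OOD action sample
        q_bar = torch.stack([q(s, a_ood) for q in critics]).mean(0)  # ensemble mean Q-bar
    losses = []
    for q, q_tgt in zip(critics, target_critics):
        with torch.no_grad():
            xi = sigma * torch.randn_like(r)              # Gaussian reward perturbation
            y = r + gamma * q_tgt(s_next, a_next) + xi    # perturbed pseudo target
        td = (q(s, a) - y) ** 2                           # TD term of Eq. (1)
        ood = beta * (q(s, a_ood) - q_bar) ** 2           # OOD penalization R(theta_i; s, phi)
        losses.append((td + ood).mean())
    return losses
```

Each loss is then minimized with respect to its own critic parameters, so, as in the discrete case, every ensemble member sees an independently perturbed target.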
Our design of R(θi; s, ϕ) is initially inspired by the OOD penalization in Bai et al. (2022) that creates a pessimistic pseudo target for the values at OOD actions. Note that we do not need any penalization for OOD actions in our experiment for contextual bandits in Section 6.2. This is because in the contextual bandit setting in Section 6.2 the action space is finite and not large, thus the offline data often sufficiently cover all good actions. In the continuous domain such as the Gym domain of D4RL, however, it is almost certain that there are actions that are not covered by the offline data since the action space is continuous. We also note that the inclusion of the OOD action penalization term R(θi; s, ϕ) in this experiment does not contradict our guarantee in Theorem 1 since in the theorem we consider finite action space while in this experiment we consider continuous action space. We argue that the inclusion of some regularization for OOD actions (e.g., R(θi; s, ϕ)) is necessary for the continuous domain. 6 The second modification to Algorithm 1 for the continuous domain is the actor training, which is the implementation of policy extraction in line 10 of Algorithm 1. Specifically, to train the actor πϕ given the ensemble of critics {Qiθ}i∈[M ], we use soft actor update in Haarnoja et al. (2018) via max ϕ { Es∼D,a′∼πϕ(·|s) [ min i∈[M ] Qθi(s, a ′)− log πϕ(a′|s) ]} , (2) which is trained using gradient ascent in practice. Note that in the discrete action domain, we do not need such actor training as we can efficiently extract the greedy policy with respect to the estimated action-value functions when the action space is finite. Also note that we do not use data splitting and value truncation as in the original design of Algorithm 1. Hyperparameters. For the hyper-parameters of our training, we set M = 10 and the noise variance σ = 0.01. For β, we decrease it from 0.5 to 0.2 by linear decay for the first 50K steps and exponential decay for the remaining steps. For the other hyperparameters of actor-critic training, we fix them the same as in Bai et al. (2022). Specifically, the Q-network is the fully connected neural network with three hidden layers all of which has 256 neurons. The learning rate for the actor and the critic are 10−4 and 3× 10−4, respectively. The optimizer is Adam. Results. We compare VIPeR with several state-of-the-art algorithms, including (i) BEAR (Kumar et al., 2019) that use MMD distance to constraint policy to the offline data, (ii) UWAC (Wu et al., 2021) that improves BEAR using dropout uncertainty, (iii) CQL (Kumar et al., 2020) that minimizes Q-values of OOD actions, (iv) MOPO (Yu et al., 2020) that uses model-based uncertainty via ensemble dynamics, (v) TD3-BC (Fujimoto & Gu, 2021) that uses adaptive behavior cloning, and (vi) PBRL (Bai et al., 2022) that use uncertainty quantification via disagreement of bootstrapped Q-functions. We follow the evaluation protocol in Bai et al. (2022). We run our algorithm for five seeds and report the average final evaluation scores with standard deviation. We report the scores of our method and the baselines in Table 2. We can see that our method has a strong advantage of good performance (highest scores) in 11 out of 15 settings, and has good stability (small std) in all settings. Overall, we also have the strongest average scores aggregated over all settings. B EXTENDED DISCUSSION Here we provide extended discussion of our result. 
B.1 COMPARISON WITH OTHER WORKS AND DISCUSSION We provide further discussion regarding comparison with other works in the literature. 6In our experiment, we also observe that without this penalization term, the method struggles to learn any good policy. However, using only the penalization term without the first term in Eq. (1), we observe that the method cannot learn either. Comparing to Jin et al. (2021). When the underlying MDP reduces into a linear MDP, if we use the linear model as the plug-in parametric model in Algorithm 1, our bound reduces into Õ ( κH5/2dlin√ K ) which improves the bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) by a factor of √ dlin and worsen by a factor of √ H due to the data splitting. Thus, our bound is more favorable in the linear MDPs with high-dimensional features. Moreover, our bound is guaranteed in more practical scenarios where the offline data can have been adaptively generated and is not required to uniformly cover the state-action space. The explicit bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) is obtained under the assumption that the offline data have uniform coverage and are generated independently on the episode basis. Comparing to Yang et al. (2020). Though Yang et al. (2020) work in the online regime, it shares some part of the literature with our work in function approximation for RL. Besides different learning regimes (offline versus online), we offer three key distinctions which can potentially be used in the online regime as well: (i) perturbed rewards, (ii) optimization, and (iii) data split. Regarding (i), our perturbed reward design can be applied to online RL with function approximation to obtain a provably efficient online RL that is computationally efficient and thus remove the need of maintaining explicit confidence regions and performing the inverse of a large covariance matrix. Regarding (ii), we incorporate the optimization analysis into our algorithm which makes our algorithm and analysis more practical. We also note that unlike (Yang et al., 2020), we do not make any assumption on the eigenvalue decay rate of the empirical NTK kernel as the empirical NTK kernel is data-dependent. Regarding (iii), our data split technique completely removes the factor√ logN∞(H, 1/K,B) in the bound at the expense of increasing the bound by a factor of √ H . In complex models, such log covering number can be excessively larger than the horizon H , making the algorithm too optimistic in the online regime (optimistic in the offline regime, respectively). For example, the target function class is RKHS with a γ-polynomial decay, the log covering number scales as (Yang et al., 2020, Lemma D1),√ logN∞(H, 1/K,B) ≲ K 2 αγ−1 , for some α ∈ (0, 1). In the case of two-layer ReLU NTK, γ = d (Bietti & Mairal, 2019), thus√ logN∞(H, 1/K,B) ≲ K 2 αd−1 which is much larger than √ H when the size of dataset is large. Note that our data-splitting technique is general that can be used in the online regime as well. Comparing to Xu & Liang (2022). Xu & Liang (2022) consider a different setting where pertimestep rewards are not available and only the total reward of the whole trajectory is given. Used with neural function approximation, they obtain Õ(DeffH2/ √ K) where Deff is their effective dimension. Note that Xu & Liang (2022) do not use data splitting and still achieve the same order of Deff as our result with data splitting. 
It at first might appear that our bound is inferior to their bound as we pay the cost of √ H due to data splitting. However, to obtain that bound, they make three critical assumptions: (i) the offline data trajectories are independently and identically distributed (i.i.d.) (see their Assumption 3), (ii) the offline data is uniformly explorative over all dimensions of the feature space (also see their Assumption 3), and (iii) the eigenfunctions of the induced NTK RKHS has finite spectrum (see their Assumption 4). The i.i.d. assumption under the RKHS space with finite dimensions (due to the finite spectrum assumption) and the well-explored dataset is critical in their proof to use a matrix concentration that does not incur an extra factor of √ Deff as it would normally do without these assumptions (see Section E, the proof of their Lemma 2). Note that the celebrated ReLU NTK does not satisfy the finite spectrum assumption (Bietti & Mairal, 2019). Moreover, we do not make any of these three assumptions above for our bound to hold. That suggests that our bound is much more general. In addition, we do not need to compute any confidence regions nor perform the inverse of a large covariance matrix. Comparing to Yin et al. (2023). During the submission of our work, a concurrent work of Yin et al. (2023) appeared online. Yin et al. (2023) study provably efficient offline RL with a general parametric function approximation that unifies the guarantees of offline RL in linear and generalized linear MDPs, and beyond with potential applications to other classes of functions in practice. We remark that the result in Yin et al. (2023) is orthogonal/complementary to our paper since they consider the parametric class with third-time differentiability which cannot apply to neural networks (not necessarily overparameterized) with non-smooth activation such as ReLU. In addition, they do not consider reward perturbing in their algorithmic design or optimization errors in their analysis. B.2 WORSE-CASE RATE OF EFFECTIVE DIMENSION In the main paper, we prove an Õ ( κH5/2d̃√ K ) sub-optimality bound which depends on the notion of effective dimension defined in Definition 2. Here we give a worst-case rate of the effective dimension d̃ for the two-layer ReLU NTK. We first briefly review the background of RKHS. LetH be an RKHS defined on X ⊆ Rd with kernel function ρ : X ×X → R. Let ⟨·, ·⟩H : H×H → R and ∥ · ∥H : H → R be the inner product and the RKSH norm on H. By the reproducing kernel property of H, there exists a feature mapping ϕ : X → H such that f(x) = ⟨f, ϕ(x)⟩H and ρ(x, x′) = ⟨ϕ(x), ϕ(x′)⟩H. We assume that the kernel function ρ is uniformly bounded, i.e. supx∈X ρ(x, x) <∞. Let L2(X ) be the space of square-integral functions on X with respect to the Lebesgue measure and let ⟨·, ·⟩L2 be the inner product on L2(X ). The kernel function ρ induces an integral operator Tρ : L2(X )→ L2(X ) defined as Tρf(x) = ∫ X ρ(x, x′)f(x′)dx′. By Mercer’s theorem (Steinwart & Christmann, 2008), Tρ has countable and positive eigenvalues {λi}i≥1 and eigenfunctions {νi}i≥1. The kernel function andH can be expressed as ρ(x, x′) = ∞∑ i=1 λiνi(x)νi(x ′), H = {f ∈ L2(X ) : ∞∑ i=1 ⟨f, νi⟩L2 λi <∞}. Now consider the NTK defined in Definition 1: Kntk(x, x ′) = Ew∼N (0,Id/d)⟨xσ ′(wTx), x′σ′(wTx′)⟩. It follows from (Bietti & Mairal, 2019, Proposition 1) that λi ≍ i−d. Thus, by (Srinivas et al., 2010, Theorem 5), the data-dependent effective dimension ofHntk can be bounded in the worst case by d̃ ≲ K ′(d+1)/(2d). 
We remark that this is the worst-case bound that considers uniformly over all possible realizable of training data. The effective dimension d̃ is on the other hand data-dependent, i.e. its value depends on the specific training data at hand thus d̃ can be actually much smaller than the worst-case rate. C PROOF OF THEOREM 1 AND THEOREM 2 In this section, we provide both the outline and detailed proofs of Theorem 1 and Theorem 2. C.1 TECHNICAL REVIEW AND PROOF OVERVIEW Technical Review. In what follows, we provide more detailed discussion when placing our technical contribution in the context of the related literature. Our technical result starts with the value difference lemma in Jin et al. (2021) to connect bounding the suboptimality of an offline algorithm to controlling the uncertainty quantification in the value estimates. Thus, our key technical contribution is to provably quantify the uncertainty of the perturbed value function estimates which were obtained via reward perturbing and gradient descent. This problem setting is largely different from the current analysis of overparameterized neural networks for supervised learning which does not require uncertainty quantification. Our work is not the first to consider uncertainty quantification with overparameterized neural networks, since it has been studied in Zhou et al. (2020); Nguyen-Tang et al. (2022a); Jia et al. (2022). However, there are significant technical differences between our work and these works. The work in Zhou et al. (2020); Nguyen-Tang et al. (2022a) considers contextual bandits with overparameterized neural networks trained by (S)GD and quantifies the uncertainty of the value function with explicit empirical covariance matrices. We consider general MDP and use reward perturbing to implicitly obtain uncertainty, thus requiring different proof techniques. Jia et al. (2022) is more related to our work since they consider reward perturbing with overparameterized neural networks (but they consider contextual bandits). However, our reward perturbing strategy is largely different from that in Jia et al. (2022). Specifically, Jia et al. (2022) perturbs each reward only once while we perturb each reward multiple times, where the number of perturbing times is crucial in our work and needs to be controlled carefully. We show in Theorem 1 that our reward perturbing strategy is effective in enforcing sufficient pessimism for offline learning in general MDP and the empirical results in Figure 2, Figure 3, Figure 5, and Table 2 are strongly consistent with our theoretical suggestion. Thus, our technical proofs are largely different from those of Jia et al. (2022). Finally, the idea of perturbing rewards multiple times in our algorithm is inspired by Ishfaq et al. (2021). However, Ishfaq et al. (2021) consider reward perturbing for obtaining optimism in online RL. While perturbing rewards are intuitive to obtain optimism for online RL, for offline RL, under distributional shift, it can be paradoxically difficult to properly obtain pessimism with randomization and ensemble (Ghasemipour et al., 2022), especially with neural function approximation. We show affirmatively in our work that simply taking the minimum of the randomized value functions after perturbing rewards multiple times is sufficient to obtain provable pessimism for offline RL. In addition, Ishfaq et al. (2021) do not consider neural network function approximation and optimization. 
Controlling the uncertainty of randomization (via reward perturbing) under neural networks with extra optimization errors induced by gradient descent sets our technical proof significantly apart from that of Ishfaq et al. (2021). Besides all these differences, in this work, we propose an intricately-designed data splitting technique that avoids the uniform convergence argument and could be of independent interest for studying sample-efficient RL with complex function approximation. Proof Overview. The key steps for proving Theorem 1 and Theorem 2 are highlighted in Subsection C.2 and Subsection C.3, respectively. Here, we discuss an overview of our proof strategy. The key technical challenge in our proof is to quantify the uncertainty of the perturbed value function estimates. To deal with this, we carefully control both the near-linearity of neural networks in the NTK regime and the estimation error induced by reward perturbing. A key result that we use to control the linear approximation to the value function estimates is Lemma D.3. The technical challenge in establishing Lemma D.3 is how to carefully control and propagate the optimization error incurred by gradient descent. The complete proof of Lemma D.3 is provided in Section E.3. The implicit uncertainty quantifier induced by the reward perturbing is established in Lemma D.1 and Lemma D.2, where we carefully design a series of intricate auxiliary loss functions and establish the anti-concentrability of the perturbed value function estimates. This requires a careful design of the variance of the noises injected into the rewards. To deal with removing a potentially large covering number when we quantify the implicit uncertainty, we propose our data splitting technique which is validated in the proof of Lemma D.1 in Section E.1. Moreover, establishing Lemma D.1 in the overparameterization regime induces an additional challenge since a standard analysis would result in a vacuous bound that scales with the overparameterization. We avoid this issue by carefully incorporating the use of the effective dimension in Lemma D.1. C.2 PROOF OF THEOREM 1 In this subsection, we present the proof of Theorem 1. We first decompose the suboptimality SubOpt(π̃; s) and present the main lemmas to bound the evaluation error and the summation of the implicit confidence terms, respectively. The detailed proof of these lemmas are deferred to Section D. For proof convenience, we first provide the key parameters that we use consistently throughout our proofs in Table 3. We define the model evaluation error at any (x, h) ∈ X × [H] as errh(x) = (BhṼh+1 − Q̃h)(x), (3) where Bh is the Bellman operator defined in Section 3, and Ṽh and Q̃h are the estimated (action-) state value functions returned by Algorithm 1. Using the standard suboptimality decomposition (Jin et al., 2021, Lemma 3.1), for any s1 ∈ S, SubOpt(π̃; s1) = − H∑ h=1 Eπ̃ [errh(sh, ah)] + H∑ h=1 Eπ∗ [errh(sh, ah)] + H∑ h=1 Eπ∗ [ ⟨Q̃h(sh, ·), π∗h(·|sh)− π̃h(·|sh)⟩A ] ︸ ︷︷ ︸ ≤0 , where the third term is non-positive as π̃h is greedy with respect to Q̃h. Thus, for any s1 ∈ S, we have SubOpt(π̃; s1) ≤ − H∑ h=1 Eπ̃ [errh(sh, ah)] + H∑ h=1 Eπ∗ [errh(sh, ah)] . (4) In the following main lemma, we bound the evaluation error errh(s, a). In the rest of the proof, we consider an additional parameter R and fix any δ ∈ (0, 1). Lemma C.1. 
Let m = Ω ( d3/2R−1 log3/2( √ m/R) ) R = O ( m1/2 log−3m ) , m = Ω ( K ′10(H + ψ)2 log(3K ′H/δ) ) λ > 1 K ′C2g ≥ λR ≥ max{4B̃1, 4B̃2, 2 √ 2λ−1K ′(H + ψ + γh,1)2 + 4γ2h,2}, η ≤ (λ+K ′C2g )−1, ψ > ι, σh ≥ β,∀h ∈ [H], (5) where B̃1, B̃2, γh,1, γh,2, and ι are defined in Table 3,Cg is a absolute constant given in Lemma G.1, and R is an additional parameter. Let M = log HSAδ / log 1 1−Φ(−1) where Φ(·) is the cumulative distribution function of the standard normal distribution. With probability at least 1−MHm−2−2δ, for any (x, h) ∈ X × [H], we have −ι ≤ errh(x) ≤ σh(1 + √ 2 log(MSAH/δ)) · ∥g(x;W0)∥Λ−1h + ι where Λh := λImd + ∑ k∈Ih g(x k h;W0)g(x k h;W0) T ∈ Rmd×md. Now we can prove Theorem 1. Proof of Theorem 1. Theorem 1 can directly follow from substituting Lemma C.1 into Equation (4). We now only need to simplify the conditions in Equation (5). To satisfy Equation (5), it suffices to set λ = 1 + HK ψ = 1 > ι σh = β 8CgR 4/3m−1/6 √ logm ≤ 1 λ−1K ′H2 ≥ 2 B̃1 ≤ √ 2K ′(H + ψ + γh,1)2 + λγ2h,2 + 1 √ K ′CgR 1/3m−1/6 √ logm ≤ 1 B̃2 ≤ K ′CgR4/3m−1/6 √ logm ≤ 1. Combining with Equation 5, we have λ = 1 + HK ψ = 1 > ι σh = β η ≲ (λ+K ′)−1 m ≳ max { R8 log3m,K ′10(H + 1)2 log(3K ′H/δ), d3/2R−1 log3/2( √ m/R),K ′6R8 log3m } m ≳ [2K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) + 1]3K ′3R log3m 4 √ K ′(H + 1 + β √ log(K ′M/δ)) + 4β √ d log(dK ′M/δ) ≤ R ≲ K ′. (6) Note that with the above choice of λ = 1 + HK , we have K ′ log λ = log(1 + 1 K ′ )K ′ ≤ log 3 < 2. We further set that m ≳ B2K ′2d log(3H/δ), we have β = BK ′√ m (2 √ d+ √ 2 log(3H/ δ))λ−1/2Cg + λ 1/2B + (H + ψ) [√ d̃h log(1 + K ′ λ ) +K ′ log λ+ 2 log(3H/δ) ] ≤ 1 + λ1/2B + (H + 1) [√ d̃h log(1 + K ′ λ ) + 2 + 2 log(3H/δ) ] = o( √ K ′). Thus, 4 √ K ′(H + 1 + β √ log(K ′M/δ)) + 4β √ d log(dK ′M/δ) << K ′ for K ′ large enough. Therefore, there exists R that satisfies Equation (6). We now only need to verify ι < 1. We have ι0 = Bm −1/2(2 √ d+ √ 2 log(3H/δ)) ≤ 1/3, ι1 = CgR 4/3m−1/6 √ logm+ Cg ( B̃1 + B̃2 + λ −1(1− ηλ)J ( K ′(H + 1 + γh,1) 2 + λγ2h,2 )) ≲ 1/3 if (1− ηλ)J [ K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) ] ≲ 1. (7) Note that (1− ηλ)J ≤ e−ηλJ , K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) ≲ K ′H2λβ2d log(dK ′M/δ). Thus, Equation (7) is satisfied if J ≳ ηλ log ( K ′H2λβ2d log(dK ′M/δ) ) . Finally note that ι2 ≤ ι1. Rearranging the derived conditions here gives the complete parameter conditions in Theorem 1. Specifically, the polynomial form of m is m ≳ max{R8 log3m,K ′10(H + 1)2 log(3K ′H/δ), d3/2R−1 log3/2( √ m/R),K ′6R8 log3m, B2K ′2d log(3H/δ)}, m ≳ [2K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) + 1]3K ′3R log3m. C.3 PROOF OF THEOREM 2 In this subsection, we give a detailed proof of Theorem 2. We first present intermediate lemmas whose proofs are deferred to Section D. For any h ∈ [H] and k ∈ Ih = [(H − h)K ′ +1, . . . , (H − h+ 1)K ′], we define the filtration Fkh = σ ( {(sth′ , ath′ , rth′)} t≤k h′∈[H] ∪ {(s k+1 h′ , a k+1 h′ , r k+1 h′ )}h′≤h−1 ∪ {(s k+1 h , a k+1 h )} ) . Let Λkh := λI + ∑ t∈Ik,t≤k g(xth;W0)g(x t h;W0) T , β̃ := β(1 + 2 √ log(SAH/δ)). In the following lemma, we connect the expected sub-optimality of π̃ to the summation of the uncertainty quantifier at empirical data. Lemma C.2. Suppose that the conditions in Theorem 1 all hold. With probability at least 1 − MHm−2 − 3δ, SubOpt(π̃) ≤ 2β̃ K ′ H∑ h=1 ∑ k∈Ih Eπ∗ [ ∥g(xh;W0)∥(Λkh)−1 ∣∣∣∣Fk−1h , sk1]+ 163K ′H log(log2(K ′H)/δ) + 2 K ′ + 2ι, Lemma C.3. 
Under Assumption 5.2, for any h ∈ [H] and fixed W0, with probability at least 1− δ,∑ k∈Ih Eπ∗ [ ∥g(xh;W0)∥(Λkh)−1 ∣∣∣∣Fk−1, sk1] ≤ ∑ k∈Ih κ∥g(xh;W0)∥(Λkh)−1 + κ √ K ′ log(1/δ) λ . Lemma C.4. If λ ≥ C2g and m = Ω(K ′4 log(K ′H/δ)), then with probability at least 1− δ, for any h ∈ [H], we have ∑ k∈Ih ∥g(xh;W0)∥2(Λkh)−1 ≤ 2d̃h log(1 +K ′/λ) + 1. where d̃h is the effective dimension defined in Definition 2. Proof of Theorem 2. Theorem 2 directly follows from Lemma C.2-C.3-C.4 using the union bound. D PROOF OF LEMMA C.1 In this section, we provide the proof for Lemma C.1. We set up preparation for all the results in the rest of the paper and provide intermediate lemmas that we use to prove Lemma C.1. The detailed proofs of these intermediate lemmas are deferred to Section E. D.1 PREPARATION To prepare for the lemmas and proofs in the rest of the paper, we define the following quantities. Recall that we use abbreviation x = (s, a) ∈ X ⊂ Sd−1 and xkh = (skh, akh) ∈ X ⊂ Sd−1. For any h ∈ [H] and i ∈ [M ], we define the perturbed loss function L̃ih(W ) := 1 2 ∑ k∈Ih ( f(xkh;W )− ỹ i,k h ) )2 + λ 2 ∥W + ζih −W0∥22, (8) where ỹi,kh := r k h + Ṽh+1(s k h+1) + ξ i,k h , Ṽh+1 is computed by Algorithm 1 at Line 10 for timestep h+1, and {ξi,kh } and ζih are the Gaussian noises obtained at Line 5 of Algorithm 1. Here the subscript h and the superscript i in L̃ih(W ) emphasize the dependence on the ensemble sample i and timestep h. The gradient descent update rule of L̃ih(W ) is W̃ i,(j+1) h = W̃ i,(j) h − η∇L̃ i h(W ), (9) where W̃ i,(0)h =W0 is the initialization parameters. Note that W̃ ih = GradientDescent(λ, η, J, D̃ih, ζih,W0) = W̃ i,(J) h , where W̃ ih is returned by Line 7 of Algorithm 1. We consider a non-perturbed auxiliary loss function Lh(W ) := 1 2 ∑ k∈Ih ( f(xkh;W )− ykh) )2 + λ 2 ∥W −W0∥22, (10) where ykh := r k h + Ṽh+1(s k h+1). Note that Lh(W ) is simply a non-perturbed version of L̃ih(W ) where we drop all the noises {ξ i,k h } and {ζih}. We consider the gradient update rule for Lh(W ) as follows Ŵ (j+1) h = Ŵ (j) h − η∇Lh(W ), (11) where Ŵ (0)h =W0 is the initialization parameters. To correspond with W̃ i h, we denote Ŵh := Ŵ (J) h . (12) We also define the auxiliary loss functions for both non-perturbed and perturbed data in the linear model with feature g(·;W0) as follows L̃i,linh (W ) := 1 2 ∑ k∈Ih ( ⟨g(xkh;W0),W ⟩ − ỹ i,k h )2 + λ 2 ∥W + ζih −W0∥22, (13) Llinh (W ) := 1 2 ∑ k∈Ih ( ⟨g(xkh;W0),W ⟩ − ykh )2 + λ 2 ∥W −W0∥22. (14) We consider the auxiliary gradient updates for L̃i,linh (W ) as W̃ i,lin,(j+1) h = W̃ i,lin,(j) h − η∇L̃ i,lin h (W ), (15) Ŵ lin,(j+1) h = Ŵ lin,(j) h − η∇L̃ lin h (W ), (16) where W̃ i,lin,(0)h = Ŵ i,lin,(0) h = W0 for all i, h. Finally, we define the least-square solutions to the auxili
1. What is the focus and contribution of the paper regarding offline reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of using neural networks?
3. Do you have any concerns or minor technical questions about the implementation or sample complexity results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper studies offline RL with perturbed rewards. In particular, the Q function is parametrized as the minimum of M neural networks, each trained on one of M perturbed datasets. The benefit of the proposed algorithm, compared to LCB-based methods, is the reduced time complexity of action selection. On the technical side, the authors propose a data-splitting analysis technique to improve the dependence on the log covering number in the sample complexity result.
Strengths And Weaknesses
Strength: The proposed neural network-based offline RL algorithm is new. In Appendix B.3 the authors compare with the existing literature on sample complexity results.
Weakness: I did not spot any major errors in the paper. However, I have several minor technical questions:
Q1. In line 10 of Algorithm 1, shouldn't we take the argmax over the action space?
Q2. A related question: how is the argmax implemented in code, since we are considering a large state-action space?
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written with sufficient novelty.
ICLR
Title VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation

Abstract
We propose a novel algorithm for offline reinforcement learning called Value Iteration with Perturbed Rewards (VIPeR), which amalgamates the pessimism principle with random perturbations of the value function. Most current offline RL algorithms explicitly construct statistical confidence regions to obtain pessimism via lower confidence bounds (LCB), which cannot easily scale to complex problems where a neural network is used to estimate the value functions. Instead, VIPeR implicitly obtains pessimism by simply perturbing the offline data multiple times with carefully-designed i.i.d. Gaussian noises to learn an ensemble of estimated state-action value functions and acting greedily with respect to the minimum of the ensemble. The estimated state-action values are obtained by fitting a parametric model (e.g., neural networks) to the perturbed datasets using gradient descent. As a result, VIPeR only needs O(1) time complexity for action selection, while LCB-based algorithms require at least Ω(K), where K is the total number of trajectories in the offline data. We also propose a novel data-splitting technique that helps remove a factor involving the log of the covering number in our bound. We prove that VIPeR yields a provable uncertainty quantifier with overparameterized neural networks and enjoys a bound on sub-optimality of Õ(κHd̃/√K), where d̃ is the effective dimension, H is the horizon length, and κ measures the distributional shift. We corroborate the statistical and computational efficiency of VIPeR with an empirical evaluation on a wide set of synthetic and real-world datasets. To the best of our knowledge, VIPeR is the first algorithm for offline RL that is provably efficient for general Markov decision processes (MDPs) with neural network function approximation.

1 INTRODUCTION
Offline reinforcement learning (offline RL) (Lange et al., 2012; Levine et al., 2020) is a practical paradigm of RL for domains where active exploration is not permissible. Instead, the learner can access a fixed dataset of previous experiences available a priori. Offline RL finds applications in several critical domains where exploration is prohibitively expensive or even implausible, including healthcare (Gottesman et al., 2019; Nie et al., 2021), recommendation systems (Strehl et al., 2010; Thomas et al., 2017), and econometrics (Kitagawa & Tetenov, 2018; Athey & Wager, 2021), among others. The recent surge of interest in this area and renewed research efforts have yielded several important empirical successes (Chen et al., 2021; Wang et al., 2023; 2022; Meng et al., 2021). A key challenge in offline RL is to efficiently exploit the given offline dataset to learn an optimal policy in the absence of any further exploration.
The dominant approaches to offline RL address this challenge by incorporating uncertainty from the offline dataset into decision-making (Buckman et al., 2021; Jin et al., 2021; Xiao et al., 2021; Nguyen-Tang et al., 2022a; Ghasemipour et al., 2022; An et al., 2021; Bai et al., 2022). The main component of these uncertainty-aware approaches to offline RL is the pessimism principle, which constrains the learned policy to the offline data and leads to various lower confidence bound (LCB)-based algorithms. However, these methods are not easily extended or scaled to complex problems where neural function approximation is used to estimate the value functions. In particular, it is costly to explicitly compute the statistical confidence regions of the model or value functions if the class of function approximator is given by overparameterized neural networks. For example, constructing the LCB for neural offline contextual bandits (NguyenTang et al., 2022a) and RL (Xu & Liang, 2022) requires computing the inverse of a large covariance matrix whose size scales with the number of parameters in the neural network. This computational cost hinders the practical application of these provably efficient offline RL algorithms. Therefore, a largely open question is how to design provably computationally efficient algorithms for offline RL with neural network function approximation. In this work, we present a solution based on a computational approach that combines the pessimism principle with randomizing the value function (Osband et al., 2016; Ishfaq et al., 2021). The algorithm is strikingly simple: we randomly perturb the offline rewards several times and act greedily with respect to the minimum of the estimated state-action values. The intuition is that taking the minimum from an ensemble of randomized state-action values can efficiently achieve pessimism with high probability while avoiding explicit computation of statistical confidence regions. We learn the state-action value function by training a neural network using gradient descent (GD). Further, we consider a novel data-splitting technique that helps remove the dependence on the potentially large log covering number in the learning bound. We show that the proposed algorithm yields a provable uncertainty quantifier with overparameterized neural network function approximation and achieves a sub-optimality bound of Õ(κH5/2d̃/ √ K), where K is the total number of episodes in the offline data, d̃ is the effective dimension, H is the horizon length, and κ measures the distributional shift. We achieve computational efficiency since the proposed algorithm only needsO(1) time complexity for action selection, while LCB-based algorithms require O(K2) time complexity. We empirically corroborate the statistical and computational efficiency of our proposed algorithm on a wide set of synthetic and real-world datasets. The experimental results show that the proposed algorithm has a strong advantage in computational efficiency while outperforming LCB-based neural algorithms. To the best of our knowledge, ours is the first offline RL algorithm that is both provably and computationally efficient in general MDPs with neural network function approximation. 2 RELATED WORK Randomized value functions for RL. For online RL, Osband et al. (2016; 2019) were the first to explore randomization of estimates of the value function for exploration. 
Their approach was inspired by posterior sampling for RL (Osband et al., 2013), which samples a value function from a posterior distribution and acts greedily with respect to the sampled function. Concretely, Osband et al. (2016; 2019) generate randomized value functions by injecting Gaussian noise into the training data and fitting a model on the perturbed data. Jia et al. (2022) extended the idea of perturbing rewards to online contextual bandits with neural function approximation. Ishfaq et al. (2021) obtained a provably efficient method for online RL with general function approximation using the perturbed rewards. While randomizing the value function is an intuitive approach to obtaining optimism in online RL, obtaining pessimism from the randomized value functions can be tricky in offline RL. Indeed, Ghasemipour et al. (2022) point out a critical flaw in several popular existing methods for offline RL that update an ensemble of randomized Q-networks toward a shared pessimistic temporal difference target. In this paper, we propose a simple fix to obtain pessimism properly by updating each randomized value function independently and taking the minimum over an ensemble of randomized value functions to form a pessimistic value function. Offline RL with function approximation. Provably efficient offline RL has been studied extensively for linear function approximation. Jin et al. (2021) were the first to show that pessimistic value iteration is provably efficient for offline linear MDPs. Xiong et al. (2023); Yin et al. (2022) improved upon Jin et al. (2021) by leveraging variance reduction. Xie et al. (2021) proposed a Bellman-consistency assumption with general function approximation, which improves the bound of Jin et al. (2021) by a factor of √ d when realized to finite action space and linear MDPs. Wang et al. (2021); Zanette (2021) studied the statistical hardness of offline RL with linear function approximation via exponential lower bound, and Foster et al. (2021) suggested that only realizability and strong uniform data coverage are not sufficient for sample-efficient offline RL. Beyond linearity, some works study offline RL for general function approximation, both parametric and nonparametric. These approaches are either based on Fitted-Q Iteration (FQI) (Munos & Szepesvári, 2008; Le et al., 2019; Chen & Jiang, 2019; Duan et al., 2021a;b; Hu et al., 2021; Nguyen-Tang et al., 2022b) or the pessimism principle (Uehara & Sun, 2022; Nguyen-Tang et al., 2022a; Jin et al., 2021). While pessimism-based algorithms avoid the strong assumptions of data coverage used by FQI-based algorithms, they require an explicit computation of valid confidence regions and possibly the inverse of a large covariance matrix which is computationally prohibitive and does not scale to complex function approximation setting. This limits the applicability of pessimism-based, provably efficient offline RL to practical settings. A very recent work Bai et al. (2022) estimates the uncertainty for constructing LCB via the disagreement of bootstrapped Q-functions. However, the uncertainty quantifier is only guaranteed in linear MDPs and must be computed explicitly. We provide a more detailed discussion of our technical contribution in the context of existing literature in Section C.1. 3 PRELIMINARIES In this section, we provide basic background on offline RL and overparameterized neural networks. 
3.1 EPISODIC TIME-INHOMOGENOUS MARKOV DECISION PROCESSES (MDPS) A finite-horizon Markov decision process (MDP) is denoted as the tupleM = (S,A,P, r,H, d1), where S is an arbitrary state space, A an arbitrary action space, H the episode length, and d1 the initial state distribution. We assume that SA := |S||A| is finite but arbitrarily large, e.g., it can be as large as the total number of atoms in the observable universe ≈ 1082. Let P(S) denote the set of probability measures over S. A time-inhomogeneous transition kernel P = {Ph}Hh=1, where Ph : S × A → P(S) maps each state-action pair (sh, ah) to a probability distribution Ph(·|sh, ah). Let r = {rh}Hh=1 where rh : S × A → [0, 1] is the mean reward function at step h. A policy π = {πh}Hh=1 assigns each state sh ∈ S to a probability distribution, πh(·|sh), over the action space and induces a random trajectory s1, a1, r1, . . . , sH , aH , rH , sH+1 where s1 ∼ d1, ah ∼ πh(·|sh), sh+1 ∼ Ph(·|sh, ah). We define the state value function V πh ∈ RS and the actionstate value function Qπh ∈ RS×A at each timestep h as Qπh(s, a) = Eπ[ ∑H t=h rt|sh = s, ah = a], and V πh (s) = Ea∼π(·|s) [Qπh(s, a)], where the expectation Eπ is taken with respect to the randomness of the trajectory induced by π. Let Ph denote the transition operator defined as (PhV )(s, a) := Es′∼Ph(·|s,a)[V (s′)]. For any V : S → R, we define the Bellman operator at timestep h as (BhV )(s, a) := rh(s, a) + (PhV )(s, a). The Bellman equations are given as follows. For any (s, a, h) ∈ S ×A× [H], Qπh(s, a) = (BhV πh+1)(s, a), V πh (s) = ⟨Qπh(s, ·), πh(·|s)⟩A, V πH+1(s) = 0, where [H] := {1, 2, . . . ,H}, and ⟨·, ·⟩A denotes the summation over all a ∈ A. We define an optimal policy π∗ as any policy that yields the optimal value function, i.e. V π ∗ h (s) = supπ V π h (s) for any (s, h) ∈ S × [H]. For simplicity, we denote V π∗h and Qπ ∗ h as V ∗ h and Q ∗ h, respectively. The Bellman optimality equation can be written as Q∗h(s, a) = (BhV ∗h+1)(s, a), V ∗h (s) = max a∈A Q∗h(s, a), V ∗ H+1(s) = 0. Define the occupancy density as dπh(s, a) := P((sh, ah) = (s, a)|π) which is the probability that we visit state s and take action a at timestep h if we follow the policy π. We denote dπ ∗ h by d ∗ h. Offline regime. In the offline regime, the learner has access to a fixed dataset D = {(sth, ath, rth, sth+1)} t∈[K] h∈[H] generated a priori by some unknown behaviour policy µ = {µh}h∈[H]. Here, K is the total number of trajectories, and ath ∼ µh(·|sth), sth+1 ∼ Ph(·|sth, ath) for any (t, h) ∈ [K] × [H]. Note that we allow the trajectory at any time t ∈ [K] to depend on the trajectories at previous times. The goal of offline RL is to learn a policy π̂, based on (historical data) D, such that π̂ achieves small sub-optimality, which we define as SubOpt(π̂) := Es1∼d1 [SubOpt(π̂; s1)] , where SubOpt(π̂; s1) := V π ∗ 1 (s1)− V π̂1 (s1). Algorithm 1 Value Iteration with Perturbed Rewards (VIPeR) 1: Input: Offline data D = {(skh, akh, rkh)} k∈[K] h∈[H], a parametric function family F = {f(·, ·;W ) : W ∈ W} ⊂ {X → R} (e.g. neural networks), perturbed variances {σh}h∈[H], number of bootstraps M , regularization parameter λ, step size η, number of gradient descent steps J , and cutoff margin ψ, split indices {Ih}h∈[H] where Ih := [(H − h)K ′ + 1, . . . , (H − h+ 1)K ′] 2: Initialize ṼH+1(·)← 0 and initialize f(·, ·;W ) with initial parameter W0 3: for h = H, . . . , 1 do 4: for i = 1, . . . 
,M do 5: Sample {ξk,ih }k∈Ih ∼ N (0, σ2h) and ζih = {ζ j,i h }j∈[d] ∼ N (0, σ2hId) 6: Perturb the dataset D̃ih ← {skh, akh, rkh + Ṽh+1(skh+1) + ξ k,i h }k∈Ih ▷ Perturbation 7: Let W̃ ih ← GradientDescent(λ, η, J, D̃ih, ζih,W0) (Algorithm 2) ▷ Optimization 8: end for 9: Compute Q̃h(·, ·)← min{mini∈[M ]f(·, ·; W̃ ih), (H − h+ 1)(1 + ψ)}+ ▷ Pessimism 10: π̃h ← argmaxπh⟨Q̃h, πh⟩ and Ṽh ← ⟨Q̃h, π̃h⟩ ▷ Greedy 11: end for 12: Output: π̃ = {π̃h}h∈[H]. Notation. For simplicity, we write xth = (sth, ath) and x = (s, a). We write Õ(·) to hide logarithmic factors of the problem parameters (d,H,K,m, 1/δ) in the standard Big-Oh notation. We use Ω(·) as the standard Omega notation. We write u ≲ v if u = O(v) and write u ≳ v if v ≲ u. We write A ⪯ B iff B −A is a positive definite matrix. Id denotes the d× d identity matrix. 3.2 OVERPARAMETERIZED NEURAL NETWORKS In this paper, we consider neural function approximation setting where the state-action value function is approximated by a two-layer neural network. For simplicity, we denoteX := S×A and view it as a subset of Rd. Without loss of generality, we assume X ⊂ Sd−1 := {x ∈ Rd : ∥x∥2 = 1}. We consider a standard two-layer neural network: f(x;W, b) = 1√ m ∑m i=1 biσ(w T i x), where m is an even number, σ(·) = max{·, 0} is the ReLU activation function (Arora et al., 2018), and W = (wT1 , . . . , w T m) T ∈ Rmd. During the training, we initialize (W, b) via the symmetric initialization scheme (Gao et al., 2019) as follows: For any i ≤ m2 , wi = wm2 +i ∼ N (0, Id/d), and bm 2 +i = −bi ∼ Unif({−1, 1}).1 During the training, we optimize over W while the bi are kept fixed, thus we write f(x;W, b) as f(x;W ). Denote g(x;W ) = ∇W f(x;W ) ∈ Rmd, and let W0 be the initial parameters of W . We assume that the neural network is overparameterized, i.e, the width m is sufficiently larger than the number of samples K. Overparameterization has been shown to be effective in studying the convergence and the interpolation behaviour of neural networks (Arora et al., 2019; Allen-Zhu et al., 2019; Hanin & Nica, 2020; Cao & Gu, 2019; Belkin, 2021). Under such an overparameterization regime, the dynamics of the training of the neural network can be captured using the framework of the neural tangent kernel (NTK) (Jacot et al., 2018). 4 ALGORITHM In this section, we present the proposed algorithm called Value Iteration with Perturbed Rewards, or VIPeR; see Algorithm 1 for the pseudocode. The key idea underlying VIPeR is to train a parametric model (e.g., a neural network) on a perturbed-reward dataset several times and act pessimistically by picking the minimum over an ensemble of estimated state-action value functions. In particular, at each timestep h ∈ [H], we drawM independent samples of zero-mean Gaussian noise with variance σh. We use these samples to perturb the sum of the observed rewards, rkh, and the estimated value function with a one-step lookahead, i.e., Ṽh+1(skh+1) (see Line 6 of Algorithm 1). The weights W̃ i h are then updated by minimizing the perturbed regularized squared loss on {D̃ih}i∈[M ] using gradient descent (Line 7). We pick the value function pessimistically by selecting the minimum over the finite ensemble. The chosen value function is truncated at (H − h+ 1)(1 + ψ) (see Line 9), where 1This symmetric initialization scheme makes f(x;W0) = 0 and ⟨g(x;W0),W0⟩ = 0 for any x. ψ ≥ 0 is a small cutoff margin (more on this when we discuss the theoretical analysis). 
The returned policy is greedy with respect to the truncated pessimistic value function (see Line 10). Algorithm 2 GradientDescent(λ, η, J, D̃ih, ζih,W0) 1: Input: Regularization parameter λ, step size η, number of gradient descent steps J , perturbed dataset D̃ih = {skh, akh, rkh + Ṽh+1(s k h+1) + ξ t,i h }k∈Ih , regularization per- turber ζih, initial parameter W0 2: L(W ) := 12 ∑ k∈Ih(f(s k h, a k h;W ) − (rkh + Ṽh+1(s k h+1) + ξ k,i h )) 2 + λ2 ∥W + ζ i h −W0∥22 3: for j = 0, . . . , J − 1 do 4: Wj+1 ←Wj − η∇L(Wj) 5: end for 6: Output: WJ . It is important to note that we split the trajectory indices [K] evenly into H disjoint buckets [K] = ∪h∈[H]Ih, where Ih = [(H − h)K ′ + 1, . . . , (H − h + 1)K ′] for K ′ := ⌊K/H⌋2, as illustrated in Figure 1. The estimated Q̃h is thus obtained only from the offline data with (trajectory) indices from Ih along with Ṽh+1. This novel design removes the data dependence structure in offline RL with function approximation (Nguyen-Tang et al., 2022b) and avoids a factor involving the log of the covering number in the bound on the sub-optimality of Algorithm 1, as we show in Section D.1. To deal with the non-linearity of the underlying MDP, we use a two-layer fully connected neural network as the parametric function family F in Algorithm 1. In other words, we approximate the state-action values: f(x;W ) = 1√ m ∑m i=1 biσ(w T i x), as described in Section 3.2. We use two-layer neural networks to simplify the computational analysis. We utilize gradient descent to train the state-action value functions {f(·, ·; W̃ ih)}i∈[M ], on perturbed rewards. The use of gradient descent is for the convenience of computational analysis, and our results can be extended to stochastic gradient descent by leveraging recent advances in the theory of deep learning (Allen-Zhu et al., 2019; Cao & Gu, 2019), albeit with a more involved analysis. Existing offline RL algorithms utilize estimates of statistical confidence regions to achieve pessimism in the offline setting. Explicitly constructing these confidence bounds is computationally expensive in complex problems where a neural network is used for function approximation. For example, the lower-confidence-bound-based algorithms in neural offline contextual bandits (NguyenTang et al., 2022a) and RL (Xu & Liang, 2022) require computing the inverse of a large covariance matrix with the size scaling with the number of network parameters. This is computationally prohibitive in most practical settings. Algorithm 1 (VIPeR) avoids such expensive computations while still obtaining provable pessimism and guaranteeing a rate of Õ( 1√ K ) on the sub-optimality, as we show in the next section. 5 SUB-OPTIMALITY ANALYSIS Next, we provide a theoretical guarantee on the sub-optimality of VIPeR for the function approximation class, F , represented by (overparameterized) neural networks. Our analysis builds on the recent advances in generalization and optimization of deep neural networks (Arora et al., 2019; Allen-Zhu et al., 2019; Hanin & Nica, 2020; Cao & Gu, 2019; Belkin, 2021) that leverage the observation that the dynamics of the neural parameters learned by (stochastic) gradient descent can be captured by the corresponding neural tangent kernel (NTK) space (Jacot et al., 2018) when the network is overparameterized. Next, we recall some definitions and state our key assumptions, formally. Definition 1 (NTK (Jacot et al., 2018)). 
The NTK kernel Kntk : X × X → R is defined as Kntk(x, x ′) = Ew∼N (0,Id/d)⟨xσ ′(wTx), x′σ′(wTx′)⟩, where σ′(u) = 1{u ≥ 0}. 2Without loss of generality, we assume K/H ∈ N. Let Hntk denote the reproducing kernel Hilbert space (RKHS) induced by the NTK, Kntk. SinceKntk is a universal kernel (Ji et al., 2020), we have that Hntk is dense in the space of continuous functions on (a compact set) X = S ×A (Rahimi & Recht, 2008). Definition 2 (Effective dimension). For any h ∈ [H], the effective dimension of the NTK matrix on data {xkh}k∈Ih is defined as d̃h := logdet(IK′ +Kh/λ) log(1 +K ′/λ) , where Kh := [Kntk(xih, x j h)]i,j∈Ih is the Gram matrix of Kntk on the data {xkh}k∈Ih . We further define d̃ := maxh∈[H] d̃h. Remark 1. Intuitively, the effective dimension d̃h measures the number of principal dimensions over which the projection of the data {xkh}k∈Ih in the RKHSHntk is spread. It was first introduced by Valko et al. (2013) for kernelized contextual bandits and was subsequently adopted by Yang & Wang (2020) and Zhou et al. (2020) for kernelized RL and neural contextual bandits, respectively. The effective dimension is data-dependent and can be bounded by d̃ ≲ K ′(d+1)/(2d) in the worst case (see Section B for more details).3 Definition 3 (RKHS of the infinite-width NTK). Define Q∗ := {f(x) = ∫ Rd c(w) Txσ′(wTx)dw : supw ∥c(w)∥2 p0(w) < B}, where c : Rd → Rd is any function, p0 is the probability density function of N (0, Id/d), and B is some positive constant. We make the following assumption about the regularity of the underlying MDP under function approximation. Assumption 5.1 (Completeness). For any V : S → [0, H + 1] and any h ∈ [H], BhV ∈ Q∗.4 Assumption 5.1 ensures that the Bellman operator Bh can be captured by an infinite-width neural network. This assumption is mild as Q∗ is a dense subset of Hntk (Gao et al., 2019, Lemma C.1) when B = ∞, thus Q∗ is an expressive function class when B is sufficiently large. Moreover, similar assumptions have been used in many prior works on provably efficient RL with function approximation (Cai et al., 2019; Wang et al., 2020; Yang et al., 2020; Nguyen-Tang et al., 2022b). Next, we present a bound on the suboptimality of the policy π̃ returned by Algorithm 1. Recall that we use the initialization scheme described in Section 3.2. Fix any δ ∈ (0, 1). Theorem 1. Let σh = σ := 1 + λ 1 2B + (H + 1) [ d̃ log(1 +K ′/λ) + 2 + 2 log(3H/δ) ] 1 2 . Let m = poly(K ′, H, d,B, d̃, λ, δ) be some high-order polynomial of the problem parameters, λ = 1 + HK , η ≲ (λ +K ′)−1, J ≳ K ′ log(K ′(H √ d̃ + B)), ψ = 1, and M = log HSAδ / log 1 1−Φ(−1) , where Φ(·) is the cumulative distribution function of the standard normal distribution. Then, under Assumption 5.1, with probability at least 1−MHm−2 − 2δ, for any s1 ∈ S, we have that SubOpt(π̃; s1) ≤ σ(1 + √ 2 log(MSAH/δ)) · Eπ∗ [ H∑ h=1 ∥g(sh, ah;W0)∥Λ−1h ] + Õ( 1 K ′ ) where Λh := λImd + ∑ k∈Ih g(s k h, a k h;W0)g(s k h, a k h;W0) T ∈ Rmd×md. Remark 2. Theorem 1 shows that the randomized design in our proposed algorithm yields a provable uncertainty quantifier even though we do not explicitly maintain any confidence regions in the algorithm. The implicit pessimism via perturbed rewards introduces an extra factor of 1 + √ 2 log(MSAH/δ) into the confidence parameter β. We build upon Theorem 1 to obtain an explicit bound using the following data coverage assumption. Assumption 5.2 (Optimal-Policy Concentrability). ∃κ <∞, sup(h,sh,ah) d∗h(sh,ah) dµh(sh,ah) ≤ κ. 
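To make Definitions 1 and 2 concrete, the sketch below estimates the NTK Gram matrix by Monte Carlo over w ∼ N(0, I_d/d) and then evaluates the effective dimension d̃ = logdet(I_{K'} + K/λ)/log(1 + K'/λ) on synthetic unit-norm inputs. This is an illustrative computation only; the Monte Carlo sample size, the synthetic data, and λ = 1 are our choices.

```python
import numpy as np

def ntk_gram(X, n_mc=10_000, rng=None):
    """Monte Carlo estimate of the NTK of Definition 1:
    K(x, x') = E_{w ~ N(0, I_d/d)} <x 1{w^T x >= 0}, x' 1{w^T x' >= 0}>
             = (x^T x') * P(w^T x >= 0 and w^T x' >= 0)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = X.shape
    W = rng.normal(0.0, np.sqrt(1.0 / d), size=(n_mc, d))   # rows ~ N(0, I_d/d)
    A = (W @ X.T >= 0).astype(float)                        # ReLU-derivative indicators, (n_mc, n)
    co_active = A.T @ A / n_mc                              # empirical P(both indicators are 1)
    return (X @ X.T) * co_active

def effective_dimension(K, lam=1.0):
    """Definition 2: d_tilde = logdet(I_{K'} + K/lam) / log(1 + K'/lam)."""
    k_prime = K.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(k_prime) + K / lam)
    return logdet / np.log(1.0 + k_prime / lam)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 16))
X /= np.linalg.norm(X, axis=1, keepdims=True)               # data on S^{d-1}
K = ntk_gram(X, rng=rng)
print(effective_dimension(K, lam=1.0))
```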
3Note that this is the worst-case bound, and the effective dimension can be significantly smaller in practice. 4We consider V : S → [0, H + 1] instead of V : S → [0, H] due to the cutoff margin ψ in Algorithm 1. Assumption 5.2 requires any positive-probability trajectory induced by the optimal policy to be covered by the behavior policy. This data coverage assumption is significantly milder than the uniform coverage assumptions in many FQI-based offline RL algorithms (Munos & Szepesvári, 2008; Chen & Jiang, 2019; Nguyen-Tang et al., 2022b) and is common in pessimism-based algorithms (Rashidinejad et al., 2021; Nguyen-Tang et al., 2022a; Chen & Jiang, 2022; Zhan et al., 2022). Theorem 2. For the same parameter settings and the same assumption as in Theorem 1, we have that with probability at least 1−MHm−2 − 5δ, SubOpt(π̃) ≤ 2σ̃κH√ K ′ √2d̃ log(1 +K ′/λ) + 1 + √ log Hδ λ + 16H 3K ′ log log2(K ′H) δ + Õ( 1 K ′ ), where σ̃ := σ(1 + √ 2 log(SAH/δ)). Remark 3. Theorem 2 shows that with appropriate parameter choice, VIPeR achieves a suboptimality of Õ ( κH3/2 √ d̃·max{B,H √ d̃}√ K ) . Compared to Yang et al. (2020), we improve by a factor of K 2 dγ−1 for some γ ∈ (0, 1) at the expense of √ H . When realized to a linear MDP in Rdlin , d̃ = dlin and our bound reduces into Õ ( κH5/2dlin√ K ) which improves the bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) by a factor of √ dlin. We provide the result summary and comparison in Table 1 and give a more detailed discussion in Subsection B.1. 6 EXPERIMENTS In this section, we empirically evaluate the proposed algorithm VIPeR against several state-of-the-art baselines, including (a) PEVI (Jin et al., 2021), which explicitly constructs lower confidence bound (LCB) for pessimism in a linear model (thus, we rename this algorithm as LinLCB for convenience in our experiments); (b) NeuraLCB (Nguyen-Tang et al., 2022a) which explicitly constructs an LCB using neural network gradients; (c) NeuraLCB (Diag), which is NeuraLCB with a diagonal approximation for estimating the confidence set as suggested in NeuraLCB (Nguyen-Tang et al., 2022a); (d) Lin-VIPeR which is VIPeR realized to the linear function approximation instead of neural network function approximation; (e) NeuralGreedy (LinGreedy, respectively) which uses neural networks (linear models, respectively) to fit the offline data and act greedily with respect to the estimated state-action value functions without any pessimism. Note that when the parametric class, F , in Algorithm 1 is that of neural networks, we refer to VIPeR as Neural-VIPeR. We do not utilize data splitting in the experiments. We provide further algorithmic details of the baselines in Section H. We evaluate all algorithms in two problem settings: (1) the underlying MDP is a linear MDP whose reward functions and transition kernels are linear in some known feature map (Jin et al., 2020), and (2) the underlying MDP is non-linear with horizon length H = 1 (i.e., non-linear contextual bandits) (Zhou et al., 2020), where the reward function is either synthetic or constructed from MNIST dataset (LeCun et al., 1998). We also evaluate (a variant of) our algorithm and show its strong performance advantage in the D4RL benchmark (Fu et al., 2020) in Section A.3. 
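Since Lin-VIPeR is referenced throughout the experiments, it is perhaps the easiest instantiation of Algorithm 1 to sketch. The code below is a simplified, illustrative version for integer-coded states, a small finite action set, and a user-supplied feature map phi(s, a): it perturbs the regression targets M times, solves each perturbed ridge problem in closed form (standing in for the gradient descent of Algorithm 2, with W0 = 0), takes the pessimistic minimum over the ensemble, and acts greedily. Data splitting and the exact truncation level (H − h + 1)(1 + ψ) are omitted for brevity; all names and default constants are ours.

```python
import numpy as np

def lin_viper(data, phi, S, A, H, M=10, sigma=0.1, lam=0.01, rng=None):
    """Backward value iteration with perturbed rewards using a linear model (Lin-VIPeR sketch).
    `data[h]` is a list of (s, a, r, s_next) tuples with integer-coded states;
    `phi(s, a)` returns a feature vector in R^d."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d = phi(S[0], A[0]).shape[0]
    V_next = np.zeros(len(S))                                   # \tilde V_{H+1} = 0
    policy = {}
    for h in reversed(range(H)):                                # h = H-1, ..., 0 (0-indexed)
        Phi = np.stack([phi(s, a) for (s, a, r, s2) in data[h]])         # (K', d)
        y = np.array([r + V_next[s2] for (s, a, r, s2) in data[h]])      # r + \tilde V_{h+1}(s')
        Q_ens = []
        for _ in range(M):
            xi = rng.normal(0.0, sigma, size=len(y))            # reward perturbation
            zeta = rng.normal(0.0, sigma, size=d)               # regularizer perturbation
            # closed-form minimizer of 1/2 ||Phi W - (y + xi)||^2 + lam/2 ||W + zeta||^2
            W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d),
                                Phi.T @ (y + xi) - lam * zeta)
            Q_ens.append(np.array([[phi(s, a) @ W for a in A] for s in S]))
        Q = np.clip(np.min(Q_ens, axis=0), 0.0, H - h)          # pessimistic min, then truncate
        policy[h] = Q.argmax(axis=1)                            # greedy policy \tilde pi_h
        V_next = Q.max(axis=1)                                  # \tilde V_h
    return policy
```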
We implemented all algorithms in Pytorch (Paszke et al., 2019) on a server with Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz, 755G RAM, and one NVIDIA Tesla V100 Volta GPU Accelerator 32GB Graphics Card.5 6.1 LINEAR MDPS We first test the effectiveness of pessimism implicit in VIPeR (Algorithm 1). To that end, we construct a hard instance of linear MDPs (Yin et al., 2022; Min et al., 2021); due to page limitation, we defer the details of our construction to Section A.1. We test for different values of H ∈ {20, 30, 50, 80} and report the sub-optimality of LinLCB, Lin-VIPeR, and LinGreedy, averaged over 30 runs, in Figure 2. We find that LinGreedy, which is uncertainty-agnostic, fails to learn from offline data and has poor performance in terms of sub-optimality when compared to pessimism-based algorithms LinLCB and Lin-VIPeR. Further, LinLCB outperforms Lin-VIPeR when K is smaller than 400, but the performance of the two algorithms matches for larger sample sizes. Unlike LinLCB, Lin-VIPeR does not construct any confidence regions or require computing and inverting large (covariance) matrices. The Y-axis is in log scale; thus, Lin-VIPeR already has small sub-optimality in the first K ≈ 400 samples. These show the effectiveness of the randomized design for pessimism implicit in Algorithm 1. 6.2 NEURAL CONTEXTUAL BANDITS Next, we compare the performance and computational efficiency of various algorithms against VIPeR when neural networks are employed. For simplicity, we consider contextual bandits, a special case of MDPs with horizon H = 1. Following Zhou et al. (2020); Nguyen-Tang et al. (2022a), we use the bandit problems specified by the following reward functions: (a) r(s, a) = cos(3sT θa); (b) r(s, a) = exp(−10(sT θa)2), where s and θa are generated uniformly at random from the unit sphere Sd−1 with d = 16 and A = 10; (c) MNIST, where r(s, a) = 1 if a is the true label of the input image s and r(s, a) = 0, otherwise. To predict the value of different actions from the same state s using neural networks, we transform a state s ∈ Rd into dA-dimensional vectors s(1) = (s, 0, . . . , 0), s(2) = (0, s, 0, . . . , 0), . . . , s(A) = (0, . . . , 0, s) and train the network to map s(a) to r(s, a) given a pair of data (s, a). For Neural-VIPeR, NeuralGreedy, NeuraLCB, and NeuraLCB (Diag), we use the same neural network architecture with two hidden layers of width m = 64 and train the network with Adam optimizer (Kingma & Ba, 2015). Due to page limitations, we defer other experimental details and hyperparameter setting to Section A.2. We report the 5Our code is available here: https://github.com/thanhnguyentang/neural-offline-rl. sub-optimality averaged over 5 runs in Figure 3. We see that algorithms that use a linear model, i.e., LinLCB and Lin-VIPeR significantly underperform neural-based algorithms, i.e., NeuralGreedy, NeuraLCB, NeuraLCB (Diag) and Neural-VIPeR, attesting to the crucial role neural representations play in RL for non-linear problems. It is also interesting to observe from the experimental results that NeuraLCB does not always outperform its diagonal approximation, NeuraLCB (Diag) (e.g., in Figure 3(b)), putting a question mark on the empirical effectiveness of NTK-based uncertainty for offline RL. Finally, Neural-VIPeR outperforms all algorithms in the tested benchmarks, suggesting the effectiveness of our randomized design with neural function approximation. Figure 4 shows the average runtime for action selection of neural-based algorithms NeuraLCB, NeuraLCB (Diag), and Neural-VIPeR. 
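For concreteness, the synthetic reward functions and the disjoint action encoding described above for the contextual bandit experiments can be generated as follows. This is an illustrative sketch with our own variable names and random seed, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, A = 16, 10

# Per-action parameters theta_a drawn uniformly at random from the unit sphere.
theta = rng.normal(size=(A, d))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

def sample_context():
    s = rng.normal(size=d)
    return s / np.linalg.norm(s)                   # context on the unit sphere

def reward_cos(s, a):                              # task (a): r(s, a) = cos(3 s^T theta_a)
    return np.cos(3.0 * s @ theta[a])

def reward_exp(s, a):                              # task (b): r(s, a) = exp(-10 (s^T theta_a)^2)
    return np.exp(-10.0 * (s @ theta[a]) ** 2)

def disjoint_encoding(s, a):
    """Map (s, a) to s^(a) = (0, ..., s, ..., 0) in R^{dA} so one network scores every action."""
    x = np.zeros(d * A)
    x[a * d:(a + 1) * d] = s
    return x

s = sample_context()
X = np.stack([disjoint_encoding(s, a) for a in range(A)])   # network inputs for all A actions
print(X.shape, reward_cos(s, 3), reward_exp(s, 3))
```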
We observe that algorithms that use explicit confidence regions, i.e., NeuraLCB and NeuraLCB (Diag), take significant time selecting an action when either the number of offline samples K or the network width m increases. This is perhaps not surprising because NeuraLCB and NeuraLCB (Diag) need to compute the inverse of a large covariance matrix to sample an action and maintain the confidence region for each action per state. The diagonal approximation significantly reduces the runtime of NeuraLCB, but the runtime still scales with the number of samples and the network width. In comparison, the runtime for action selection for Neural-VIPeR is constant. Since NeuraLCB, NeuraLCB (Diag), and Neural-VIPeR use the same neural network architecture, the runtime spent training one model is similar. The only difference is that Neural-VIPeR trains M models while NeuraLCB and NeuraLCB (Diag) train a single model. However, as the perturbed data in Algorithm 1 are independent, trainingM models in Neural-VIPeR is embarrassingly parallelizable. Finally, in Figure 5, we study the effect of the ensemble size on the performance of Neural-VIPeR. We use different values of M ∈ {1, 2, 5, 10, 20, 30, 50, 100, 200} for sample size K = 1000. We find that the sub-optimality of Neural-VIPeR decreases graciously as M increases. Indeed, the grid search from the previous experiment in Figure 3 also yields M = 10 and M = 20 from the search space M ∈ {1, 10, 20} as the best result. This suggests that the ensemble size can also play an important role as a hyperparameter that can determine the amount of pessimism needed in a practical setting. 7 CONCLUSION We propose a novel algorithmic approach for offline RL that involves randomly perturbing value functions and pessimism. Our algorithm eliminates the computational overhead of explicitly maintaining a valid confidence region and computing the inverse of a large covariance matrix for pessimism. We bound the suboptimality of the proposed algorithm as Õ ( κH5/2d̃/ √ K ) . We support our theoretical claims of computational efficiency and the effectiveness of our algorithm with extensive experiments. ACKNOWLEDGEMENTS This research was supported, in part, by DARPA GARD award HR00112020004, NSF CAREER award IIS-1943251, an award from the Institute of Assured Autonomy, and Spring 2022 workshop on “Learning and Games” at the Simons Institute for the Theory of Computing. A EXPERIMENT DETAILS A.1 LINEAR MDPS In this subsection, we provide further details to the experiment setup used in Subsection 6.1. We describe in detail a variant of the hard instance of linear MDPs (Yin et al., 2022) used in our experiment. The linear MDP has S = {0, 1},A = {0, 1, · · · , 99}, and the feature dimension d = 10. Each action a ∈ [99] = {1, . . . , 99} is represented by its binary encoding vector ua ∈ R8 with entry being either −1 or 1. The feature mapping ϕ(s, a) is given by ϕ(s, a) = [uTa , δ(s, a), 1− δ(s, a)]T ∈ R10, where δ(s, a) = 1 if (s, a) = (0, 0) and δ(s, a) = 0 otherwise. The true measure νh(s) is given by νh(s) = [0, · · · , 0, (1 − s) ⊕ αh, s ⊕ αh] where {αh}h∈[H] ∈ {0, 1}H are generated uniformly at random and ⊕ is the XOR operator. We define θh = [0, · · · , 0, r, 1 − r]T ∈ R10 where r = 0.99. Recall that the transition follows Ph(s′|s, a) = ⟨ϕ(s, a), νh(s′)⟩ and the mean reward rh(s, a) = ⟨ϕ(s, a), θh⟩. We generated a priori K ∈ {1, . . . 
, 1000} trajectories using the behavior policy µ, where for any h ∈ [H] we set µh(0|0) = p, µh(1|0) = 1 − p, µh(a|0) = 0,∀a > 1;µh(0|1) = p, µh(a|1) = (1− p)/99,∀a > 0, where we set p = 0.6. We run over K ∈ {1, . . . , 1000} and H ∈ {20, 30, 50, 80}. We set λ = 0.01 for all algorithms. For Lin-VIPeR, we grid searched σh = σ ∈ {0.0, 0.1, 0.5, 1.0, 2.0} and M ∈ {1, 2, 10, 20}. For LinLCB, we grid searched its uncertainty multiplier β ∈ {0.1, 0.5, 1, 2}. The sub-optimality metric is used to compare algorithms. For each H ∈ {20, 30, 50, 80}, each algorithm was executed for 30 times and the averaged results (with std) are reported in Figure 2. A.2 NEURAL CONTEXTUAL BANDITS In this subsection, we provide in detail the experimental and hyperparameter setup in our experiment in Subsection 6.2. For Neural-VIPeR, NeuralGreedy, NeuraLCB and NeuraLCB (Diag), we use the same neural network architecture with two hidden layers whose width m = 64, train the network with Adam optimizer (Kingma & Ba, 2015) with learning rate being grid-searched over {0.0001, 0.001, 0.01} and batch size of 64. For NeuraLCB, NeuraLCB (Diag), and LinLCB, we grid-searched β over {0.001, 0.01, 0.1, 1, 5, 10}. For Neural-VIPeR and Lin-VIPeR, we gridsearched σh = σ over {0.001, 0.01, 0.1, 1, 5, 10} andM over {1, 10, 20}. We did not run NeuraLCB in MNIST as the inverse of a full covariance matrix in this case is extremely expensive. We fixed the regularization parameter λ = 0.01 for all algorithms. Offline data is generated by the (1−ϵ)-optimal policy which generates non-optimal actions with probability ϵ and optimal actions with probability 1 − ϵ. We set ϵ = 0.5 in our experiments. To estimate the expected sub-optimality, we randomly obtain 1, 000 novel samples (i.e. not used in training) to compute the average sub-optimality and keep these same samples for all algorithms. A.3 EXPERIMENT IN D4RL BENCHMARK In this subsection, we evaluate the effectiveness of the reward perturbing design of VIPeR in the Gym domain in the D4RL benchmark (Fu et al., 2020). The Gym domain has three environments (HalfCheetah, Hopper, and Walker2d) with five datasets (random, medium, medium-replay, medium-expert, and expert), making up 15 different settings. Design. To adapt the design of VIPeR to continuous control, we use the actor-critic framework. Specifically, we have M critics {Qθi}i∈[M ] and one actor πϕ, where {θi}i∈[M ] and ϕ are the learnable parameters for the critics and actor, respectively. Note that in the continuous domain, we consider discounted MDP with discount factor γ, instead of finite-time episode MDP as we initially considered in our setting in the main paper. In the presence of the actor πϕ, there are two modifications to Algorithm 1. The first modification is that when training the critics {Qiθ}i∈[M ], we augment the training loss in Algorithm 2 with a new penalization term. Specifically, the critic loss for Qθi on a training sample τ := (s, a, r, s′) (sampled from the offline data D) is L(θi; τ) = (Qθi(s, a)− (r + γQθ̄i(s′) + ξ)) 2 + β Ea′∼πϕ(·|s) [ (Qθi(s, a ′)− Q̄(s, a′))2 ]︸ ︷︷ ︸ penalization term R(θi;s,ϕ) , (1) where θ̄i has the same value of the current θi but is kept fixed, Q̄ = 1M ∑M i=1Qθi and ξ ∼ N (0, σ2) is Gaussian noise, and β is a penalization parameter (note that β here is totally different from the β in Theorem 1). The penalization term R(θi; s, ϕ) discourages overestimation in the value function estimate Qθi for out-of-distribution (OOD) actions a′ ∼ πϕ(·|s). 
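In code, the per-sample critic loss of Eq. (1) takes roughly the following form. This PyTorch sketch assumes an ensemble `critics` of M Q-networks, a frozen copy `target_critics` playing the role of θ̄_i, and an `actor` returning (sampled) actions for a batch of states; these names, the batching convention, the default constants, and the choice to stop gradients through Q̄ and through the actor actions are our assumptions rather than details specified above.

```python
import torch

def critic_loss(critics, target_critics, actor, batch, i,
                gamma=0.99, sigma=0.01, beta=0.5):
    """Perturbed-reward TD loss plus the OOD-action penalization of Eq. (1) for critic i.
    `critics` / `target_critics` are lists of M Q-networks mapping (s, a) -> (B, 1)."""
    s, a, r, s_next = batch                        # shapes: (B, ds), (B, da), (B,), (B, ds)
    with torch.no_grad():
        a_next = actor(s_next)                     # a' for the bootstrapped target
        a_pi = actor(s)                            # actor-proposed (possibly OOD) actions
        xi = sigma * torch.randn_like(r)           # Gaussian reward perturbation
        target = r + gamma * target_critics[i](s_next, a_next).squeeze(-1) + xi
        # ensemble mean Q-bar at the actor-proposed actions, gradients stopped (our choice)
        q_bar = torch.stack([q(s, a_pi) for q in critics]).squeeze(-1).mean(dim=0)
    td_term = (critics[i](s, a).squeeze(-1) - target).pow(2).mean()
    ood_term = (critics[i](s, a_pi).squeeze(-1) - q_bar).pow(2).mean()
    return td_term + beta * ood_term
```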
Our design of R(θi; s, ϕ) is initially inspired by the OOD penalization in Bai et al. (2022) that creates a pessimistic pseudo target for the values at OOD actions. Note that we do not need any penalization for OOD actions in our experiment for contextual bandits in Section 6.2. This is because in the contextual bandit setting in Section 6.2 the action space is finite and not large, thus the offline data often sufficiently cover all good actions. In the continuous domain such as the Gym domain of D4RL, however, it is almost certain that there are actions that are not covered by the offline data since the action space is continuous. We also note that the inclusion of the OOD action penalization term R(θi; s, ϕ) in this experiment does not contradict our guarantee in Theorem 1 since in the theorem we consider finite action space while in this experiment we consider continuous action space. We argue that the inclusion of some regularization for OOD actions (e.g., R(θi; s, ϕ)) is necessary for the continuous domain. 6 The second modification to Algorithm 1 for the continuous domain is the actor training, which is the implementation of policy extraction in line 10 of Algorithm 1. Specifically, to train the actor πϕ given the ensemble of critics {Qiθ}i∈[M ], we use soft actor update in Haarnoja et al. (2018) via max ϕ { Es∼D,a′∼πϕ(·|s) [ min i∈[M ] Qθi(s, a ′)− log πϕ(a′|s) ]} , (2) which is trained using gradient ascent in practice. Note that in the discrete action domain, we do not need such actor training as we can efficiently extract the greedy policy with respect to the estimated action-value functions when the action space is finite. Also note that we do not use data splitting and value truncation as in the original design of Algorithm 1. Hyperparameters. For the hyper-parameters of our training, we set M = 10 and the noise variance σ = 0.01. For β, we decrease it from 0.5 to 0.2 by linear decay for the first 50K steps and exponential decay for the remaining steps. For the other hyperparameters of actor-critic training, we fix them the same as in Bai et al. (2022). Specifically, the Q-network is the fully connected neural network with three hidden layers all of which has 256 neurons. The learning rate for the actor and the critic are 10−4 and 3× 10−4, respectively. The optimizer is Adam. Results. We compare VIPeR with several state-of-the-art algorithms, including (i) BEAR (Kumar et al., 2019) that use MMD distance to constraint policy to the offline data, (ii) UWAC (Wu et al., 2021) that improves BEAR using dropout uncertainty, (iii) CQL (Kumar et al., 2020) that minimizes Q-values of OOD actions, (iv) MOPO (Yu et al., 2020) that uses model-based uncertainty via ensemble dynamics, (v) TD3-BC (Fujimoto & Gu, 2021) that uses adaptive behavior cloning, and (vi) PBRL (Bai et al., 2022) that use uncertainty quantification via disagreement of bootstrapped Q-functions. We follow the evaluation protocol in Bai et al. (2022). We run our algorithm for five seeds and report the average final evaluation scores with standard deviation. We report the scores of our method and the baselines in Table 2. We can see that our method has a strong advantage of good performance (highest scores) in 11 out of 15 settings, and has good stability (small std) in all settings. Overall, we also have the strongest average scores aggregated over all settings. B EXTENDED DISCUSSION Here we provide extended discussion of our result. 
B.1 COMPARISON WITH OTHER WORKS AND DISCUSSION We provide further discussion regarding comparison with other works in the literature. 6In our experiment, we also observe that without this penalization term, the method struggles to learn any good policy. However, using only the penalization term without the first term in Eq. (1), we observe that the method cannot learn either. Comparing to Jin et al. (2021). When the underlying MDP reduces into a linear MDP, if we use the linear model as the plug-in parametric model in Algorithm 1, our bound reduces into Õ ( κH5/2dlin√ K ) which improves the bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) by a factor of √ dlin and worsen by a factor of √ H due to the data splitting. Thus, our bound is more favorable in the linear MDPs with high-dimensional features. Moreover, our bound is guaranteed in more practical scenarios where the offline data can have been adaptively generated and is not required to uniformly cover the state-action space. The explicit bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) is obtained under the assumption that the offline data have uniform coverage and are generated independently on the episode basis. Comparing to Yang et al. (2020). Though Yang et al. (2020) work in the online regime, it shares some part of the literature with our work in function approximation for RL. Besides different learning regimes (offline versus online), we offer three key distinctions which can potentially be used in the online regime as well: (i) perturbed rewards, (ii) optimization, and (iii) data split. Regarding (i), our perturbed reward design can be applied to online RL with function approximation to obtain a provably efficient online RL that is computationally efficient and thus remove the need of maintaining explicit confidence regions and performing the inverse of a large covariance matrix. Regarding (ii), we incorporate the optimization analysis into our algorithm which makes our algorithm and analysis more practical. We also note that unlike (Yang et al., 2020), we do not make any assumption on the eigenvalue decay rate of the empirical NTK kernel as the empirical NTK kernel is data-dependent. Regarding (iii), our data split technique completely removes the factor√ logN∞(H, 1/K,B) in the bound at the expense of increasing the bound by a factor of √ H . In complex models, such log covering number can be excessively larger than the horizon H , making the algorithm too optimistic in the online regime (optimistic in the offline regime, respectively). For example, the target function class is RKHS with a γ-polynomial decay, the log covering number scales as (Yang et al., 2020, Lemma D1),√ logN∞(H, 1/K,B) ≲ K 2 αγ−1 , for some α ∈ (0, 1). In the case of two-layer ReLU NTK, γ = d (Bietti & Mairal, 2019), thus√ logN∞(H, 1/K,B) ≲ K 2 αd−1 which is much larger than √ H when the size of dataset is large. Note that our data-splitting technique is general that can be used in the online regime as well. Comparing to Xu & Liang (2022). Xu & Liang (2022) consider a different setting where pertimestep rewards are not available and only the total reward of the whole trajectory is given. Used with neural function approximation, they obtain Õ(DeffH2/ √ K) where Deff is their effective dimension. Note that Xu & Liang (2022) do not use data splitting and still achieve the same order of Deff as our result with data splitting. 
It at first might appear that our bound is inferior to their bound as we pay the cost of √ H due to data splitting. However, to obtain that bound, they make three critical assumptions: (i) the offline data trajectories are independently and identically distributed (i.i.d.) (see their Assumption 3), (ii) the offline data is uniformly explorative over all dimensions of the feature space (also see their Assumption 3), and (iii) the eigenfunctions of the induced NTK RKHS has finite spectrum (see their Assumption 4). The i.i.d. assumption under the RKHS space with finite dimensions (due to the finite spectrum assumption) and the well-explored dataset is critical in their proof to use a matrix concentration that does not incur an extra factor of √ Deff as it would normally do without these assumptions (see Section E, the proof of their Lemma 2). Note that the celebrated ReLU NTK does not satisfy the finite spectrum assumption (Bietti & Mairal, 2019). Moreover, we do not make any of these three assumptions above for our bound to hold. That suggests that our bound is much more general. In addition, we do not need to compute any confidence regions nor perform the inverse of a large covariance matrix. Comparing to Yin et al. (2023). During the submission of our work, a concurrent work of Yin et al. (2023) appeared online. Yin et al. (2023) study provably efficient offline RL with a general parametric function approximation that unifies the guarantees of offline RL in linear and generalized linear MDPs, and beyond with potential applications to other classes of functions in practice. We remark that the result in Yin et al. (2023) is orthogonal/complementary to our paper since they consider the parametric class with third-time differentiability which cannot apply to neural networks (not necessarily overparameterized) with non-smooth activation such as ReLU. In addition, they do not consider reward perturbing in their algorithmic design or optimization errors in their analysis. B.2 WORSE-CASE RATE OF EFFECTIVE DIMENSION In the main paper, we prove an Õ ( κH5/2d̃√ K ) sub-optimality bound which depends on the notion of effective dimension defined in Definition 2. Here we give a worst-case rate of the effective dimension d̃ for the two-layer ReLU NTK. We first briefly review the background of RKHS. LetH be an RKHS defined on X ⊆ Rd with kernel function ρ : X ×X → R. Let ⟨·, ·⟩H : H×H → R and ∥ · ∥H : H → R be the inner product and the RKSH norm on H. By the reproducing kernel property of H, there exists a feature mapping ϕ : X → H such that f(x) = ⟨f, ϕ(x)⟩H and ρ(x, x′) = ⟨ϕ(x), ϕ(x′)⟩H. We assume that the kernel function ρ is uniformly bounded, i.e. supx∈X ρ(x, x) <∞. Let L2(X ) be the space of square-integral functions on X with respect to the Lebesgue measure and let ⟨·, ·⟩L2 be the inner product on L2(X ). The kernel function ρ induces an integral operator Tρ : L2(X )→ L2(X ) defined as Tρf(x) = ∫ X ρ(x, x′)f(x′)dx′. By Mercer’s theorem (Steinwart & Christmann, 2008), Tρ has countable and positive eigenvalues {λi}i≥1 and eigenfunctions {νi}i≥1. The kernel function andH can be expressed as ρ(x, x′) = ∞∑ i=1 λiνi(x)νi(x ′), H = {f ∈ L2(X ) : ∞∑ i=1 ⟨f, νi⟩L2 λi <∞}. Now consider the NTK defined in Definition 1: Kntk(x, x ′) = Ew∼N (0,Id/d)⟨xσ ′(wTx), x′σ′(wTx′)⟩. It follows from (Bietti & Mairal, 2019, Proposition 1) that λi ≍ i−d. Thus, by (Srinivas et al., 2010, Theorem 5), the data-dependent effective dimension ofHntk can be bounded in the worst case by d̃ ≲ K ′(d+1)/(2d). 
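As a rough numerical illustration of this rate, one can plug the decay λ_i ≍ i^{-d} into Definition 2 under the crude simplification that the Gram matrix eigenvalues scale like K'·i^{-d} (this is only a heuristic, not an exact relation) and compare the result with the worst-case rate K'^{(d+1)/(2d)}:

```python
import numpy as np

def effective_dim_from_decay(k_prime, d, lam=1.0):
    """Heuristic evaluation of Definition 2 with Gram eigenvalues approximated by K' * i^{-d}."""
    i = np.arange(1, k_prime + 1)
    mu = k_prime * i ** (-float(d))
    return np.sum(np.log1p(mu / lam)) / np.log1p(k_prime / lam)

d = 16
for k_prime in (100, 1_000, 10_000):
    print(k_prime,
          round(effective_dim_from_decay(k_prime, d), 2),
          round(k_prime ** ((d + 1) / (2 * d)), 2))   # worst-case rate K'^{(d+1)/(2d)}
```

Under this crude approximation the value stays close to 1 for d = 16 while the worst-case rate grows polynomially in K', which is consistent with the remark that follows.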
We remark that this is the worst-case bound that considers uniformly over all possible realizable of training data. The effective dimension d̃ is on the other hand data-dependent, i.e. its value depends on the specific training data at hand thus d̃ can be actually much smaller than the worst-case rate. C PROOF OF THEOREM 1 AND THEOREM 2 In this section, we provide both the outline and detailed proofs of Theorem 1 and Theorem 2. C.1 TECHNICAL REVIEW AND PROOF OVERVIEW Technical Review. In what follows, we provide more detailed discussion when placing our technical contribution in the context of the related literature. Our technical result starts with the value difference lemma in Jin et al. (2021) to connect bounding the suboptimality of an offline algorithm to controlling the uncertainty quantification in the value estimates. Thus, our key technical contribution is to provably quantify the uncertainty of the perturbed value function estimates which were obtained via reward perturbing and gradient descent. This problem setting is largely different from the current analysis of overparameterized neural networks for supervised learning which does not require uncertainty quantification. Our work is not the first to consider uncertainty quantification with overparameterized neural networks, since it has been studied in Zhou et al. (2020); Nguyen-Tang et al. (2022a); Jia et al. (2022). However, there are significant technical differences between our work and these works. The work in Zhou et al. (2020); Nguyen-Tang et al. (2022a) considers contextual bandits with overparameterized neural networks trained by (S)GD and quantifies the uncertainty of the value function with explicit empirical covariance matrices. We consider general MDP and use reward perturbing to implicitly obtain uncertainty, thus requiring different proof techniques. Jia et al. (2022) is more related to our work since they consider reward perturbing with overparameterized neural networks (but they consider contextual bandits). However, our reward perturbing strategy is largely different from that in Jia et al. (2022). Specifically, Jia et al. (2022) perturbs each reward only once while we perturb each reward multiple times, where the number of perturbing times is crucial in our work and needs to be controlled carefully. We show in Theorem 1 that our reward perturbing strategy is effective in enforcing sufficient pessimism for offline learning in general MDP and the empirical results in Figure 2, Figure 3, Figure 5, and Table 2 are strongly consistent with our theoretical suggestion. Thus, our technical proofs are largely different from those of Jia et al. (2022). Finally, the idea of perturbing rewards multiple times in our algorithm is inspired by Ishfaq et al. (2021). However, Ishfaq et al. (2021) consider reward perturbing for obtaining optimism in online RL. While perturbing rewards are intuitive to obtain optimism for online RL, for offline RL, under distributional shift, it can be paradoxically difficult to properly obtain pessimism with randomization and ensemble (Ghasemipour et al., 2022), especially with neural function approximation. We show affirmatively in our work that simply taking the minimum of the randomized value functions after perturbing rewards multiple times is sufficient to obtain provable pessimism for offline RL. In addition, Ishfaq et al. (2021) do not consider neural network function approximation and optimization. 
Controlling the uncertainty of randomization (via reward perturbing) under neural networks with extra optimization errors induced by gradient descent sets our technical proof significantly apart from that of Ishfaq et al. (2021). Besides all these differences, in this work, we propose an intricately-designed data splitting technique that avoids the uniform convergence argument and could be of independent interest for studying sample-efficient RL with complex function approximation. Proof Overview. The key steps for proving Theorem 1 and Theorem 2 are highlighted in Subsection C.2 and Subsection C.3, respectively. Here, we discuss an overview of our proof strategy. The key technical challenge in our proof is to quantify the uncertainty of the perturbed value function estimates. To deal with this, we carefully control both the near-linearity of neural networks in the NTK regime and the estimation error induced by reward perturbing. A key result that we use to control the linear approximation to the value function estimates is Lemma D.3. The technical challenge in establishing Lemma D.3 is how to carefully control and propagate the optimization error incurred by gradient descent. The complete proof of Lemma D.3 is provided in Section E.3. The implicit uncertainty quantifier induced by the reward perturbing is established in Lemma D.1 and Lemma D.2, where we carefully design a series of intricate auxiliary loss functions and establish the anti-concentrability of the perturbed value function estimates. This requires a careful design of the variance of the noises injected into the rewards. To deal with removing a potentially large covering number when we quantify the implicit uncertainty, we propose our data splitting technique which is validated in the proof of Lemma D.1 in Section E.1. Moreover, establishing Lemma D.1 in the overparameterization regime induces an additional challenge since a standard analysis would result in a vacuous bound that scales with the overparameterization. We avoid this issue by carefully incorporating the use of the effective dimension in Lemma D.1. C.2 PROOF OF THEOREM 1 In this subsection, we present the proof of Theorem 1. We first decompose the suboptimality SubOpt(π̃; s) and present the main lemmas to bound the evaluation error and the summation of the implicit confidence terms, respectively. The detailed proof of these lemmas are deferred to Section D. For proof convenience, we first provide the key parameters that we use consistently throughout our proofs in Table 3. We define the model evaluation error at any (x, h) ∈ X × [H] as errh(x) = (BhṼh+1 − Q̃h)(x), (3) where Bh is the Bellman operator defined in Section 3, and Ṽh and Q̃h are the estimated (action-) state value functions returned by Algorithm 1. Using the standard suboptimality decomposition (Jin et al., 2021, Lemma 3.1), for any s1 ∈ S, SubOpt(π̃; s1) = − H∑ h=1 Eπ̃ [errh(sh, ah)] + H∑ h=1 Eπ∗ [errh(sh, ah)] + H∑ h=1 Eπ∗ [ ⟨Q̃h(sh, ·), π∗h(·|sh)− π̃h(·|sh)⟩A ] ︸ ︷︷ ︸ ≤0 , where the third term is non-positive as π̃h is greedy with respect to Q̃h. Thus, for any s1 ∈ S, we have SubOpt(π̃; s1) ≤ − H∑ h=1 Eπ̃ [errh(sh, ah)] + H∑ h=1 Eπ∗ [errh(sh, ah)] . (4) In the following main lemma, we bound the evaluation error errh(s, a). In the rest of the proof, we consider an additional parameter R and fix any δ ∈ (0, 1). Lemma C.1. 
Let m = Ω ( d3/2R−1 log3/2( √ m/R) ) R = O ( m1/2 log−3m ) , m = Ω ( K ′10(H + ψ)2 log(3K ′H/δ) ) λ > 1 K ′C2g ≥ λR ≥ max{4B̃1, 4B̃2, 2 √ 2λ−1K ′(H + ψ + γh,1)2 + 4γ2h,2}, η ≤ (λ+K ′C2g )−1, ψ > ι, σh ≥ β,∀h ∈ [H], (5) where B̃1, B̃2, γh,1, γh,2, and ι are defined in Table 3,Cg is a absolute constant given in Lemma G.1, and R is an additional parameter. Let M = log HSAδ / log 1 1−Φ(−1) where Φ(·) is the cumulative distribution function of the standard normal distribution. With probability at least 1−MHm−2−2δ, for any (x, h) ∈ X × [H], we have −ι ≤ errh(x) ≤ σh(1 + √ 2 log(MSAH/δ)) · ∥g(x;W0)∥Λ−1h + ι where Λh := λImd + ∑ k∈Ih g(x k h;W0)g(x k h;W0) T ∈ Rmd×md. Now we can prove Theorem 1. Proof of Theorem 1. Theorem 1 can directly follow from substituting Lemma C.1 into Equation (4). We now only need to simplify the conditions in Equation (5). To satisfy Equation (5), it suffices to set λ = 1 + HK ψ = 1 > ι σh = β 8CgR 4/3m−1/6 √ logm ≤ 1 λ−1K ′H2 ≥ 2 B̃1 ≤ √ 2K ′(H + ψ + γh,1)2 + λγ2h,2 + 1 √ K ′CgR 1/3m−1/6 √ logm ≤ 1 B̃2 ≤ K ′CgR4/3m−1/6 √ logm ≤ 1. Combining with Equation 5, we have λ = 1 + HK ψ = 1 > ι σh = β η ≲ (λ+K ′)−1 m ≳ max { R8 log3m,K ′10(H + 1)2 log(3K ′H/δ), d3/2R−1 log3/2( √ m/R),K ′6R8 log3m } m ≳ [2K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) + 1]3K ′3R log3m 4 √ K ′(H + 1 + β √ log(K ′M/δ)) + 4β √ d log(dK ′M/δ) ≤ R ≲ K ′. (6) Note that with the above choice of λ = 1 + HK , we have K ′ log λ = log(1 + 1 K ′ )K ′ ≤ log 3 < 2. We further set that m ≳ B2K ′2d log(3H/δ), we have β = BK ′√ m (2 √ d+ √ 2 log(3H/ δ))λ−1/2Cg + λ 1/2B + (H + ψ) [√ d̃h log(1 + K ′ λ ) +K ′ log λ+ 2 log(3H/δ) ] ≤ 1 + λ1/2B + (H + 1) [√ d̃h log(1 + K ′ λ ) + 2 + 2 log(3H/δ) ] = o( √ K ′). Thus, 4 √ K ′(H + 1 + β √ log(K ′M/δ)) + 4β √ d log(dK ′M/δ) << K ′ for K ′ large enough. Therefore, there exists R that satisfies Equation (6). We now only need to verify ι < 1. We have ι0 = Bm −1/2(2 √ d+ √ 2 log(3H/δ)) ≤ 1/3, ι1 = CgR 4/3m−1/6 √ logm+ Cg ( B̃1 + B̃2 + λ −1(1− ηλ)J ( K ′(H + 1 + γh,1) 2 + λγ2h,2 )) ≲ 1/3 if (1− ηλ)J [ K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) ] ≲ 1. (7) Note that (1− ηλ)J ≤ e−ηλJ , K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) ≲ K ′H2λβ2d log(dK ′M/δ). Thus, Equation (7) is satisfied if J ≳ ηλ log ( K ′H2λβ2d log(dK ′M/δ) ) . Finally note that ι2 ≤ ι1. Rearranging the derived conditions here gives the complete parameter conditions in Theorem 1. Specifically, the polynomial form of m is m ≳ max{R8 log3m,K ′10(H + 1)2 log(3K ′H/δ), d3/2R−1 log3/2( √ m/R),K ′6R8 log3m, B2K ′2d log(3H/δ)}, m ≳ [2K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) + 1]3K ′3R log3m. C.3 PROOF OF THEOREM 2 In this subsection, we give a detailed proof of Theorem 2. We first present intermediate lemmas whose proofs are deferred to Section D. For any h ∈ [H] and k ∈ Ih = [(H − h)K ′ +1, . . . , (H − h+ 1)K ′], we define the filtration Fkh = σ ( {(sth′ , ath′ , rth′)} t≤k h′∈[H] ∪ {(s k+1 h′ , a k+1 h′ , r k+1 h′ )}h′≤h−1 ∪ {(s k+1 h , a k+1 h )} ) . Let Λkh := λI + ∑ t∈Ik,t≤k g(xth;W0)g(x t h;W0) T , β̃ := β(1 + 2 √ log(SAH/δ)). In the following lemma, we connect the expected sub-optimality of π̃ to the summation of the uncertainty quantifier at empirical data. Lemma C.2. Suppose that the conditions in Theorem 1 all hold. With probability at least 1 − MHm−2 − 3δ, SubOpt(π̃) ≤ 2β̃ K ′ H∑ h=1 ∑ k∈Ih Eπ∗ [ ∥g(xh;W0)∥(Λkh)−1 ∣∣∣∣Fk−1h , sk1]+ 163K ′H log(log2(K ′H)/δ) + 2 K ′ + 2ι, Lemma C.3. 
Under Assumption 5.2, for any h ∈ [H] and fixed W0, with probability at least 1− δ,∑ k∈Ih Eπ∗ [ ∥g(xh;W0)∥(Λkh)−1 ∣∣∣∣Fk−1, sk1] ≤ ∑ k∈Ih κ∥g(xh;W0)∥(Λkh)−1 + κ √ K ′ log(1/δ) λ . Lemma C.4. If λ ≥ C2g and m = Ω(K ′4 log(K ′H/δ)), then with probability at least 1− δ, for any h ∈ [H], we have ∑ k∈Ih ∥g(xh;W0)∥2(Λkh)−1 ≤ 2d̃h log(1 +K ′/λ) + 1. where d̃h is the effective dimension defined in Definition 2. Proof of Theorem 2. Theorem 2 directly follows from Lemma C.2-C.3-C.4 using the union bound. D PROOF OF LEMMA C.1 In this section, we provide the proof for Lemma C.1. We set up preparation for all the results in the rest of the paper and provide intermediate lemmas that we use to prove Lemma C.1. The detailed proofs of these intermediate lemmas are deferred to Section E. D.1 PREPARATION To prepare for the lemmas and proofs in the rest of the paper, we define the following quantities. Recall that we use abbreviation x = (s, a) ∈ X ⊂ Sd−1 and xkh = (skh, akh) ∈ X ⊂ Sd−1. For any h ∈ [H] and i ∈ [M ], we define the perturbed loss function L̃ih(W ) := 1 2 ∑ k∈Ih ( f(xkh;W )− ỹ i,k h ) )2 + λ 2 ∥W + ζih −W0∥22, (8) where ỹi,kh := r k h + Ṽh+1(s k h+1) + ξ i,k h , Ṽh+1 is computed by Algorithm 1 at Line 10 for timestep h+1, and {ξi,kh } and ζih are the Gaussian noises obtained at Line 5 of Algorithm 1. Here the subscript h and the superscript i in L̃ih(W ) emphasize the dependence on the ensemble sample i and timestep h. The gradient descent update rule of L̃ih(W ) is W̃ i,(j+1) h = W̃ i,(j) h − η∇L̃ i h(W ), (9) where W̃ i,(0)h =W0 is the initialization parameters. Note that W̃ ih = GradientDescent(λ, η, J, D̃ih, ζih,W0) = W̃ i,(J) h , where W̃ ih is returned by Line 7 of Algorithm 1. We consider a non-perturbed auxiliary loss function Lh(W ) := 1 2 ∑ k∈Ih ( f(xkh;W )− ykh) )2 + λ 2 ∥W −W0∥22, (10) where ykh := r k h + Ṽh+1(s k h+1). Note that Lh(W ) is simply a non-perturbed version of L̃ih(W ) where we drop all the noises {ξ i,k h } and {ζih}. We consider the gradient update rule for Lh(W ) as follows Ŵ (j+1) h = Ŵ (j) h − η∇Lh(W ), (11) where Ŵ (0)h =W0 is the initialization parameters. To correspond with W̃ i h, we denote Ŵh := Ŵ (J) h . (12) We also define the auxiliary loss functions for both non-perturbed and perturbed data in the linear model with feature g(·;W0) as follows L̃i,linh (W ) := 1 2 ∑ k∈Ih ( ⟨g(xkh;W0),W ⟩ − ỹ i,k h )2 + λ 2 ∥W + ζih −W0∥22, (13) Llinh (W ) := 1 2 ∑ k∈Ih ( ⟨g(xkh;W0),W ⟩ − ykh )2 + λ 2 ∥W −W0∥22. (14) We consider the auxiliary gradient updates for L̃i,linh (W ) as W̃ i,lin,(j+1) h = W̃ i,lin,(j) h − η∇L̃ i,lin h (W ), (15) Ŵ lin,(j+1) h = Ŵ lin,(j) h − η∇L̃ lin h (W ), (16) where W̃ i,lin,(0)h = Ŵ i,lin,(0) h = W0 for all i, h. Finally, we define the least-square solutions to the auxili
1. What is the focus of the paper regarding reinforcement learning?
2. What are the strengths of the proposed algorithm, particularly in terms of its practicality and theoretical analysis?
3. What are the weaknesses of the paper, especially regarding the presentation of the theorems and their limitations?
4. Do you have any concerns about the algorithm's ability to handle different qualities of offline data?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper looks at the problem of reinforcement learning from offline data. The authors introduce PERVI, which uses "randomized value functions" to generate an approximate posterior distribution over value functions, and then acts pessimistically with respect to those estimates for safety. The authors support their new algorithm through an analysis in tabular MDPs, as well as more empirical evaluation with neural network function approximation.
Strengths And Weaknesses
There are several things to like about this paper:
- The problem of RL and decision making with offline data is an important one for the community.
- The PERVI algorithm passes a "sanity check" intuitively... by this I mean that it's not just an algorithm for a proof... but you have a sense this is something close to something someone would actually want to use.
- The quality of the writing and presentation in the paper overall is very high.
- The paper has a progression of intuition, to hard theoretical guarantees in simple settings, to empirical success in more complex settings.
- I like the way of analysing problems with overparameterized NTK!
- Discussion of related work and the key intuitions for the approach appear to be pretty comprehensive for a short paper, although I am likely missing important pieces.
There are some places where the paper probably could be further strengthened:
- In some sense, many of the results and analyses are sort of incremental. The application of randomized value functions plus pessimism has existed before, but this is a slightly new twist on that as opposed to a "game changing" new perspective.
- Some of the ways the Theorems are presented are really messy... you need to read through lines and lines of bizarre constants/terms to even get to the result!
- Something must be missing (at a high level) from these theorems, since they don't really expose clearly a dependence on the quality of the offline data... an algorithm should be able to learn very differently when the demonstration data is very good, versus when it is very bad... and a good algorithm should be able to kind of work that out and leverage it.
- Why do you use the term "SubOpt" instead of regret?
- Does PERVI (pervy?) raise some of the issues that NeurIPS (nips) had to deal with?
Clarity, Quality, Novelty And Reproducibility
Overall I think the paper scores highly across these attributes.
ICLR
Title VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation Abstract We propose a novel algorithm for offline reinforcement learning called Value Iteration with Perturbed Rewards (VIPeR), which amalgamates the pessimism principle with random perturbations of the value function. Most current offline RL algorithms explicitly construct statistical confidence regions to obtain pessimism via lower confidence bounds (LCB), which cannot easily scale to complex problems where a neural network is used to estimate the value functions. Instead, VIPeR implicitly obtains pessimism by simply perturbing the offline data multiple times with carefully-designed i.i.d. Gaussian noises to learn an ensemble of estimated state-action value functions and acting greedily with respect to the minimum of the ensemble. The estimated state-action values are obtained by fitting a parametric model (e.g., neural networks) to the perturbed datasets using gradient descent. As a result, VIPeR only needs O(1) time complexity for action selection, while LCB-based algorithms require at least Ω(K), where K is the total number of trajectories in the offline data. We also propose a novel data-splitting technique that helps remove a factor involving the log of the covering number in our bound. We prove that VIPeR yields a provable uncertainty quantifier with overparameterized neural networks and enjoys a bound on sub-optimality of Õ(κHd̃/√K), where d̃ is the effective dimension, H is the horizon length and κ measures the distributional shift. We corroborate the statistical and computational efficiency of VIPeR with an empirical evaluation on a wide set of synthetic and real-world datasets. To the best of our knowledge, VIPeR is the first algorithm for offline RL that is provably efficient for general Markov decision processes (MDPs) with neural network function approximation. 1 INTRODUCTION Offline reinforcement learning (offline RL) (Lange et al., 2012; Levine et al., 2020) is a practical paradigm of RL for domains where active exploration is not permissible. Instead, the learner can access a fixed dataset of previous experiences available a priori. Offline RL finds applications in several critical domains where exploration is prohibitively expensive or even implausible, including healthcare (Gottesman et al., 2019; Nie et al., 2021), recommendation systems (Strehl et al., 2010; Thomas et al., 2017), and econometrics (Kitagawa & Tetenov, 2018; Athey & Wager, 2021), among others. The recent surge of interest in this area and renewed research efforts have yielded several important empirical successes (Chen et al., 2021; Wang et al., 2023; 2022; Meng et al., 2021). A key challenge in offline RL is to efficiently exploit the given offline dataset to learn an optimal policy in the absence of any further exploration.
The dominant approaches to offline RL address this challenge by incorporating uncertainty from the offline dataset into decision-making (Buckman et al., 2021; Jin et al., 2021; Xiao et al., 2021; Nguyen-Tang et al., 2022a; Ghasemipour et al., 2022; An et al., 2021; Bai et al., 2022). The main component of these uncertainty-aware approaches to offline RL is the pessimism principle, which constrains the learned policy to the offline data and leads to various lower confidence bound (LCB)-based algorithms. However, these methods are not easily extended or scaled to complex problems where neural function approximation is used to estimate the value functions. In particular, it is costly to explicitly compute the statistical confidence regions of the model or value functions if the class of function approximator is given by overparameterized neural networks. For example, constructing the LCB for neural offline contextual bandits (NguyenTang et al., 2022a) and RL (Xu & Liang, 2022) requires computing the inverse of a large covariance matrix whose size scales with the number of parameters in the neural network. This computational cost hinders the practical application of these provably efficient offline RL algorithms. Therefore, a largely open question is how to design provably computationally efficient algorithms for offline RL with neural network function approximation. In this work, we present a solution based on a computational approach that combines the pessimism principle with randomizing the value function (Osband et al., 2016; Ishfaq et al., 2021). The algorithm is strikingly simple: we randomly perturb the offline rewards several times and act greedily with respect to the minimum of the estimated state-action values. The intuition is that taking the minimum from an ensemble of randomized state-action values can efficiently achieve pessimism with high probability while avoiding explicit computation of statistical confidence regions. We learn the state-action value function by training a neural network using gradient descent (GD). Further, we consider a novel data-splitting technique that helps remove the dependence on the potentially large log covering number in the learning bound. We show that the proposed algorithm yields a provable uncertainty quantifier with overparameterized neural network function approximation and achieves a sub-optimality bound of Õ(κH5/2d̃/ √ K), where K is the total number of episodes in the offline data, d̃ is the effective dimension, H is the horizon length, and κ measures the distributional shift. We achieve computational efficiency since the proposed algorithm only needsO(1) time complexity for action selection, while LCB-based algorithms require O(K2) time complexity. We empirically corroborate the statistical and computational efficiency of our proposed algorithm on a wide set of synthetic and real-world datasets. The experimental results show that the proposed algorithm has a strong advantage in computational efficiency while outperforming LCB-based neural algorithms. To the best of our knowledge, ours is the first offline RL algorithm that is both provably and computationally efficient in general MDPs with neural network function approximation. 2 RELATED WORK Randomized value functions for RL. For online RL, Osband et al. (2016; 2019) were the first to explore randomization of estimates of the value function for exploration. 
Their approach was inspired by posterior sampling for RL (Osband et al., 2013), which samples a value function from a posterior distribution and acts greedily with respect to the sampled function. Concretely, Osband et al. (2016; 2019) generate randomized value functions by injecting Gaussian noise into the training data and fitting a model on the perturbed data. Jia et al. (2022) extended the idea of perturbing rewards to online contextual bandits with neural function approximation. Ishfaq et al. (2021) obtained a provably efficient method for online RL with general function approximation using the perturbed rewards. While randomizing the value function is an intuitive approach to obtaining optimism in online RL, obtaining pessimism from the randomized value functions can be tricky in offline RL. Indeed, Ghasemipour et al. (2022) point out a critical flaw in several popular existing methods for offline RL that update an ensemble of randomized Q-networks toward a shared pessimistic temporal difference target. In this paper, we propose a simple fix to obtain pessimism properly by updating each randomized value function independently and taking the minimum over an ensemble of randomized value functions to form a pessimistic value function. Offline RL with function approximation. Provably efficient offline RL has been studied extensively for linear function approximation. Jin et al. (2021) were the first to show that pessimistic value iteration is provably efficient for offline linear MDPs. Xiong et al. (2023); Yin et al. (2022) improved upon Jin et al. (2021) by leveraging variance reduction. Xie et al. (2021) proposed a Bellman-consistency assumption with general function approximation, which improves the bound of Jin et al. (2021) by a factor of √ d when realized to finite action space and linear MDPs. Wang et al. (2021); Zanette (2021) studied the statistical hardness of offline RL with linear function approximation via exponential lower bound, and Foster et al. (2021) suggested that only realizability and strong uniform data coverage are not sufficient for sample-efficient offline RL. Beyond linearity, some works study offline RL for general function approximation, both parametric and nonparametric. These approaches are either based on Fitted-Q Iteration (FQI) (Munos & Szepesvári, 2008; Le et al., 2019; Chen & Jiang, 2019; Duan et al., 2021a;b; Hu et al., 2021; Nguyen-Tang et al., 2022b) or the pessimism principle (Uehara & Sun, 2022; Nguyen-Tang et al., 2022a; Jin et al., 2021). While pessimism-based algorithms avoid the strong assumptions of data coverage used by FQI-based algorithms, they require an explicit computation of valid confidence regions and possibly the inverse of a large covariance matrix which is computationally prohibitive and does not scale to complex function approximation setting. This limits the applicability of pessimism-based, provably efficient offline RL to practical settings. A very recent work Bai et al. (2022) estimates the uncertainty for constructing LCB via the disagreement of bootstrapped Q-functions. However, the uncertainty quantifier is only guaranteed in linear MDPs and must be computed explicitly. We provide a more detailed discussion of our technical contribution in the context of existing literature in Section C.1. 3 PRELIMINARIES In this section, we provide basic background on offline RL and overparameterized neural networks. 
3.1 EPISODIC TIME-INHOMOGENEOUS MARKOV DECISION PROCESSES (MDPS)
A finite-horizon Markov decision process (MDP) is denoted as the tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathbb{P}, r, H, d_1)$, where $\mathcal{S}$ is an arbitrary state space, $\mathcal{A}$ an arbitrary action space, $H$ the episode length, and $d_1$ the initial state distribution. We assume that $SA := |\mathcal{S}||\mathcal{A}|$ is finite but arbitrarily large, e.g., it can be as large as the total number of atoms in the observable universe $\approx 10^{82}$. Let $\mathcal{P}(\mathcal{S})$ denote the set of probability measures over $\mathcal{S}$. A time-inhomogeneous transition kernel $\mathbb{P} = \{\mathbb{P}_h\}_{h=1}^H$, where $\mathbb{P}_h : \mathcal{S}\times\mathcal{A} \to \mathcal{P}(\mathcal{S})$, maps each state-action pair $(s_h, a_h)$ to a probability distribution $\mathbb{P}_h(\cdot\,|\,s_h, a_h)$. Let $r = \{r_h\}_{h=1}^H$, where $r_h : \mathcal{S}\times\mathcal{A}\to[0,1]$ is the mean reward function at step $h$. A policy $\pi = \{\pi_h\}_{h=1}^H$ assigns each state $s_h\in\mathcal{S}$ to a probability distribution, $\pi_h(\cdot\,|\,s_h)$, over the action space and induces a random trajectory $s_1, a_1, r_1, \ldots, s_H, a_H, r_H, s_{H+1}$, where $s_1\sim d_1$, $a_h\sim\pi_h(\cdot\,|\,s_h)$, and $s_{h+1}\sim\mathbb{P}_h(\cdot\,|\,s_h, a_h)$. We define the state value function $V^\pi_h\in\mathbb{R}^{\mathcal{S}}$ and the action-state value function $Q^\pi_h\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}$ at each timestep $h$ as $Q^\pi_h(s,a) = \mathbb{E}_\pi[\sum_{t=h}^H r_t \mid s_h = s, a_h = a]$ and $V^\pi_h(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}[Q^\pi_h(s,a)]$, where the expectation $\mathbb{E}_\pi$ is taken with respect to the randomness of the trajectory induced by $\pi$. Let $\mathbb{P}_h$ denote the transition operator defined as $(\mathbb{P}_h V)(s,a) := \mathbb{E}_{s'\sim\mathbb{P}_h(\cdot|s,a)}[V(s')]$. For any $V : \mathcal{S}\to\mathbb{R}$, we define the Bellman operator at timestep $h$ as $(\mathbb{B}_h V)(s,a) := r_h(s,a) + (\mathbb{P}_h V)(s,a)$. The Bellman equations are given as follows. For any $(s,a,h)\in\mathcal{S}\times\mathcal{A}\times[H]$,
$Q^\pi_h(s,a) = (\mathbb{B}_h V^\pi_{h+1})(s,a), \quad V^\pi_h(s) = \langle Q^\pi_h(s,\cdot), \pi_h(\cdot|s)\rangle_{\mathcal{A}}, \quad V^\pi_{H+1}(s) = 0,$
where $[H] := \{1,2,\ldots,H\}$ and $\langle\cdot,\cdot\rangle_{\mathcal{A}}$ denotes the summation over all $a\in\mathcal{A}$. We define an optimal policy $\pi^*$ as any policy that yields the optimal value function, i.e., $V^{\pi^*}_h(s) = \sup_\pi V^\pi_h(s)$ for any $(s,h)\in\mathcal{S}\times[H]$. For simplicity, we denote $V^{\pi^*}_h$ and $Q^{\pi^*}_h$ as $V^*_h$ and $Q^*_h$, respectively. The Bellman optimality equation can be written as
$Q^*_h(s,a) = (\mathbb{B}_h V^*_{h+1})(s,a), \quad V^*_h(s) = \max_{a\in\mathcal{A}} Q^*_h(s,a), \quad V^*_{H+1}(s) = 0.$
Define the occupancy density as $d^\pi_h(s,a) := \mathbb{P}((s_h,a_h) = (s,a)\,|\,\pi)$, which is the probability that we visit state $s$ and take action $a$ at timestep $h$ if we follow the policy $\pi$. We denote $d^{\pi^*}_h$ by $d^*_h$.
Offline regime. In the offline regime, the learner has access to a fixed dataset $\mathcal{D} = \{(s^t_h, a^t_h, r^t_h, s^t_{h+1})\}_{t\in[K],\,h\in[H]}$ generated a priori by some unknown behaviour policy $\mu = \{\mu_h\}_{h\in[H]}$. Here, $K$ is the total number of trajectories, and $a^t_h\sim\mu_h(\cdot|s^t_h)$, $s^t_{h+1}\sim\mathbb{P}_h(\cdot|s^t_h, a^t_h)$ for any $(t,h)\in[K]\times[H]$. Note that we allow the trajectory at any time $t\in[K]$ to depend on the trajectories at previous times. The goal of offline RL is to learn a policy $\hat{\pi}$, based on (historical data) $\mathcal{D}$, such that $\hat{\pi}$ achieves small sub-optimality, which we define as $\mathrm{SubOpt}(\hat{\pi}) := \mathbb{E}_{s_1\sim d_1}[\mathrm{SubOpt}(\hat{\pi}; s_1)]$, where $\mathrm{SubOpt}(\hat{\pi}; s_1) := V^{\pi^*}_1(s_1) - V^{\hat{\pi}}_1(s_1)$.
Algorithm 1 Value Iteration with Perturbed Rewards (VIPeR)
1: Input: Offline data $\mathcal{D} = \{(s^k_h, a^k_h, r^k_h)\}_{k\in[K],\,h\in[H]}$, a parametric function family $\mathcal{F} = \{f(\cdot,\cdot;W) : W\in\mathcal{W}\}\subset\{\mathcal{X}\to\mathbb{R}\}$ (e.g., neural networks), perturbation variances $\{\sigma_h\}_{h\in[H]}$, number of bootstraps $M$, regularization parameter $\lambda$, step size $\eta$, number of gradient descent steps $J$, cutoff margin $\psi$, and split indices $\{\mathcal{I}_h\}_{h\in[H]}$ where $\mathcal{I}_h := [(H-h)K' + 1, \ldots, (H-h+1)K']$
2: Initialize $\tilde{V}_{H+1}(\cdot)\leftarrow 0$ and initialize $f(\cdot,\cdot;W)$ with initial parameter $W_0$
3: for h = H, . . . , 1 do 4: for i = 1, . . .
,M do 5: Sample {ξk,ih }k∈Ih ∼ N (0, σ2h) and ζih = {ζ j,i h }j∈[d] ∼ N (0, σ2hId) 6: Perturb the dataset D̃ih ← {skh, akh, rkh + Ṽh+1(skh+1) + ξ k,i h }k∈Ih ▷ Perturbation 7: Let W̃ ih ← GradientDescent(λ, η, J, D̃ih, ζih,W0) (Algorithm 2) ▷ Optimization 8: end for 9: Compute Q̃h(·, ·)← min{mini∈[M ]f(·, ·; W̃ ih), (H − h+ 1)(1 + ψ)}+ ▷ Pessimism 10: π̃h ← argmaxπh⟨Q̃h, πh⟩ and Ṽh ← ⟨Q̃h, π̃h⟩ ▷ Greedy 11: end for 12: Output: π̃ = {π̃h}h∈[H]. Notation. For simplicity, we write xth = (sth, ath) and x = (s, a). We write Õ(·) to hide logarithmic factors of the problem parameters (d,H,K,m, 1/δ) in the standard Big-Oh notation. We use Ω(·) as the standard Omega notation. We write u ≲ v if u = O(v) and write u ≳ v if v ≲ u. We write A ⪯ B iff B −A is a positive definite matrix. Id denotes the d× d identity matrix. 3.2 OVERPARAMETERIZED NEURAL NETWORKS In this paper, we consider neural function approximation setting where the state-action value function is approximated by a two-layer neural network. For simplicity, we denoteX := S×A and view it as a subset of Rd. Without loss of generality, we assume X ⊂ Sd−1 := {x ∈ Rd : ∥x∥2 = 1}. We consider a standard two-layer neural network: f(x;W, b) = 1√ m ∑m i=1 biσ(w T i x), where m is an even number, σ(·) = max{·, 0} is the ReLU activation function (Arora et al., 2018), and W = (wT1 , . . . , w T m) T ∈ Rmd. During the training, we initialize (W, b) via the symmetric initialization scheme (Gao et al., 2019) as follows: For any i ≤ m2 , wi = wm2 +i ∼ N (0, Id/d), and bm 2 +i = −bi ∼ Unif({−1, 1}).1 During the training, we optimize over W while the bi are kept fixed, thus we write f(x;W, b) as f(x;W ). Denote g(x;W ) = ∇W f(x;W ) ∈ Rmd, and let W0 be the initial parameters of W . We assume that the neural network is overparameterized, i.e, the width m is sufficiently larger than the number of samples K. Overparameterization has been shown to be effective in studying the convergence and the interpolation behaviour of neural networks (Arora et al., 2019; Allen-Zhu et al., 2019; Hanin & Nica, 2020; Cao & Gu, 2019; Belkin, 2021). Under such an overparameterization regime, the dynamics of the training of the neural network can be captured using the framework of the neural tangent kernel (NTK) (Jacot et al., 2018). 4 ALGORITHM In this section, we present the proposed algorithm called Value Iteration with Perturbed Rewards, or VIPeR; see Algorithm 1 for the pseudocode. The key idea underlying VIPeR is to train a parametric model (e.g., a neural network) on a perturbed-reward dataset several times and act pessimistically by picking the minimum over an ensemble of estimated state-action value functions. In particular, at each timestep h ∈ [H], we drawM independent samples of zero-mean Gaussian noise with variance σh. We use these samples to perturb the sum of the observed rewards, rkh, and the estimated value function with a one-step lookahead, i.e., Ṽh+1(skh+1) (see Line 6 of Algorithm 1). The weights W̃ i h are then updated by minimizing the perturbed regularized squared loss on {D̃ih}i∈[M ] using gradient descent (Line 7). We pick the value function pessimistically by selecting the minimum over the finite ensemble. The chosen value function is truncated at (H − h+ 1)(1 + ψ) (see Line 9), where 1This symmetric initialization scheme makes f(x;W0) = 0 and ⟨g(x;W0),W0⟩ = 0 for any x. ψ ≥ 0 is a small cutoff margin (more on this when we discuss the theoretical analysis). 
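The footnote's claim that $f(x;W_0) = 0$ under the symmetric initialization of Section 3.2 can be checked directly. Below is a minimal NumPy sketch of the two-layer ReLU network and the paired initialization; the width, input dimension, and variable names are illustrative choices of ours, not the released implementation.

```python
import numpy as np

def symmetric_init(m, d, rng):
    """Symmetric initialization: paired hidden weights are identical and the paired
    output signs are opposite, so the network output is exactly zero at W0."""
    assert m % 2 == 0
    half = m // 2
    W_half = rng.normal(0.0, np.sqrt(1.0 / d), size=(half, d))  # w_i ~ N(0, I_d / d)
    W = np.vstack([W_half, W_half])                              # w_{m/2+i} = w_i
    b_half = rng.choice([-1.0, 1.0], size=half)                  # b_i ~ Unif({-1, 1})
    b = np.concatenate([b_half, -b_half])                        # b_{m/2+i} = -b_i
    return W, b

def f(x, W, b):
    """Two-layer ReLU network f(x; W, b) = (1/sqrt(m)) * sum_i b_i * relu(w_i^T x)."""
    m = W.shape[0]
    return (b * np.maximum(W @ x, 0.0)).sum() / np.sqrt(m)

rng = np.random.default_rng(0)
W0, b = symmetric_init(m=64, d=16, rng=rng)
x = rng.normal(size=16)
x /= np.linalg.norm(x)          # inputs lie on the unit sphere, as assumed in Section 3.2
print(f(x, W0, b))              # 0.0 up to floating-point error
```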
The returned policy is greedy with respect to the truncated pessimistic value function (see Line 10). Algorithm 2 GradientDescent(λ, η, J, D̃ih, ζih,W0) 1: Input: Regularization parameter λ, step size η, number of gradient descent steps J , perturbed dataset D̃ih = {skh, akh, rkh + Ṽh+1(s k h+1) + ξ t,i h }k∈Ih , regularization per- turber ζih, initial parameter W0 2: L(W ) := 12 ∑ k∈Ih(f(s k h, a k h;W ) − (rkh + Ṽh+1(s k h+1) + ξ k,i h )) 2 + λ2 ∥W + ζ i h −W0∥22 3: for j = 0, . . . , J − 1 do 4: Wj+1 ←Wj − η∇L(Wj) 5: end for 6: Output: WJ . It is important to note that we split the trajectory indices [K] evenly into H disjoint buckets [K] = ∪h∈[H]Ih, where Ih = [(H − h)K ′ + 1, . . . , (H − h + 1)K ′] for K ′ := ⌊K/H⌋2, as illustrated in Figure 1. The estimated Q̃h is thus obtained only from the offline data with (trajectory) indices from Ih along with Ṽh+1. This novel design removes the data dependence structure in offline RL with function approximation (Nguyen-Tang et al., 2022b) and avoids a factor involving the log of the covering number in the bound on the sub-optimality of Algorithm 1, as we show in Section D.1. To deal with the non-linearity of the underlying MDP, we use a two-layer fully connected neural network as the parametric function family F in Algorithm 1. In other words, we approximate the state-action values: f(x;W ) = 1√ m ∑m i=1 biσ(w T i x), as described in Section 3.2. We use two-layer neural networks to simplify the computational analysis. We utilize gradient descent to train the state-action value functions {f(·, ·; W̃ ih)}i∈[M ], on perturbed rewards. The use of gradient descent is for the convenience of computational analysis, and our results can be extended to stochastic gradient descent by leveraging recent advances in the theory of deep learning (Allen-Zhu et al., 2019; Cao & Gu, 2019), albeit with a more involved analysis. Existing offline RL algorithms utilize estimates of statistical confidence regions to achieve pessimism in the offline setting. Explicitly constructing these confidence bounds is computationally expensive in complex problems where a neural network is used for function approximation. For example, the lower-confidence-bound-based algorithms in neural offline contextual bandits (NguyenTang et al., 2022a) and RL (Xu & Liang, 2022) require computing the inverse of a large covariance matrix with the size scaling with the number of network parameters. This is computationally prohibitive in most practical settings. Algorithm 1 (VIPeR) avoids such expensive computations while still obtaining provable pessimism and guaranteeing a rate of Õ( 1√ K ) on the sub-optimality, as we show in the next section. 5 SUB-OPTIMALITY ANALYSIS Next, we provide a theoretical guarantee on the sub-optimality of VIPeR for the function approximation class, F , represented by (overparameterized) neural networks. Our analysis builds on the recent advances in generalization and optimization of deep neural networks (Arora et al., 2019; Allen-Zhu et al., 2019; Hanin & Nica, 2020; Cao & Gu, 2019; Belkin, 2021) that leverage the observation that the dynamics of the neural parameters learned by (stochastic) gradient descent can be captured by the corresponding neural tangent kernel (NTK) space (Jacot et al., 2018) when the network is overparameterized. Next, we recall some definitions and state our key assumptions, formally. Definition 1 (NTK (Jacot et al., 2018)). 
The NTK kernel $K_{\mathrm{ntk}} : \mathcal{X}\times\mathcal{X}\to\mathbb{R}$ is defined as $K_{\mathrm{ntk}}(x, x') = \mathbb{E}_{w\sim\mathcal{N}(0, I_d/d)}\langle x\,\sigma'(w^T x),\, x'\,\sigma'(w^T x')\rangle$, where $\sigma'(u) = \mathbb{1}\{u\ge 0\}$.
²Without loss of generality, we assume $K/H\in\mathbb{N}$.
Let $\mathcal{H}_{\mathrm{ntk}}$ denote the reproducing kernel Hilbert space (RKHS) induced by the NTK, $K_{\mathrm{ntk}}$. Since $K_{\mathrm{ntk}}$ is a universal kernel (Ji et al., 2020), we have that $\mathcal{H}_{\mathrm{ntk}}$ is dense in the space of continuous functions on (a compact set) $\mathcal{X} = \mathcal{S}\times\mathcal{A}$ (Rahimi & Recht, 2008).
Definition 2 (Effective dimension). For any $h\in[H]$, the effective dimension of the NTK matrix on data $\{x^k_h\}_{k\in\mathcal{I}_h}$ is defined as
$\tilde{d}_h := \frac{\operatorname{logdet}(I_{K'} + \mathcal{K}_h/\lambda)}{\log(1 + K'/\lambda)},$
where $\mathcal{K}_h := [K_{\mathrm{ntk}}(x^i_h, x^j_h)]_{i,j\in\mathcal{I}_h}$ is the Gram matrix of $K_{\mathrm{ntk}}$ on the data $\{x^k_h\}_{k\in\mathcal{I}_h}$. We further define $\tilde{d} := \max_{h\in[H]}\tilde{d}_h$.
Remark 1. Intuitively, the effective dimension $\tilde{d}_h$ measures the number of principal dimensions over which the projection of the data $\{x^k_h\}_{k\in\mathcal{I}_h}$ in the RKHS $\mathcal{H}_{\mathrm{ntk}}$ is spread. It was first introduced by Valko et al. (2013) for kernelized contextual bandits and was subsequently adopted by Yang & Wang (2020) and Zhou et al. (2020) for kernelized RL and neural contextual bandits, respectively. The effective dimension is data-dependent and can be bounded by $\tilde{d}\lesssim K'^{(d+1)/(2d)}$ in the worst case (see Section B for more details).³
Definition 3 (RKHS of the infinite-width NTK). Define $\mathcal{Q}^* := \{f(x) = \int_{\mathbb{R}^d} c(w)^T x\,\sigma'(w^T x)\,dw : \sup_w \frac{\|c(w)\|_2}{p_0(w)} < B\}$, where $c : \mathbb{R}^d\to\mathbb{R}^d$ is any function, $p_0$ is the probability density function of $\mathcal{N}(0, I_d/d)$, and $B$ is some positive constant.
We make the following assumption about the regularity of the underlying MDP under function approximation.
Assumption 5.1 (Completeness). For any $V : \mathcal{S}\to[0, H+1]$ and any $h\in[H]$, $\mathbb{B}_h V\in\mathcal{Q}^*$.⁴
Assumption 5.1 ensures that the Bellman operator $\mathbb{B}_h$ can be captured by an infinite-width neural network. This assumption is mild as $\mathcal{Q}^*$ is a dense subset of $\mathcal{H}_{\mathrm{ntk}}$ (Gao et al., 2019, Lemma C.1) when $B = \infty$, thus $\mathcal{Q}^*$ is an expressive function class when $B$ is sufficiently large. Moreover, similar assumptions have been used in many prior works on provably efficient RL with function approximation (Cai et al., 2019; Wang et al., 2020; Yang et al., 2020; Nguyen-Tang et al., 2022b).
Next, we present a bound on the suboptimality of the policy $\tilde{\pi}$ returned by Algorithm 1. Recall that we use the initialization scheme described in Section 3.2. Fix any $\delta\in(0,1)$.
Theorem 1. Let $\sigma_h = \sigma := 1 + \lambda^{1/2}B + (H+1)\left[\tilde{d}\log(1 + K'/\lambda) + 2 + 2\log(3H/\delta)\right]^{1/2}$. Let $m = \mathrm{poly}(K', H, d, B, \tilde{d}, \lambda, \delta)$ be some high-order polynomial of the problem parameters, $\lambda = 1 + \frac{H}{K}$, $\eta\lesssim(\lambda + K')^{-1}$, $J\gtrsim K'\log(K'(H\sqrt{\tilde{d}} + B))$, $\psi = 1$, and $M = \log\frac{HSA}{\delta}\big/\log\frac{1}{1-\Phi(-1)}$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. Then, under Assumption 5.1, with probability at least $1 - MHm^{-2} - 2\delta$, for any $s_1\in\mathcal{S}$, we have that
$\mathrm{SubOpt}(\tilde{\pi}; s_1) \le \sigma\left(1 + \sqrt{2\log(MSAH/\delta)}\right)\cdot\mathbb{E}_{\pi^*}\left[\sum_{h=1}^H \|g(s_h, a_h; W_0)\|_{\Lambda_h^{-1}}\right] + \tilde{O}\!\left(\frac{1}{K'}\right),$
where $\Lambda_h := \lambda I_{md} + \sum_{k\in\mathcal{I}_h} g(s^k_h, a^k_h; W_0)\,g(s^k_h, a^k_h; W_0)^T\in\mathbb{R}^{md\times md}$.
Remark 2. Theorem 1 shows that the randomized design in our proposed algorithm yields a provable uncertainty quantifier even though we do not explicitly maintain any confidence regions in the algorithm. The implicit pessimism via perturbed rewards introduces an extra factor of $1 + \sqrt{2\log(MSAH/\delta)}$ into the confidence parameter $\beta$.
We build upon Theorem 1 to obtain an explicit bound using the following data coverage assumption.
Assumption 5.2 (Optimal-Policy Concentrability). $\exists\,\kappa < \infty$, $\sup_{(h, s_h, a_h)} \frac{d^*_h(s_h, a_h)}{d^\mu_h(s_h, a_h)}\le\kappa$.
³Note that this is the worst-case bound, and the effective dimension can be significantly smaller in practice.
⁴We consider $V : \mathcal{S}\to[0, H+1]$ instead of $V : \mathcal{S}\to[0, H]$ due to the cutoff margin $\psi$ in Algorithm 1.
Assumption 5.2 requires any positive-probability trajectory induced by the optimal policy to be covered by the behavior policy. This data coverage assumption is significantly milder than the uniform coverage assumptions in many FQI-based offline RL algorithms (Munos & Szepesvári, 2008; Chen & Jiang, 2019; Nguyen-Tang et al., 2022b) and is common in pessimism-based algorithms (Rashidinejad et al., 2021; Nguyen-Tang et al., 2022a; Chen & Jiang, 2022; Zhan et al., 2022).
Theorem 2. For the same parameter settings and the same assumption as in Theorem 1, we have that with probability at least $1 - MHm^{-2} - 5\delta$,
$\mathrm{SubOpt}(\tilde{\pi}) \le \frac{2\tilde{\sigma}\kappa H}{\sqrt{K'}}\left[\sqrt{2\tilde{d}\log(1 + K'/\lambda)} + 1 + \sqrt{\frac{\log(H/\delta)}{\lambda}}\right] + \frac{16H}{3K'}\log\frac{\log_2(K'H)}{\delta} + \tilde{O}\!\left(\frac{1}{K'}\right),$
where $\tilde{\sigma} := \sigma(1 + \sqrt{2\log(SAH/\delta)})$.
Remark 3. Theorem 2 shows that with appropriate parameter choice, VIPeR achieves a sub-optimality of $\tilde{O}\!\left(\frac{\kappa H^{3/2}\sqrt{\tilde{d}}\cdot\max\{B, H\sqrt{\tilde{d}}\}}{\sqrt{K}}\right)$. Compared to Yang et al. (2020), we improve by a factor of $K^{\frac{2}{d\gamma-1}}$ for some $\gamma\in(0,1)$ at the expense of $\sqrt{H}$. When realized to a linear MDP in $\mathbb{R}^{d_{\mathrm{lin}}}$, $\tilde{d} = d_{\mathrm{lin}}$ and our bound reduces to $\tilde{O}\!\left(\frac{\kappa H^{5/2}d_{\mathrm{lin}}}{\sqrt{K}}\right)$, which improves the bound $\tilde{O}(d^{3/2}_{\mathrm{lin}}H^2/\sqrt{K})$ of PEVI (Jin et al., 2021, Corollary 4.6) by a factor of $\sqrt{d_{\mathrm{lin}}}$. We provide the result summary and comparison in Table 1 and give a more detailed discussion in Subsection B.1.
6 EXPERIMENTS
In this section, we empirically evaluate the proposed algorithm VIPeR against several state-of-the-art baselines, including (a) PEVI (Jin et al., 2021), which explicitly constructs a lower confidence bound (LCB) for pessimism in a linear model (thus, we rename this algorithm LinLCB for convenience in our experiments); (b) NeuraLCB (Nguyen-Tang et al., 2022a), which explicitly constructs an LCB using neural network gradients; (c) NeuraLCB (Diag), which is NeuraLCB with a diagonal approximation for estimating the confidence set as suggested in NeuraLCB (Nguyen-Tang et al., 2022a); (d) Lin-VIPeR, which is VIPeR realized with linear function approximation instead of neural network function approximation; (e) NeuralGreedy (LinGreedy, respectively), which uses neural networks (linear models, respectively) to fit the offline data and acts greedily with respect to the estimated state-action value functions without any pessimism. Note that when the parametric class, $\mathcal{F}$, in Algorithm 1 is that of neural networks, we refer to VIPeR as Neural-VIPeR. We do not utilize data splitting in the experiments. We provide further algorithmic details of the baselines in Section H. We evaluate all algorithms in two problem settings: (1) the underlying MDP is a linear MDP whose reward functions and transition kernels are linear in some known feature map (Jin et al., 2020), and (2) the underlying MDP is non-linear with horizon length $H = 1$ (i.e., non-linear contextual bandits) (Zhou et al., 2020), where the reward function is either synthetic or constructed from the MNIST dataset (LeCun et al., 1998). We also evaluate (a variant of) our algorithm and show its strong performance advantage on the D4RL benchmark (Fu et al., 2020) in Section A.3.
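For concreteness, the following is a simplified sketch of the procedure as it could be implemented for the contextual-bandit ($H = 1$) experiments below: train $M$ networks on independently perturbed rewards with a perturbed regularizer, then act greedily with respect to the elementwise minimum of the ensemble. This is our own minimal rendering, not the released code; it omits data splitting and value truncation (consistent with the experiments, which do not use data splitting), and all function and hyperparameter names are illustrative.

```python
import torch
import torch.nn as nn

def perturbed_fit(net, X, y, sigma, lam, lr=1e-3, steps=200):
    """Fit one ensemble member on rewards perturbed with N(0, sigma^2) noise and a
    perturbed L2 regularizer toward the initialization (cf. Lines 5-7 of Algorithm 1).
    `net` maps (N, d) inputs to (N, 1) value estimates."""
    y_tilde = y + sigma * torch.randn_like(y)                 # reward perturbation xi
    w0 = [p.detach().clone() for p in net.parameters()]       # initialization W0
    zeta = [sigma * torch.randn_like(p) for p in w0]          # regularization perturber zeta
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.5 * ((net(X).squeeze(-1) - y_tilde) ** 2).sum()
        for p, p0, z in zip(net.parameters(), w0, zeta):
            loss = loss + 0.5 * lam * ((p + z - p0) ** 2).sum()
        loss.backward()
        opt.step()
    return net

def viper_bandit(X, y, make_net, M=10, sigma=0.1, lam=0.01):
    """Simplified Neural-VIPeR for H = 1: train M perturbed fits and act greedily with
    respect to the minimum of the ensemble (the pessimism step)."""
    ensemble = [perturbed_fit(make_net(), X, y, sigma, lam) for _ in range(M)]
    def act(x_per_action):                                    # (num_actions, d) candidate inputs
        with torch.no_grad():
            q = torch.stack([net(x_per_action).squeeze(-1) for net in ensemble]).min(dim=0).values
        return int(q.argmax())
    return act

# Toy usage with an illustrative two-layer ReLU network:
make_net = lambda: nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
X, y = torch.randn(256, 16), torch.rand(256)
act = viper_bandit(X, y, make_net, M=10)
print(act(torch.randn(10, 16)))
```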
We implemented all algorithms in PyTorch (Paszke et al., 2019) on a server with an Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz, 755G RAM, and one NVIDIA Tesla V100 Volta 32GB GPU accelerator. Our code is available at https://github.com/thanhnguyentang/neural-offline-rl.
6.1 LINEAR MDPS
We first test the effectiveness of the pessimism implicit in VIPeR (Algorithm 1). To that end, we construct a hard instance of linear MDPs (Yin et al., 2022; Min et al., 2021); due to page limitation, we defer the details of our construction to Section A.1. We test for different values of $H\in\{20, 30, 50, 80\}$ and report the sub-optimality of LinLCB, Lin-VIPeR, and LinGreedy, averaged over 30 runs, in Figure 2. We find that LinGreedy, which is uncertainty-agnostic, fails to learn from offline data and has poor performance in terms of sub-optimality when compared to the pessimism-based algorithms LinLCB and Lin-VIPeR. Further, LinLCB outperforms Lin-VIPeR when $K$ is smaller than 400, but the performance of the two algorithms matches for larger sample sizes. Unlike LinLCB, Lin-VIPeR does not construct any confidence regions or require computing and inverting large (covariance) matrices. The Y-axis is in log scale; thus, Lin-VIPeR already has small sub-optimality within the first $K\approx 400$ samples. These results show the effectiveness of the randomized design for pessimism implicit in Algorithm 1.
6.2 NEURAL CONTEXTUAL BANDITS
Next, we compare the performance and computational efficiency of various algorithms against VIPeR when neural networks are employed. For simplicity, we consider contextual bandits, a special case of MDPs with horizon $H = 1$. Following Zhou et al. (2020); Nguyen-Tang et al. (2022a), we use the bandit problems specified by the following reward functions: (a) $r(s,a) = \cos(3 s^T\theta_a)$; (b) $r(s,a) = \exp(-10 (s^T\theta_a)^2)$, where $s$ and $\theta_a$ are generated uniformly at random from the unit sphere $\mathbb{S}^{d-1}$ with $d = 16$ and $A = 10$; (c) MNIST, where $r(s,a) = 1$ if $a$ is the true label of the input image $s$ and $r(s,a) = 0$ otherwise. To predict the value of different actions from the same state $s$ using neural networks, we transform a state $s\in\mathbb{R}^d$ into $dA$-dimensional vectors $s^{(1)} = (s, 0, \ldots, 0), s^{(2)} = (0, s, 0, \ldots, 0), \ldots, s^{(A)} = (0, \ldots, 0, s)$ and train the network to map $s^{(a)}$ to $r(s,a)$ given a pair of data $(s,a)$. For Neural-VIPeR, NeuralGreedy, NeuraLCB, and NeuraLCB (Diag), we use the same neural network architecture with two hidden layers of width $m = 64$ and train the network with the Adam optimizer (Kingma & Ba, 2015). Due to page limitations, we defer other experimental details and hyperparameter settings to Section A.2. We report the sub-optimality averaged over 5 runs in Figure 3. We see that algorithms that use a linear model, i.e., LinLCB and Lin-VIPeR, significantly underperform neural-based algorithms, i.e., NeuralGreedy, NeuraLCB, NeuraLCB (Diag), and Neural-VIPeR, attesting to the crucial role neural representations play in RL for non-linear problems. It is also interesting to observe from the experimental results that NeuraLCB does not always outperform its diagonal approximation, NeuraLCB (Diag) (e.g., in Figure 3(b)), putting a question mark on the empirical effectiveness of NTK-based uncertainty for offline RL. Finally, Neural-VIPeR outperforms all algorithms in the tested benchmarks, suggesting the effectiveness of our randomized design with neural function approximation. Figure 4 shows the average runtime for action selection of the neural-based algorithms NeuraLCB, NeuraLCB (Diag), and Neural-VIPeR.
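Before turning to the runtime results, the per-action input encoding described above can be sketched as follows; the function name and shapes are our own illustrative choices.

```python
import numpy as np

def encode_state_actions(s, num_actions):
    """Map a state s in R^d to the stacked inputs s^(1), ..., s^(A) in R^{d*A}, where
    s^(a) places s in the a-th block of coordinates and zeros elsewhere."""
    d = s.shape[0]
    X = np.zeros((num_actions, d * num_actions))
    for a in range(num_actions):
        X[a, a * d:(a + 1) * d] = s
    return X

s = np.arange(4.0)                       # toy state with d = 4
print(encode_state_actions(s, num_actions=3))
```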
We observe that algorithms that use explicit confidence regions, i.e., NeuraLCB and NeuraLCB (Diag), take significant time selecting an action when either the number of offline samples K or the network width m increases. This is perhaps not surprising because NeuraLCB and NeuraLCB (Diag) need to compute the inverse of a large covariance matrix to sample an action and maintain the confidence region for each action per state. The diagonal approximation significantly reduces the runtime of NeuraLCB, but the runtime still scales with the number of samples and the network width. In comparison, the runtime for action selection for Neural-VIPeR is constant. Since NeuraLCB, NeuraLCB (Diag), and Neural-VIPeR use the same neural network architecture, the runtime spent training one model is similar. The only difference is that Neural-VIPeR trains M models while NeuraLCB and NeuraLCB (Diag) train a single model. However, as the perturbed data in Algorithm 1 are independent, trainingM models in Neural-VIPeR is embarrassingly parallelizable. Finally, in Figure 5, we study the effect of the ensemble size on the performance of Neural-VIPeR. We use different values of M ∈ {1, 2, 5, 10, 20, 30, 50, 100, 200} for sample size K = 1000. We find that the sub-optimality of Neural-VIPeR decreases graciously as M increases. Indeed, the grid search from the previous experiment in Figure 3 also yields M = 10 and M = 20 from the search space M ∈ {1, 10, 20} as the best result. This suggests that the ensemble size can also play an important role as a hyperparameter that can determine the amount of pessimism needed in a practical setting. 7 CONCLUSION We propose a novel algorithmic approach for offline RL that involves randomly perturbing value functions and pessimism. Our algorithm eliminates the computational overhead of explicitly maintaining a valid confidence region and computing the inverse of a large covariance matrix for pessimism. We bound the suboptimality of the proposed algorithm as Õ ( κH5/2d̃/ √ K ) . We support our theoretical claims of computational efficiency and the effectiveness of our algorithm with extensive experiments. ACKNOWLEDGEMENTS This research was supported, in part, by DARPA GARD award HR00112020004, NSF CAREER award IIS-1943251, an award from the Institute of Assured Autonomy, and Spring 2022 workshop on “Learning and Games” at the Simons Institute for the Theory of Computing. A EXPERIMENT DETAILS A.1 LINEAR MDPS In this subsection, we provide further details to the experiment setup used in Subsection 6.1. We describe in detail a variant of the hard instance of linear MDPs (Yin et al., 2022) used in our experiment. The linear MDP has S = {0, 1},A = {0, 1, · · · , 99}, and the feature dimension d = 10. Each action a ∈ [99] = {1, . . . , 99} is represented by its binary encoding vector ua ∈ R8 with entry being either −1 or 1. The feature mapping ϕ(s, a) is given by ϕ(s, a) = [uTa , δ(s, a), 1− δ(s, a)]T ∈ R10, where δ(s, a) = 1 if (s, a) = (0, 0) and δ(s, a) = 0 otherwise. The true measure νh(s) is given by νh(s) = [0, · · · , 0, (1 − s) ⊕ αh, s ⊕ αh] where {αh}h∈[H] ∈ {0, 1}H are generated uniformly at random and ⊕ is the XOR operator. We define θh = [0, · · · , 0, r, 1 − r]T ∈ R10 where r = 0.99. Recall that the transition follows Ph(s′|s, a) = ⟨ϕ(s, a), νh(s′)⟩ and the mean reward rh(s, a) = ⟨ϕ(s, a), θh⟩. We generated a priori K ∈ {1, . . . 
, 1000} trajectories using the behavior policy $\mu$, where for any $h\in[H]$ we set $\mu_h(0|0) = p$, $\mu_h(1|0) = 1-p$, $\mu_h(a|0) = 0$ for all $a > 1$; $\mu_h(0|1) = p$, $\mu_h(a|1) = (1-p)/99$ for all $a > 0$, where we set $p = 0.6$. We run over $K\in\{1, \ldots, 1000\}$ and $H\in\{20, 30, 50, 80\}$. We set $\lambda = 0.01$ for all algorithms. For Lin-VIPeR, we grid-searched $\sigma_h = \sigma\in\{0.0, 0.1, 0.5, 1.0, 2.0\}$ and $M\in\{1, 2, 10, 20\}$. For LinLCB, we grid-searched its uncertainty multiplier $\beta\in\{0.1, 0.5, 1, 2\}$. The sub-optimality metric is used to compare algorithms. For each $H\in\{20, 30, 50, 80\}$, each algorithm was executed 30 times and the averaged results (with standard deviations) are reported in Figure 2.
A.2 NEURAL CONTEXTUAL BANDITS
In this subsection, we provide in detail the experimental and hyperparameter setup for the experiment in Subsection 6.2. For Neural-VIPeR, NeuralGreedy, NeuraLCB, and NeuraLCB (Diag), we use the same neural network architecture with two hidden layers of width $m = 64$, and train the network with the Adam optimizer (Kingma & Ba, 2015), with the learning rate grid-searched over $\{0.0001, 0.001, 0.01\}$ and a batch size of 64. For NeuraLCB, NeuraLCB (Diag), and LinLCB, we grid-searched $\beta$ over $\{0.001, 0.01, 0.1, 1, 5, 10\}$. For Neural-VIPeR and Lin-VIPeR, we grid-searched $\sigma_h = \sigma$ over $\{0.001, 0.01, 0.1, 1, 5, 10\}$ and $M$ over $\{1, 10, 20\}$. We did not run NeuraLCB on MNIST as the inverse of a full covariance matrix in this case is extremely expensive. We fixed the regularization parameter $\lambda = 0.01$ for all algorithms. Offline data is generated by the $(1-\epsilon)$-optimal policy, which takes non-optimal actions with probability $\epsilon$ and optimal actions with probability $1-\epsilon$. We set $\epsilon = 0.5$ in our experiments. To estimate the expected sub-optimality, we randomly draw 1,000 novel samples (i.e., not used in training) to compute the average sub-optimality and keep these same samples for all algorithms.
A.3 EXPERIMENT ON THE D4RL BENCHMARK
In this subsection, we evaluate the effectiveness of the reward-perturbing design of VIPeR in the Gym domain of the D4RL benchmark (Fu et al., 2020). The Gym domain has three environments (HalfCheetah, Hopper, and Walker2d) with five datasets (random, medium, medium-replay, medium-expert, and expert), making up 15 different settings.
Design. To adapt the design of VIPeR to continuous control, we use the actor-critic framework. Specifically, we have $M$ critics $\{Q_{\theta_i}\}_{i\in[M]}$ and one actor $\pi_\phi$, where $\{\theta_i\}_{i\in[M]}$ and $\phi$ are the learnable parameters for the critics and actor, respectively. Note that in the continuous domain, we consider a discounted MDP with discount factor $\gamma$, instead of the finite-horizon episodic MDP considered in the main paper. In the presence of the actor $\pi_\phi$, there are two modifications to Algorithm 1. The first modification is that when training the critics $\{Q_{\theta_i}\}_{i\in[M]}$, we augment the training loss in Algorithm 2 with a new penalization term. Specifically, the critic loss for $Q_{\theta_i}$ on a training sample $\tau := (s, a, r, s')$ (sampled from the offline data $\mathcal{D}$) is
$L(\theta_i; \tau) = \left(Q_{\theta_i}(s,a) - (r + \gamma Q_{\bar{\theta}_i}(s') + \xi)\right)^2 + \beta\,\underbrace{\mathbb{E}_{a'\sim\pi_\phi(\cdot|s)}\left[(Q_{\theta_i}(s,a') - \bar{Q}(s,a'))^2\right]}_{\text{penalization term } R(\theta_i;\,s,\,\phi)}, \qquad (1)$
where $\bar{\theta}_i$ has the same value as the current $\theta_i$ but is kept fixed, $\bar{Q} = \frac{1}{M}\sum_{i=1}^M Q_{\theta_i}$, $\xi\sim\mathcal{N}(0,\sigma^2)$ is Gaussian noise, and $\beta$ is a penalization parameter (note that $\beta$ here is entirely different from the $\beta$ in Theorem 1). The penalization term $R(\theta_i; s, \phi)$ discourages overestimation in the value function estimate $Q_{\theta_i}$ for out-of-distribution (OOD) actions $a'\sim\pi_\phi(\cdot|s)$.
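A minimal PyTorch-style sketch of the per-critic loss in Eq. (1) is given below. This is our own rendering under stated assumptions: the critic and actor signatures (`q_net(s, a)`, `actor(s)`), the use of the actor's next-state action inside the target (Eq. (1) leaves the action at $s'$ implicit), and the representation of the frozen copy $\bar{\theta}_i$ by a separate `q_target_net` are all illustrative choices, not the released implementation.

```python
import torch

def critic_loss(q_net, q_target_net, q_ensemble, actor, batch, beta, sigma, gamma=0.99):
    """Sketch of Eq. (1): a TD term on noise-perturbed targets plus a penalization that
    pulls Q at actor-sampled (OOD) actions toward the ensemble mean Q-bar."""
    s, a, r, s_next = batch                                   # tensors; shapes are illustrative
    with torch.no_grad():
        xi = sigma * torch.randn_like(r)                      # reward perturbation xi ~ N(0, sigma^2)
        a_next = actor(s_next)                                # assumed next-state action choice
        target = r + gamma * q_target_net(s_next, a_next) + xi
    td_term = (q_net(s, a) - target).pow(2)

    a_ood = actor(s)                                          # OOD actions a' ~ pi_phi(.|s)
    with torch.no_grad():
        q_bar = torch.stack([q(s, a_ood) for q in q_ensemble]).mean(dim=0)
    penalty = (q_net(s, a_ood) - q_bar).pow(2)                # R(theta_i; s, phi)
    return (td_term + beta * penalty).mean()                  # averaged over the mini-batch
```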
Our design of R(θi; s, ϕ) is initially inspired by the OOD penalization in Bai et al. (2022) that creates a pessimistic pseudo target for the values at OOD actions. Note that we do not need any penalization for OOD actions in our experiment for contextual bandits in Section 6.2. This is because in the contextual bandit setting in Section 6.2 the action space is finite and not large, thus the offline data often sufficiently cover all good actions. In the continuous domain such as the Gym domain of D4RL, however, it is almost certain that there are actions that are not covered by the offline data since the action space is continuous. We also note that the inclusion of the OOD action penalization term R(θi; s, ϕ) in this experiment does not contradict our guarantee in Theorem 1 since in the theorem we consider finite action space while in this experiment we consider continuous action space. We argue that the inclusion of some regularization for OOD actions (e.g., R(θi; s, ϕ)) is necessary for the continuous domain. 6 The second modification to Algorithm 1 for the continuous domain is the actor training, which is the implementation of policy extraction in line 10 of Algorithm 1. Specifically, to train the actor πϕ given the ensemble of critics {Qiθ}i∈[M ], we use soft actor update in Haarnoja et al. (2018) via max ϕ { Es∼D,a′∼πϕ(·|s) [ min i∈[M ] Qθi(s, a ′)− log πϕ(a′|s) ]} , (2) which is trained using gradient ascent in practice. Note that in the discrete action domain, we do not need such actor training as we can efficiently extract the greedy policy with respect to the estimated action-value functions when the action space is finite. Also note that we do not use data splitting and value truncation as in the original design of Algorithm 1. Hyperparameters. For the hyper-parameters of our training, we set M = 10 and the noise variance σ = 0.01. For β, we decrease it from 0.5 to 0.2 by linear decay for the first 50K steps and exponential decay for the remaining steps. For the other hyperparameters of actor-critic training, we fix them the same as in Bai et al. (2022). Specifically, the Q-network is the fully connected neural network with three hidden layers all of which has 256 neurons. The learning rate for the actor and the critic are 10−4 and 3× 10−4, respectively. The optimizer is Adam. Results. We compare VIPeR with several state-of-the-art algorithms, including (i) BEAR (Kumar et al., 2019) that use MMD distance to constraint policy to the offline data, (ii) UWAC (Wu et al., 2021) that improves BEAR using dropout uncertainty, (iii) CQL (Kumar et al., 2020) that minimizes Q-values of OOD actions, (iv) MOPO (Yu et al., 2020) that uses model-based uncertainty via ensemble dynamics, (v) TD3-BC (Fujimoto & Gu, 2021) that uses adaptive behavior cloning, and (vi) PBRL (Bai et al., 2022) that use uncertainty quantification via disagreement of bootstrapped Q-functions. We follow the evaluation protocol in Bai et al. (2022). We run our algorithm for five seeds and report the average final evaluation scores with standard deviation. We report the scores of our method and the baselines in Table 2. We can see that our method has a strong advantage of good performance (highest scores) in 11 out of 15 settings, and has good stability (small std) in all settings. Overall, we also have the strongest average scores aggregated over all settings. B EXTENDED DISCUSSION Here we provide extended discussion of our result. 
B.1 COMPARISON WITH OTHER WORKS AND DISCUSSION We provide further discussion regarding comparison with other works in the literature. 6In our experiment, we also observe that without this penalization term, the method struggles to learn any good policy. However, using only the penalization term without the first term in Eq. (1), we observe that the method cannot learn either. Comparing to Jin et al. (2021). When the underlying MDP reduces into a linear MDP, if we use the linear model as the plug-in parametric model in Algorithm 1, our bound reduces into Õ ( κH5/2dlin√ K ) which improves the bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) by a factor of √ dlin and worsen by a factor of √ H due to the data splitting. Thus, our bound is more favorable in the linear MDPs with high-dimensional features. Moreover, our bound is guaranteed in more practical scenarios where the offline data can have been adaptively generated and is not required to uniformly cover the state-action space. The explicit bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) is obtained under the assumption that the offline data have uniform coverage and are generated independently on the episode basis. Comparing to Yang et al. (2020). Though Yang et al. (2020) work in the online regime, it shares some part of the literature with our work in function approximation for RL. Besides different learning regimes (offline versus online), we offer three key distinctions which can potentially be used in the online regime as well: (i) perturbed rewards, (ii) optimization, and (iii) data split. Regarding (i), our perturbed reward design can be applied to online RL with function approximation to obtain a provably efficient online RL that is computationally efficient and thus remove the need of maintaining explicit confidence regions and performing the inverse of a large covariance matrix. Regarding (ii), we incorporate the optimization analysis into our algorithm which makes our algorithm and analysis more practical. We also note that unlike (Yang et al., 2020), we do not make any assumption on the eigenvalue decay rate of the empirical NTK kernel as the empirical NTK kernel is data-dependent. Regarding (iii), our data split technique completely removes the factor√ logN∞(H, 1/K,B) in the bound at the expense of increasing the bound by a factor of √ H . In complex models, such log covering number can be excessively larger than the horizon H , making the algorithm too optimistic in the online regime (optimistic in the offline regime, respectively). For example, the target function class is RKHS with a γ-polynomial decay, the log covering number scales as (Yang et al., 2020, Lemma D1),√ logN∞(H, 1/K,B) ≲ K 2 αγ−1 , for some α ∈ (0, 1). In the case of two-layer ReLU NTK, γ = d (Bietti & Mairal, 2019), thus√ logN∞(H, 1/K,B) ≲ K 2 αd−1 which is much larger than √ H when the size of dataset is large. Note that our data-splitting technique is general that can be used in the online regime as well. Comparing to Xu & Liang (2022). Xu & Liang (2022) consider a different setting where pertimestep rewards are not available and only the total reward of the whole trajectory is given. Used with neural function approximation, they obtain Õ(DeffH2/ √ K) where Deff is their effective dimension. Note that Xu & Liang (2022) do not use data splitting and still achieve the same order of Deff as our result with data splitting. 
It at first might appear that our bound is inferior to their bound as we pay the cost of √ H due to data splitting. However, to obtain that bound, they make three critical assumptions: (i) the offline data trajectories are independently and identically distributed (i.i.d.) (see their Assumption 3), (ii) the offline data is uniformly explorative over all dimensions of the feature space (also see their Assumption 3), and (iii) the eigenfunctions of the induced NTK RKHS has finite spectrum (see their Assumption 4). The i.i.d. assumption under the RKHS space with finite dimensions (due to the finite spectrum assumption) and the well-explored dataset is critical in their proof to use a matrix concentration that does not incur an extra factor of √ Deff as it would normally do without these assumptions (see Section E, the proof of their Lemma 2). Note that the celebrated ReLU NTK does not satisfy the finite spectrum assumption (Bietti & Mairal, 2019). Moreover, we do not make any of these three assumptions above for our bound to hold. That suggests that our bound is much more general. In addition, we do not need to compute any confidence regions nor perform the inverse of a large covariance matrix. Comparing to Yin et al. (2023). During the submission of our work, a concurrent work of Yin et al. (2023) appeared online. Yin et al. (2023) study provably efficient offline RL with a general parametric function approximation that unifies the guarantees of offline RL in linear and generalized linear MDPs, and beyond with potential applications to other classes of functions in practice. We remark that the result in Yin et al. (2023) is orthogonal/complementary to our paper since they consider the parametric class with third-time differentiability which cannot apply to neural networks (not necessarily overparameterized) with non-smooth activation such as ReLU. In addition, they do not consider reward perturbing in their algorithmic design or optimization errors in their analysis. B.2 WORSE-CASE RATE OF EFFECTIVE DIMENSION In the main paper, we prove an Õ ( κH5/2d̃√ K ) sub-optimality bound which depends on the notion of effective dimension defined in Definition 2. Here we give a worst-case rate of the effective dimension d̃ for the two-layer ReLU NTK. We first briefly review the background of RKHS. LetH be an RKHS defined on X ⊆ Rd with kernel function ρ : X ×X → R. Let ⟨·, ·⟩H : H×H → R and ∥ · ∥H : H → R be the inner product and the RKSH norm on H. By the reproducing kernel property of H, there exists a feature mapping ϕ : X → H such that f(x) = ⟨f, ϕ(x)⟩H and ρ(x, x′) = ⟨ϕ(x), ϕ(x′)⟩H. We assume that the kernel function ρ is uniformly bounded, i.e. supx∈X ρ(x, x) <∞. Let L2(X ) be the space of square-integral functions on X with respect to the Lebesgue measure and let ⟨·, ·⟩L2 be the inner product on L2(X ). The kernel function ρ induces an integral operator Tρ : L2(X )→ L2(X ) defined as Tρf(x) = ∫ X ρ(x, x′)f(x′)dx′. By Mercer’s theorem (Steinwart & Christmann, 2008), Tρ has countable and positive eigenvalues {λi}i≥1 and eigenfunctions {νi}i≥1. The kernel function andH can be expressed as ρ(x, x′) = ∞∑ i=1 λiνi(x)νi(x ′), H = {f ∈ L2(X ) : ∞∑ i=1 ⟨f, νi⟩L2 λi <∞}. Now consider the NTK defined in Definition 1: Kntk(x, x ′) = Ew∼N (0,Id/d)⟨xσ ′(wTx), x′σ′(wTx′)⟩. It follows from (Bietti & Mairal, 2019, Proposition 1) that λi ≍ i−d. Thus, by (Srinivas et al., 2010, Theorem 5), the data-dependent effective dimension ofHntk can be bounded in the worst case by d̃ ≲ K ′(d+1)/(2d). 
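Before remarking on this worst-case rate, note that the data-dependent quantity in Definition 2 can be computed directly from an (approximate) NTK Gram matrix. The sketch below uses a Monte Carlo approximation of the two-layer ReLU NTK of Definition 1; the sample sizes and data are illustrative.

```python
import numpy as np

def ntk_gram(X, n_mc=10_000, seed=0):
    """Monte Carlo approximation of the ReLU NTK of Definition 1:
    K(x, x') = E_{w ~ N(0, I_d/d)} <x 1{w^T x >= 0}, x' 1{w^T x' >= 0}>."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, np.sqrt(1.0 / d), size=(n_mc, d))
    A = (X @ W.T >= 0).astype(float)            # indicator features, shape (n, n_mc)
    return (X @ X.T) * (A @ A.T) / n_mc          # elementwise product with <x, x'>

def effective_dimension(K_gram, lam):
    """Effective dimension of Definition 2: logdet(I + K/lam) / log(1 + K'/lam)."""
    K_prime = K_gram.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(K_prime) + K_gram / lam)
    return logdet / np.log(1.0 + K_prime / lam)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 16))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # inputs on the unit sphere
print(effective_dimension(ntk_gram(X), lam=1.0))
```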
We remark that this is the worst-case bound that considers uniformly over all possible realizable of training data. The effective dimension d̃ is on the other hand data-dependent, i.e. its value depends on the specific training data at hand thus d̃ can be actually much smaller than the worst-case rate. C PROOF OF THEOREM 1 AND THEOREM 2 In this section, we provide both the outline and detailed proofs of Theorem 1 and Theorem 2. C.1 TECHNICAL REVIEW AND PROOF OVERVIEW Technical Review. In what follows, we provide more detailed discussion when placing our technical contribution in the context of the related literature. Our technical result starts with the value difference lemma in Jin et al. (2021) to connect bounding the suboptimality of an offline algorithm to controlling the uncertainty quantification in the value estimates. Thus, our key technical contribution is to provably quantify the uncertainty of the perturbed value function estimates which were obtained via reward perturbing and gradient descent. This problem setting is largely different from the current analysis of overparameterized neural networks for supervised learning which does not require uncertainty quantification. Our work is not the first to consider uncertainty quantification with overparameterized neural networks, since it has been studied in Zhou et al. (2020); Nguyen-Tang et al. (2022a); Jia et al. (2022). However, there are significant technical differences between our work and these works. The work in Zhou et al. (2020); Nguyen-Tang et al. (2022a) considers contextual bandits with overparameterized neural networks trained by (S)GD and quantifies the uncertainty of the value function with explicit empirical covariance matrices. We consider general MDP and use reward perturbing to implicitly obtain uncertainty, thus requiring different proof techniques. Jia et al. (2022) is more related to our work since they consider reward perturbing with overparameterized neural networks (but they consider contextual bandits). However, our reward perturbing strategy is largely different from that in Jia et al. (2022). Specifically, Jia et al. (2022) perturbs each reward only once while we perturb each reward multiple times, where the number of perturbing times is crucial in our work and needs to be controlled carefully. We show in Theorem 1 that our reward perturbing strategy is effective in enforcing sufficient pessimism for offline learning in general MDP and the empirical results in Figure 2, Figure 3, Figure 5, and Table 2 are strongly consistent with our theoretical suggestion. Thus, our technical proofs are largely different from those of Jia et al. (2022). Finally, the idea of perturbing rewards multiple times in our algorithm is inspired by Ishfaq et al. (2021). However, Ishfaq et al. (2021) consider reward perturbing for obtaining optimism in online RL. While perturbing rewards are intuitive to obtain optimism for online RL, for offline RL, under distributional shift, it can be paradoxically difficult to properly obtain pessimism with randomization and ensemble (Ghasemipour et al., 2022), especially with neural function approximation. We show affirmatively in our work that simply taking the minimum of the randomized value functions after perturbing rewards multiple times is sufficient to obtain provable pessimism for offline RL. In addition, Ishfaq et al. (2021) do not consider neural network function approximation and optimization. 
Controlling the uncertainty of randomization (via reward perturbing) under neural networks with extra optimization errors induced by gradient descent sets our technical proof significantly apart from that of Ishfaq et al. (2021). Besides all these differences, in this work, we propose an intricately-designed data splitting technique that avoids the uniform convergence argument and could be of independent interest for studying sample-efficient RL with complex function approximation. Proof Overview. The key steps for proving Theorem 1 and Theorem 2 are highlighted in Subsection C.2 and Subsection C.3, respectively. Here, we discuss an overview of our proof strategy. The key technical challenge in our proof is to quantify the uncertainty of the perturbed value function estimates. To deal with this, we carefully control both the near-linearity of neural networks in the NTK regime and the estimation error induced by reward perturbing. A key result that we use to control the linear approximation to the value function estimates is Lemma D.3. The technical challenge in establishing Lemma D.3 is how to carefully control and propagate the optimization error incurred by gradient descent. The complete proof of Lemma D.3 is provided in Section E.3. The implicit uncertainty quantifier induced by the reward perturbing is established in Lemma D.1 and Lemma D.2, where we carefully design a series of intricate auxiliary loss functions and establish the anti-concentrability of the perturbed value function estimates. This requires a careful design of the variance of the noises injected into the rewards. To deal with removing a potentially large covering number when we quantify the implicit uncertainty, we propose our data splitting technique which is validated in the proof of Lemma D.1 in Section E.1. Moreover, establishing Lemma D.1 in the overparameterization regime induces an additional challenge since a standard analysis would result in a vacuous bound that scales with the overparameterization. We avoid this issue by carefully incorporating the use of the effective dimension in Lemma D.1. C.2 PROOF OF THEOREM 1 In this subsection, we present the proof of Theorem 1. We first decompose the suboptimality SubOpt(π̃; s) and present the main lemmas to bound the evaluation error and the summation of the implicit confidence terms, respectively. The detailed proof of these lemmas are deferred to Section D. For proof convenience, we first provide the key parameters that we use consistently throughout our proofs in Table 3. We define the model evaluation error at any (x, h) ∈ X × [H] as errh(x) = (BhṼh+1 − Q̃h)(x), (3) where Bh is the Bellman operator defined in Section 3, and Ṽh and Q̃h are the estimated (action-) state value functions returned by Algorithm 1. Using the standard suboptimality decomposition (Jin et al., 2021, Lemma 3.1), for any s1 ∈ S, SubOpt(π̃; s1) = − H∑ h=1 Eπ̃ [errh(sh, ah)] + H∑ h=1 Eπ∗ [errh(sh, ah)] + H∑ h=1 Eπ∗ [ ⟨Q̃h(sh, ·), π∗h(·|sh)− π̃h(·|sh)⟩A ] ︸ ︷︷ ︸ ≤0 , where the third term is non-positive as π̃h is greedy with respect to Q̃h. Thus, for any s1 ∈ S, we have SubOpt(π̃; s1) ≤ − H∑ h=1 Eπ̃ [errh(sh, ah)] + H∑ h=1 Eπ∗ [errh(sh, ah)] . (4) In the following main lemma, we bound the evaluation error errh(s, a). In the rest of the proof, we consider an additional parameter R and fix any δ ∈ (0, 1). Lemma C.1. 
Let m = Ω ( d3/2R−1 log3/2( √ m/R) ) R = O ( m1/2 log−3m ) , m = Ω ( K ′10(H + ψ)2 log(3K ′H/δ) ) λ > 1 K ′C2g ≥ λR ≥ max{4B̃1, 4B̃2, 2 √ 2λ−1K ′(H + ψ + γh,1)2 + 4γ2h,2}, η ≤ (λ+K ′C2g )−1, ψ > ι, σh ≥ β,∀h ∈ [H], (5) where B̃1, B̃2, γh,1, γh,2, and ι are defined in Table 3,Cg is a absolute constant given in Lemma G.1, and R is an additional parameter. Let M = log HSAδ / log 1 1−Φ(−1) where Φ(·) is the cumulative distribution function of the standard normal distribution. With probability at least 1−MHm−2−2δ, for any (x, h) ∈ X × [H], we have −ι ≤ errh(x) ≤ σh(1 + √ 2 log(MSAH/δ)) · ∥g(x;W0)∥Λ−1h + ι where Λh := λImd + ∑ k∈Ih g(x k h;W0)g(x k h;W0) T ∈ Rmd×md. Now we can prove Theorem 1. Proof of Theorem 1. Theorem 1 can directly follow from substituting Lemma C.1 into Equation (4). We now only need to simplify the conditions in Equation (5). To satisfy Equation (5), it suffices to set λ = 1 + HK ψ = 1 > ι σh = β 8CgR 4/3m−1/6 √ logm ≤ 1 λ−1K ′H2 ≥ 2 B̃1 ≤ √ 2K ′(H + ψ + γh,1)2 + λγ2h,2 + 1 √ K ′CgR 1/3m−1/6 √ logm ≤ 1 B̃2 ≤ K ′CgR4/3m−1/6 √ logm ≤ 1. Combining with Equation 5, we have λ = 1 + HK ψ = 1 > ι σh = β η ≲ (λ+K ′)−1 m ≳ max { R8 log3m,K ′10(H + 1)2 log(3K ′H/δ), d3/2R−1 log3/2( √ m/R),K ′6R8 log3m } m ≳ [2K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) + 1]3K ′3R log3m 4 √ K ′(H + 1 + β √ log(K ′M/δ)) + 4β √ d log(dK ′M/δ) ≤ R ≲ K ′. (6) Note that with the above choice of λ = 1 + HK , we have K ′ log λ = log(1 + 1 K ′ )K ′ ≤ log 3 < 2. We further set that m ≳ B2K ′2d log(3H/δ), we have β = BK ′√ m (2 √ d+ √ 2 log(3H/ δ))λ−1/2Cg + λ 1/2B + (H + ψ) [√ d̃h log(1 + K ′ λ ) +K ′ log λ+ 2 log(3H/δ) ] ≤ 1 + λ1/2B + (H + 1) [√ d̃h log(1 + K ′ λ ) + 2 + 2 log(3H/δ) ] = o( √ K ′). Thus, 4 √ K ′(H + 1 + β √ log(K ′M/δ)) + 4β √ d log(dK ′M/δ) << K ′ for K ′ large enough. Therefore, there exists R that satisfies Equation (6). We now only need to verify ι < 1. We have ι0 = Bm −1/2(2 √ d+ √ 2 log(3H/δ)) ≤ 1/3, ι1 = CgR 4/3m−1/6 √ logm+ Cg ( B̃1 + B̃2 + λ −1(1− ηλ)J ( K ′(H + 1 + γh,1) 2 + λγ2h,2 )) ≲ 1/3 if (1− ηλ)J [ K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) ] ≲ 1. (7) Note that (1− ηλ)J ≤ e−ηλJ , K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) ≲ K ′H2λβ2d log(dK ′M/δ). Thus, Equation (7) is satisfied if J ≳ ηλ log ( K ′H2λβ2d log(dK ′M/δ) ) . Finally note that ι2 ≤ ι1. Rearranging the derived conditions here gives the complete parameter conditions in Theorem 1. Specifically, the polynomial form of m is m ≳ max{R8 log3m,K ′10(H + 1)2 log(3K ′H/δ), d3/2R−1 log3/2( √ m/R),K ′6R8 log3m, B2K ′2d log(3H/δ)}, m ≳ [2K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) + 1]3K ′3R log3m. C.3 PROOF OF THEOREM 2 In this subsection, we give a detailed proof of Theorem 2. We first present intermediate lemmas whose proofs are deferred to Section D. For any h ∈ [H] and k ∈ Ih = [(H − h)K ′ +1, . . . , (H − h+ 1)K ′], we define the filtration Fkh = σ ( {(sth′ , ath′ , rth′)} t≤k h′∈[H] ∪ {(s k+1 h′ , a k+1 h′ , r k+1 h′ )}h′≤h−1 ∪ {(s k+1 h , a k+1 h )} ) . Let Λkh := λI + ∑ t∈Ik,t≤k g(xth;W0)g(x t h;W0) T , β̃ := β(1 + 2 √ log(SAH/δ)). In the following lemma, we connect the expected sub-optimality of π̃ to the summation of the uncertainty quantifier at empirical data. Lemma C.2. Suppose that the conditions in Theorem 1 all hold. With probability at least 1 − MHm−2 − 3δ, SubOpt(π̃) ≤ 2β̃ K ′ H∑ h=1 ∑ k∈Ih Eπ∗ [ ∥g(xh;W0)∥(Λkh)−1 ∣∣∣∣Fk−1h , sk1]+ 163K ′H log(log2(K ′H)/δ) + 2 K ′ + 2ι, Lemma C.3. 
Under Assumption 5.2, for any h ∈ [H] and fixed W0, with probability at least 1− δ,∑ k∈Ih Eπ∗ [ ∥g(xh;W0)∥(Λkh)−1 ∣∣∣∣Fk−1, sk1] ≤ ∑ k∈Ih κ∥g(xh;W0)∥(Λkh)−1 + κ √ K ′ log(1/δ) λ . Lemma C.4. If λ ≥ C2g and m = Ω(K ′4 log(K ′H/δ)), then with probability at least 1− δ, for any h ∈ [H], we have ∑ k∈Ih ∥g(xh;W0)∥2(Λkh)−1 ≤ 2d̃h log(1 +K ′/λ) + 1. where d̃h is the effective dimension defined in Definition 2. Proof of Theorem 2. Theorem 2 directly follows from Lemma C.2-C.3-C.4 using the union bound. D PROOF OF LEMMA C.1 In this section, we provide the proof for Lemma C.1. We set up preparation for all the results in the rest of the paper and provide intermediate lemmas that we use to prove Lemma C.1. The detailed proofs of these intermediate lemmas are deferred to Section E. D.1 PREPARATION To prepare for the lemmas and proofs in the rest of the paper, we define the following quantities. Recall that we use abbreviation x = (s, a) ∈ X ⊂ Sd−1 and xkh = (skh, akh) ∈ X ⊂ Sd−1. For any h ∈ [H] and i ∈ [M ], we define the perturbed loss function L̃ih(W ) := 1 2 ∑ k∈Ih ( f(xkh;W )− ỹ i,k h ) )2 + λ 2 ∥W + ζih −W0∥22, (8) where ỹi,kh := r k h + Ṽh+1(s k h+1) + ξ i,k h , Ṽh+1 is computed by Algorithm 1 at Line 10 for timestep h+1, and {ξi,kh } and ζih are the Gaussian noises obtained at Line 5 of Algorithm 1. Here the subscript h and the superscript i in L̃ih(W ) emphasize the dependence on the ensemble sample i and timestep h. The gradient descent update rule of L̃ih(W ) is W̃ i,(j+1) h = W̃ i,(j) h − η∇L̃ i h(W ), (9) where W̃ i,(0)h =W0 is the initialization parameters. Note that W̃ ih = GradientDescent(λ, η, J, D̃ih, ζih,W0) = W̃ i,(J) h , where W̃ ih is returned by Line 7 of Algorithm 1. We consider a non-perturbed auxiliary loss function Lh(W ) := 1 2 ∑ k∈Ih ( f(xkh;W )− ykh) )2 + λ 2 ∥W −W0∥22, (10) where ykh := r k h + Ṽh+1(s k h+1). Note that Lh(W ) is simply a non-perturbed version of L̃ih(W ) where we drop all the noises {ξ i,k h } and {ζih}. We consider the gradient update rule for Lh(W ) as follows Ŵ (j+1) h = Ŵ (j) h − η∇Lh(W ), (11) where Ŵ (0)h =W0 is the initialization parameters. To correspond with W̃ i h, we denote Ŵh := Ŵ (J) h . (12) We also define the auxiliary loss functions for both non-perturbed and perturbed data in the linear model with feature g(·;W0) as follows L̃i,linh (W ) := 1 2 ∑ k∈Ih ( ⟨g(xkh;W0),W ⟩ − ỹ i,k h )2 + λ 2 ∥W + ζih −W0∥22, (13) Llinh (W ) := 1 2 ∑ k∈Ih ( ⟨g(xkh;W0),W ⟩ − ykh )2 + λ 2 ∥W −W0∥22. (14) We consider the auxiliary gradient updates for L̃i,linh (W ) as W̃ i,lin,(j+1) h = W̃ i,lin,(j) h − η∇L̃ i,lin h (W ), (15) Ŵ lin,(j+1) h = Ŵ lin,(j) h − η∇L̃ lin h (W ), (16) where W̃ i,lin,(0)h = Ŵ i,lin,(0) h = W0 for all i, h. Finally, we define the least-square solutions to the auxili
1. What is the main contribution of the paper, and how does it combine randomized value functions and the pessimism principle?
2. What are the strengths of the proposed algorithm, particularly regarding its time complexity and data splitting technique?
3. What are the weaknesses of the paper, specifically regarding its Assumption 5.1 and its sensitivity to model errors?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any comparisons or differences between the proposed method and other works in the field, such as Xu & Liang (2022) and a recent paper on pessimistic offline RL with parametric function classes?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes PEturbed-Reward Value Iteration (PERVI), which combines the randomized value function idea with the pessimism principle. PERVI only needs $O(1)$ time complexity for action selection, while LCB-based algorithms require at least $\Omega(K^2)$, where $K$ is the total number of trajectories in the offline data. It proposes a novel data splitting technique that helps remove the potentially large log covering number in the learning bound. PERVI yields a provable uncertainty quantifier with overparameterized neural networks and achieves an $\tilde{O}(\kappa H^{5/2}\tilde{d}/\sqrt{K})$ sub-optimality. The statistical and computational efficiency of PERVI is validated with an empirical evaluation on a wide set of synthetic and real-world datasets. Strengths And Weaknesses Strength: This paper combines the randomized value function idea and the pessimism principle. In addition, this paper proposes a novel data splitting technique that helps remove the dependence on the potentially large log covering number in the learning bound. The authors empirically corroborate the statistical and computational efficiency of the proposed algorithm on a wide set of synthetic and real-world datasets. Weakness: 1. Your Assumption 5.1 requires that for any $(H+1)$-bounded function, the Bellman update maps it to $\mathcal{Q}^*$. What will happen if your function class has an $\epsilon$-misspecification error, i.e., $\inf_{V}\sup_{\|V'\|_\infty\le H+1}\|B_h V - V'\|_\infty\le\epsilon$? How will the $\epsilon$ model error affect your results? You don't need to derive the result for this case; explaining it in a few sentences is fine. 2. While Appendix B.1 already provides a nice comparison with Xu & Liang (2022), they consider a different setting where only the trajectory reward is observed. If the per-step reward is available, what would their result look like and how would it compare to PERVI? I am asking this since in your setting the per-step reward is available, so I am wondering which of the two methods would be better in this case. Minor: The recent paper https://arxiv.org/pdf/2210.00750.pdf also considers pessimistic offline RL with a parametric function class. How does your result compare to theirs, since both results contain the similar measurement $\sum_{h=1}^H \|g(x; W_0)\|_{\Lambda_h^{-1}}$ in the sub-optimality bound? Clarity, Quality, Novelty And Reproducibility The writing of this paper is clear and the quality of this paper is high.
ICLR
Title VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation Abstract We propose a novel algorithm for offline reinforcement learning called Value Iteration with Perturbed Rewards (VIPeR), which amalgamates the pessimism principle with random perturbations of the value function. Most current offline RL algorithms explicitly construct statistical confidence regions to obtain pessimism via lower confidence bounds (LCB), which cannot easily scale to complex problems where a neural network is used to estimate the value functions. Instead, VIPeR implicitly obtains pessimism by simply perturbing the offline data multiple times with carefully-designed i.i.d. Gaussian noises to learn an ensemble of estimated state-action value functions and acting greedily with respect to the minimum of the ensemble. The estimated state-action values are obtained by fitting a parametric model (e.g., neural networks) to the perturbed datasets using gradient descent. As a result, VIPeR only needs $O(1)$ time complexity for action selection, while LCB-based algorithms require at least $\Omega(K)$, where $K$ is the total number of trajectories in the offline data. We also propose a novel data-splitting technique that helps remove a factor involving the log of the covering number in our bound. We prove that VIPeR yields a provable uncertainty quantifier with overparameterized neural networks and enjoys a bound on sub-optimality of $\tilde{O}(\kappa H\tilde{d}/\sqrt{K})$, where $\tilde{d}$ is the effective dimension, $H$ is the horizon length, and $\kappa$ measures the distributional shift. We corroborate the statistical and computational efficiency of VIPeR with an empirical evaluation on a wide set of synthetic and real-world datasets. To the best of our knowledge, VIPeR is the first algorithm for offline RL that is provably efficient for general Markov decision processes (MDPs) with neural network function approximation.
1 INTRODUCTION
Offline reinforcement learning (offline RL) (Lange et al., 2012; Levine et al., 2020) is a practical paradigm of RL for domains where active exploration is not permissible. Instead, the learner can access a fixed dataset of previous experiences available a priori. Offline RL finds applications in several critical domains where exploration is prohibitively expensive or even implausible, including healthcare (Gottesman et al., 2019; Nie et al., 2021), recommendation systems (Strehl et al., 2010; Thomas et al., 2017), and econometrics (Kitagawa & Tetenov, 2018; Athey & Wager, 2021), among others. The recent surge of interest in this area and renewed research efforts have yielded several important empirical successes (Chen et al., 2021; Wang et al., 2023; 2022; Meng et al., 2021). A key challenge in offline RL is to efficiently exploit the given offline dataset to learn an optimal policy in the absence of any further exploration.
The dominant approaches to offline RL address this challenge by incorporating uncertainty from the offline dataset into decision-making (Buckman et al., 2021; Jin et al., 2021; Xiao et al., 2021; Nguyen-Tang et al., 2022a; Ghasemipour et al., 2022; An et al., 2021; Bai et al., 2022). The main component of these uncertainty-aware approaches to offline RL is the pessimism principle, which constrains the learned policy to the offline data and leads to various lower confidence bound (LCB)-based algorithms. However, these methods are not easily extended or scaled to complex problems where neural function approximation is used to estimate the value functions. In particular, it is costly to explicitly compute the statistical confidence regions of the model or value functions if the class of function approximator is given by overparameterized neural networks. For example, constructing the LCB for neural offline contextual bandits (NguyenTang et al., 2022a) and RL (Xu & Liang, 2022) requires computing the inverse of a large covariance matrix whose size scales with the number of parameters in the neural network. This computational cost hinders the practical application of these provably efficient offline RL algorithms. Therefore, a largely open question is how to design provably computationally efficient algorithms for offline RL with neural network function approximation. In this work, we present a solution based on a computational approach that combines the pessimism principle with randomizing the value function (Osband et al., 2016; Ishfaq et al., 2021). The algorithm is strikingly simple: we randomly perturb the offline rewards several times and act greedily with respect to the minimum of the estimated state-action values. The intuition is that taking the minimum from an ensemble of randomized state-action values can efficiently achieve pessimism with high probability while avoiding explicit computation of statistical confidence regions. We learn the state-action value function by training a neural network using gradient descent (GD). Further, we consider a novel data-splitting technique that helps remove the dependence on the potentially large log covering number in the learning bound. We show that the proposed algorithm yields a provable uncertainty quantifier with overparameterized neural network function approximation and achieves a sub-optimality bound of Õ(κH5/2d̃/ √ K), where K is the total number of episodes in the offline data, d̃ is the effective dimension, H is the horizon length, and κ measures the distributional shift. We achieve computational efficiency since the proposed algorithm only needsO(1) time complexity for action selection, while LCB-based algorithms require O(K2) time complexity. We empirically corroborate the statistical and computational efficiency of our proposed algorithm on a wide set of synthetic and real-world datasets. The experimental results show that the proposed algorithm has a strong advantage in computational efficiency while outperforming LCB-based neural algorithms. To the best of our knowledge, ours is the first offline RL algorithm that is both provably and computationally efficient in general MDPs with neural network function approximation. 2 RELATED WORK Randomized value functions for RL. For online RL, Osband et al. (2016; 2019) were the first to explore randomization of estimates of the value function for exploration. 
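To make the perturb-and-take-minimum idea concrete, the following is a minimal sketch on a toy regression problem, with closed-form ridge regression standing in for a neural network trained by gradient descent. The feature dimensions, sample size, and noise scale below are illustrative assumptions rather than quantities from the analysis, and `ridge_fit` is a hypothetical helper introduced only for this sketch.

```python
# Minimal sketch of implicit pessimism via reward perturbation: perturb the
# targets M times with i.i.d. Gaussian noise, fit one model per perturbed
# dataset, and act with respect to the minimum of the ensemble.
import numpy as np

rng = np.random.default_rng(0)
K, d, M, lam, sigma = 500, 16, 10, 1.0, 1.0

X = rng.normal(size=(K, d))                       # offline state-action features
theta_true = rng.normal(size=d) / np.sqrt(d)
y = X @ theta_true + 0.1 * rng.normal(size=K)     # observed regression targets

def ridge_fit(X, y, lam):
    """Closed-form ridge solution; stands in for gradient descent on the
    perturbed regularized squared loss."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Perturb the targets M times and fit one model on each perturbed dataset.
thetas = [ridge_fit(X, y + sigma * rng.normal(size=K), lam) for _ in range(M)]

x_query = rng.normal(size=d)
ensemble_preds = np.array([x_query @ th for th in thetas])

# Implicit pessimism: the minimum of the ensemble sits below the unperturbed
# fit with high probability, without forming any confidence region or
# inverting any covariance matrix.
q_pessimistic = ensemble_preds.min()
q_plain = x_query @ ridge_fit(X, y, lam)
print(q_pessimistic, q_plain)
```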
Their approach was inspired by posterior sampling for RL (Osband et al., 2013), which samples a value function from a posterior distribution and acts greedily with respect to the sampled function. Concretely, Osband et al. (2016; 2019) generate randomized value functions by injecting Gaussian noise into the training data and fitting a model on the perturbed data. Jia et al. (2022) extended the idea of perturbing rewards to online contextual bandits with neural function approximation. Ishfaq et al. (2021) obtained a provably efficient method for online RL with general function approximation using the perturbed rewards. While randomizing the value function is an intuitive approach to obtaining optimism in online RL, obtaining pessimism from the randomized value functions can be tricky in offline RL. Indeed, Ghasemipour et al. (2022) point out a critical flaw in several popular existing methods for offline RL that update an ensemble of randomized Q-networks toward a shared pessimistic temporal difference target. In this paper, we propose a simple fix to obtain pessimism properly by updating each randomized value function independently and taking the minimum over an ensemble of randomized value functions to form a pessimistic value function. Offline RL with function approximation. Provably efficient offline RL has been studied extensively for linear function approximation. Jin et al. (2021) were the first to show that pessimistic value iteration is provably efficient for offline linear MDPs. Xiong et al. (2023); Yin et al. (2022) improved upon Jin et al. (2021) by leveraging variance reduction. Xie et al. (2021) proposed a Bellman-consistency assumption with general function approximation, which improves the bound of Jin et al. (2021) by a factor of √ d when realized to finite action space and linear MDPs. Wang et al. (2021); Zanette (2021) studied the statistical hardness of offline RL with linear function approximation via exponential lower bound, and Foster et al. (2021) suggested that only realizability and strong uniform data coverage are not sufficient for sample-efficient offline RL. Beyond linearity, some works study offline RL for general function approximation, both parametric and nonparametric. These approaches are either based on Fitted-Q Iteration (FQI) (Munos & Szepesvári, 2008; Le et al., 2019; Chen & Jiang, 2019; Duan et al., 2021a;b; Hu et al., 2021; Nguyen-Tang et al., 2022b) or the pessimism principle (Uehara & Sun, 2022; Nguyen-Tang et al., 2022a; Jin et al., 2021). While pessimism-based algorithms avoid the strong assumptions of data coverage used by FQI-based algorithms, they require an explicit computation of valid confidence regions and possibly the inverse of a large covariance matrix which is computationally prohibitive and does not scale to complex function approximation setting. This limits the applicability of pessimism-based, provably efficient offline RL to practical settings. A very recent work Bai et al. (2022) estimates the uncertainty for constructing LCB via the disagreement of bootstrapped Q-functions. However, the uncertainty quantifier is only guaranteed in linear MDPs and must be computed explicitly. We provide a more detailed discussion of our technical contribution in the context of existing literature in Section C.1. 3 PRELIMINARIES In this section, we provide basic background on offline RL and overparameterized neural networks. 
3.1 EPISODIC TIME-INHOMOGENOUS MARKOV DECISION PROCESSES (MDPS) A finite-horizon Markov decision process (MDP) is denoted as the tupleM = (S,A,P, r,H, d1), where S is an arbitrary state space, A an arbitrary action space, H the episode length, and d1 the initial state distribution. We assume that SA := |S||A| is finite but arbitrarily large, e.g., it can be as large as the total number of atoms in the observable universe ≈ 1082. Let P(S) denote the set of probability measures over S. A time-inhomogeneous transition kernel P = {Ph}Hh=1, where Ph : S × A → P(S) maps each state-action pair (sh, ah) to a probability distribution Ph(·|sh, ah). Let r = {rh}Hh=1 where rh : S × A → [0, 1] is the mean reward function at step h. A policy π = {πh}Hh=1 assigns each state sh ∈ S to a probability distribution, πh(·|sh), over the action space and induces a random trajectory s1, a1, r1, . . . , sH , aH , rH , sH+1 where s1 ∼ d1, ah ∼ πh(·|sh), sh+1 ∼ Ph(·|sh, ah). We define the state value function V πh ∈ RS and the actionstate value function Qπh ∈ RS×A at each timestep h as Qπh(s, a) = Eπ[ ∑H t=h rt|sh = s, ah = a], and V πh (s) = Ea∼π(·|s) [Qπh(s, a)], where the expectation Eπ is taken with respect to the randomness of the trajectory induced by π. Let Ph denote the transition operator defined as (PhV )(s, a) := Es′∼Ph(·|s,a)[V (s′)]. For any V : S → R, we define the Bellman operator at timestep h as (BhV )(s, a) := rh(s, a) + (PhV )(s, a). The Bellman equations are given as follows. For any (s, a, h) ∈ S ×A× [H], Qπh(s, a) = (BhV πh+1)(s, a), V πh (s) = ⟨Qπh(s, ·), πh(·|s)⟩A, V πH+1(s) = 0, where [H] := {1, 2, . . . ,H}, and ⟨·, ·⟩A denotes the summation over all a ∈ A. We define an optimal policy π∗ as any policy that yields the optimal value function, i.e. V π ∗ h (s) = supπ V π h (s) for any (s, h) ∈ S × [H]. For simplicity, we denote V π∗h and Qπ ∗ h as V ∗ h and Q ∗ h, respectively. The Bellman optimality equation can be written as Q∗h(s, a) = (BhV ∗h+1)(s, a), V ∗h (s) = max a∈A Q∗h(s, a), V ∗ H+1(s) = 0. Define the occupancy density as dπh(s, a) := P((sh, ah) = (s, a)|π) which is the probability that we visit state s and take action a at timestep h if we follow the policy π. We denote dπ ∗ h by d ∗ h. Offline regime. In the offline regime, the learner has access to a fixed dataset D = {(sth, ath, rth, sth+1)} t∈[K] h∈[H] generated a priori by some unknown behaviour policy µ = {µh}h∈[H]. Here, K is the total number of trajectories, and ath ∼ µh(·|sth), sth+1 ∼ Ph(·|sth, ath) for any (t, h) ∈ [K] × [H]. Note that we allow the trajectory at any time t ∈ [K] to depend on the trajectories at previous times. The goal of offline RL is to learn a policy π̂, based on (historical data) D, such that π̂ achieves small sub-optimality, which we define as SubOpt(π̂) := Es1∼d1 [SubOpt(π̂; s1)] , where SubOpt(π̂; s1) := V π ∗ 1 (s1)− V π̂1 (s1). Algorithm 1 Value Iteration with Perturbed Rewards (VIPeR) 1: Input: Offline data D = {(skh, akh, rkh)} k∈[K] h∈[H], a parametric function family F = {f(·, ·;W ) : W ∈ W} ⊂ {X → R} (e.g. neural networks), perturbed variances {σh}h∈[H], number of bootstraps M , regularization parameter λ, step size η, number of gradient descent steps J , and cutoff margin ψ, split indices {Ih}h∈[H] where Ih := [(H − h)K ′ + 1, . . . , (H − h+ 1)K ′] 2: Initialize ṼH+1(·)← 0 and initialize f(·, ·;W ) with initial parameter W0 3: for h = H, . . . , 1 do 4: for i = 1, . . . 
,M do 5: Sample {ξk,ih }k∈Ih ∼ N (0, σ2h) and ζih = {ζ j,i h }j∈[d] ∼ N (0, σ2hId) 6: Perturb the dataset D̃ih ← {skh, akh, rkh + Ṽh+1(skh+1) + ξ k,i h }k∈Ih ▷ Perturbation 7: Let W̃ ih ← GradientDescent(λ, η, J, D̃ih, ζih,W0) (Algorithm 2) ▷ Optimization 8: end for 9: Compute Q̃h(·, ·)← min{mini∈[M ]f(·, ·; W̃ ih), (H − h+ 1)(1 + ψ)}+ ▷ Pessimism 10: π̃h ← argmaxπh⟨Q̃h, πh⟩ and Ṽh ← ⟨Q̃h, π̃h⟩ ▷ Greedy 11: end for 12: Output: π̃ = {π̃h}h∈[H]. Notation. For simplicity, we write xth = (sth, ath) and x = (s, a). We write Õ(·) to hide logarithmic factors of the problem parameters (d,H,K,m, 1/δ) in the standard Big-Oh notation. We use Ω(·) as the standard Omega notation. We write u ≲ v if u = O(v) and write u ≳ v if v ≲ u. We write A ⪯ B iff B −A is a positive definite matrix. Id denotes the d× d identity matrix. 3.2 OVERPARAMETERIZED NEURAL NETWORKS In this paper, we consider neural function approximation setting where the state-action value function is approximated by a two-layer neural network. For simplicity, we denoteX := S×A and view it as a subset of Rd. Without loss of generality, we assume X ⊂ Sd−1 := {x ∈ Rd : ∥x∥2 = 1}. We consider a standard two-layer neural network: f(x;W, b) = 1√ m ∑m i=1 biσ(w T i x), where m is an even number, σ(·) = max{·, 0} is the ReLU activation function (Arora et al., 2018), and W = (wT1 , . . . , w T m) T ∈ Rmd. During the training, we initialize (W, b) via the symmetric initialization scheme (Gao et al., 2019) as follows: For any i ≤ m2 , wi = wm2 +i ∼ N (0, Id/d), and bm 2 +i = −bi ∼ Unif({−1, 1}).1 During the training, we optimize over W while the bi are kept fixed, thus we write f(x;W, b) as f(x;W ). Denote g(x;W ) = ∇W f(x;W ) ∈ Rmd, and let W0 be the initial parameters of W . We assume that the neural network is overparameterized, i.e, the width m is sufficiently larger than the number of samples K. Overparameterization has been shown to be effective in studying the convergence and the interpolation behaviour of neural networks (Arora et al., 2019; Allen-Zhu et al., 2019; Hanin & Nica, 2020; Cao & Gu, 2019; Belkin, 2021). Under such an overparameterization regime, the dynamics of the training of the neural network can be captured using the framework of the neural tangent kernel (NTK) (Jacot et al., 2018). 4 ALGORITHM In this section, we present the proposed algorithm called Value Iteration with Perturbed Rewards, or VIPeR; see Algorithm 1 for the pseudocode. The key idea underlying VIPeR is to train a parametric model (e.g., a neural network) on a perturbed-reward dataset several times and act pessimistically by picking the minimum over an ensemble of estimated state-action value functions. In particular, at each timestep h ∈ [H], we drawM independent samples of zero-mean Gaussian noise with variance σh. We use these samples to perturb the sum of the observed rewards, rkh, and the estimated value function with a one-step lookahead, i.e., Ṽh+1(skh+1) (see Line 6 of Algorithm 1). The weights W̃ i h are then updated by minimizing the perturbed regularized squared loss on {D̃ih}i∈[M ] using gradient descent (Line 7). We pick the value function pessimistically by selecting the minimum over the finite ensemble. The chosen value function is truncated at (H − h+ 1)(1 + ψ) (see Line 9), where 1This symmetric initialization scheme makes f(x;W0) = 0 and ⟨g(x;W0),W0⟩ = 0 for any x. ψ ≥ 0 is a small cutoff margin (more on this when we discuss the theoretical analysis). 
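The following is a minimal PyTorch sketch of Lines 4-10 of Algorithm 1 for a single timestep h, using the two-layer network and symmetric initialization of Section 3.2. The class `SymmetricTwoLayer`, the helper `train_perturbed`, the synthetic batch, and the optimization settings are assumptions made for illustration; the targets y stand in for the precomputed r + Ṽ_{h+1}(s').

```python
# Sketch of one timestep of Algorithm 1: perturb targets M times, run gradient
# descent on each perturbed regularized squared loss, then take the truncated
# minimum of the ensemble as the pessimistic value estimate.
import torch
import torch.nn as nn

class SymmetricTwoLayer(nn.Module):
    """f(x; W) = (1/sqrt(m)) * sum_i b_i * relu(w_i^T x), with b fixed and
    paired so that f(x; W_0) = 0 at initialization (symmetric init)."""
    def __init__(self, d, m):
        super().__init__()
        assert m % 2 == 0
        w_half = torch.randn(m // 2, d) / (d ** 0.5)          # w_i ~ N(0, I_d / d)
        self.W = nn.Parameter(torch.cat([w_half, w_half.clone()], dim=0))
        b_half = torch.randint(0, 2, (m // 2,)).float() * 2 - 1
        self.register_buffer("b", torch.cat([b_half, -b_half]))
        self.m = m

    def forward(self, x):
        return (torch.relu(x @ self.W.T) * self.b).sum(dim=-1) / (self.m ** 0.5)

def train_perturbed(x, y, sigma, lam, eta, J, d, m):
    """One ensemble member: perturb targets and the regularizer, then run J
    gradient-descent steps on the perturbed regularized squared loss."""
    net = SymmetricTwoLayer(d, m)
    W0 = net.W.detach().clone()
    xi = sigma * torch.randn_like(y)              # reward perturbation (Lines 5-6)
    zeta = sigma * torch.randn_like(W0)           # regularization perturbation
    opt = torch.optim.SGD(net.parameters(), lr=eta)
    for _ in range(J):
        opt.zero_grad()
        loss = 0.5 * ((net(x) - (y + xi)) ** 2).sum() \
             + 0.5 * lam * ((net.W + zeta - W0) ** 2).sum()
        loss.backward()
        opt.step()
    return net

# Illustrative synthetic data for one timestep h.
torch.manual_seed(0)
K_h, d, m, M, H, h = 256, 8, 64, 10, 5, 3
x = torch.randn(K_h, d)
x = x / x.norm(dim=-1, keepdim=True)              # x on the unit sphere
y = torch.rand(K_h) * (H - h + 1)                 # stand-in for r + V_{h+1}(s')

ensemble = [train_perturbed(x, y, sigma=1.0, lam=1.0, eta=1.0 / (1.0 + K_h),
                            J=200, d=d, m=m) for _ in range(M)]

with torch.no_grad():
    preds = torch.stack([net(x) for net in ensemble])        # (M, K_h)
    psi = 1.0
    # Pessimism + truncation (Line 9): clip the ensemble minimum to
    # [0, (H - h + 1)(1 + psi)].
    q_tilde = preds.min(dim=0).values.clamp(0.0, (H - h + 1) * (1 + psi))
print(q_tilde[:5])
```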
The returned policy is greedy with respect to the truncated pessimistic value function (see Line 10). Algorithm 2 GradientDescent(λ, η, J, D̃ih, ζih,W0) 1: Input: Regularization parameter λ, step size η, number of gradient descent steps J , perturbed dataset D̃ih = {skh, akh, rkh + Ṽh+1(s k h+1) + ξ t,i h }k∈Ih , regularization per- turber ζih, initial parameter W0 2: L(W ) := 12 ∑ k∈Ih(f(s k h, a k h;W ) − (rkh + Ṽh+1(s k h+1) + ξ k,i h )) 2 + λ2 ∥W + ζ i h −W0∥22 3: for j = 0, . . . , J − 1 do 4: Wj+1 ←Wj − η∇L(Wj) 5: end for 6: Output: WJ . It is important to note that we split the trajectory indices [K] evenly into H disjoint buckets [K] = ∪h∈[H]Ih, where Ih = [(H − h)K ′ + 1, . . . , (H − h + 1)K ′] for K ′ := ⌊K/H⌋2, as illustrated in Figure 1. The estimated Q̃h is thus obtained only from the offline data with (trajectory) indices from Ih along with Ṽh+1. This novel design removes the data dependence structure in offline RL with function approximation (Nguyen-Tang et al., 2022b) and avoids a factor involving the log of the covering number in the bound on the sub-optimality of Algorithm 1, as we show in Section D.1. To deal with the non-linearity of the underlying MDP, we use a two-layer fully connected neural network as the parametric function family F in Algorithm 1. In other words, we approximate the state-action values: f(x;W ) = 1√ m ∑m i=1 biσ(w T i x), as described in Section 3.2. We use two-layer neural networks to simplify the computational analysis. We utilize gradient descent to train the state-action value functions {f(·, ·; W̃ ih)}i∈[M ], on perturbed rewards. The use of gradient descent is for the convenience of computational analysis, and our results can be extended to stochastic gradient descent by leveraging recent advances in the theory of deep learning (Allen-Zhu et al., 2019; Cao & Gu, 2019), albeit with a more involved analysis. Existing offline RL algorithms utilize estimates of statistical confidence regions to achieve pessimism in the offline setting. Explicitly constructing these confidence bounds is computationally expensive in complex problems where a neural network is used for function approximation. For example, the lower-confidence-bound-based algorithms in neural offline contextual bandits (NguyenTang et al., 2022a) and RL (Xu & Liang, 2022) require computing the inverse of a large covariance matrix with the size scaling with the number of network parameters. This is computationally prohibitive in most practical settings. Algorithm 1 (VIPeR) avoids such expensive computations while still obtaining provable pessimism and guaranteeing a rate of Õ( 1√ K ) on the sub-optimality, as we show in the next section. 5 SUB-OPTIMALITY ANALYSIS Next, we provide a theoretical guarantee on the sub-optimality of VIPeR for the function approximation class, F , represented by (overparameterized) neural networks. Our analysis builds on the recent advances in generalization and optimization of deep neural networks (Arora et al., 2019; Allen-Zhu et al., 2019; Hanin & Nica, 2020; Cao & Gu, 2019; Belkin, 2021) that leverage the observation that the dynamics of the neural parameters learned by (stochastic) gradient descent can be captured by the corresponding neural tangent kernel (NTK) space (Jacot et al., 2018) when the network is overparameterized. Next, we recall some definitions and state our key assumptions, formally. Definition 1 (NTK (Jacot et al., 2018)). 
The NTK kernel Kntk : X × X → R is defined as Kntk(x, x ′) = Ew∼N (0,Id/d)⟨xσ ′(wTx), x′σ′(wTx′)⟩, where σ′(u) = 1{u ≥ 0}. 2Without loss of generality, we assume K/H ∈ N. Let Hntk denote the reproducing kernel Hilbert space (RKHS) induced by the NTK, Kntk. SinceKntk is a universal kernel (Ji et al., 2020), we have that Hntk is dense in the space of continuous functions on (a compact set) X = S ×A (Rahimi & Recht, 2008). Definition 2 (Effective dimension). For any h ∈ [H], the effective dimension of the NTK matrix on data {xkh}k∈Ih is defined as d̃h := logdet(IK′ +Kh/λ) log(1 +K ′/λ) , where Kh := [Kntk(xih, x j h)]i,j∈Ih is the Gram matrix of Kntk on the data {xkh}k∈Ih . We further define d̃ := maxh∈[H] d̃h. Remark 1. Intuitively, the effective dimension d̃h measures the number of principal dimensions over which the projection of the data {xkh}k∈Ih in the RKHSHntk is spread. It was first introduced by Valko et al. (2013) for kernelized contextual bandits and was subsequently adopted by Yang & Wang (2020) and Zhou et al. (2020) for kernelized RL and neural contextual bandits, respectively. The effective dimension is data-dependent and can be bounded by d̃ ≲ K ′(d+1)/(2d) in the worst case (see Section B for more details).3 Definition 3 (RKHS of the infinite-width NTK). Define Q∗ := {f(x) = ∫ Rd c(w) Txσ′(wTx)dw : supw ∥c(w)∥2 p0(w) < B}, where c : Rd → Rd is any function, p0 is the probability density function of N (0, Id/d), and B is some positive constant. We make the following assumption about the regularity of the underlying MDP under function approximation. Assumption 5.1 (Completeness). For any V : S → [0, H + 1] and any h ∈ [H], BhV ∈ Q∗.4 Assumption 5.1 ensures that the Bellman operator Bh can be captured by an infinite-width neural network. This assumption is mild as Q∗ is a dense subset of Hntk (Gao et al., 2019, Lemma C.1) when B = ∞, thus Q∗ is an expressive function class when B is sufficiently large. Moreover, similar assumptions have been used in many prior works on provably efficient RL with function approximation (Cai et al., 2019; Wang et al., 2020; Yang et al., 2020; Nguyen-Tang et al., 2022b). Next, we present a bound on the suboptimality of the policy π̃ returned by Algorithm 1. Recall that we use the initialization scheme described in Section 3.2. Fix any δ ∈ (0, 1). Theorem 1. Let σh = σ := 1 + λ 1 2B + (H + 1) [ d̃ log(1 +K ′/λ) + 2 + 2 log(3H/δ) ] 1 2 . Let m = poly(K ′, H, d,B, d̃, λ, δ) be some high-order polynomial of the problem parameters, λ = 1 + HK , η ≲ (λ +K ′)−1, J ≳ K ′ log(K ′(H √ d̃ + B)), ψ = 1, and M = log HSAδ / log 1 1−Φ(−1) , where Φ(·) is the cumulative distribution function of the standard normal distribution. Then, under Assumption 5.1, with probability at least 1−MHm−2 − 2δ, for any s1 ∈ S, we have that SubOpt(π̃; s1) ≤ σ(1 + √ 2 log(MSAH/δ)) · Eπ∗ [ H∑ h=1 ∥g(sh, ah;W0)∥Λ−1h ] + Õ( 1 K ′ ) where Λh := λImd + ∑ k∈Ih g(s k h, a k h;W0)g(s k h, a k h;W0) T ∈ Rmd×md. Remark 2. Theorem 1 shows that the randomized design in our proposed algorithm yields a provable uncertainty quantifier even though we do not explicitly maintain any confidence regions in the algorithm. The implicit pessimism via perturbed rewards introduces an extra factor of 1 + √ 2 log(MSAH/δ) into the confidence parameter β. We build upon Theorem 1 to obtain an explicit bound using the following data coverage assumption. Assumption 5.2 (Optimal-Policy Concentrability). ∃κ <∞, sup(h,sh,ah) d∗h(sh,ah) dµh(sh,ah) ≤ κ. 
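Although Algorithm 1 never forms Λ_h, the uncertainty term ∥g(x;W_0)∥_{Λ_h^{-1}} appearing in Theorem 1 can be computed directly for inspection on small problems. The sketch below does so with the gradient features of the two-layer network at initialization; the sizes, the simplified choice of fixed signs b, and the synthetic data are illustrative assumptions, and `grad_features` is a helper introduced only here.

```python
# Sketch: computing the analysis-only quantity ||g(x; W_0)||_{Lambda_h^{-1}}
# from Theorem 1 for a small two-layer ReLU network at initialization.
import torch

torch.manual_seed(0)
d, m, K_h, lam = 8, 16, 64, 1.0

W0 = torch.randn(m, d) / (d ** 0.5)               # w_i ~ N(0, I_d / d)
b = torch.cat([torch.ones(m // 2), -torch.ones(m // 2)])

def grad_features(x, W, b):
    """g(x; W) = df/dW for f(x; W) = (1/sqrt(m)) sum_i b_i relu(w_i^T x),
    flattened to an (m*d)-vector: rows are b_i * 1{w_i^T x > 0} * x / sqrt(m)."""
    act = (x @ W.T > 0).float()                   # (K, m) ReLU derivative
    g = act[:, :, None] * x[:, None, :]           # (K, m, d)
    g = g * b[None, :, None] / (W.shape[0] ** 0.5)
    return g.reshape(x.shape[0], -1)              # (K, m*d)

x_data = torch.randn(K_h, d)
x_data = x_data / x_data.norm(dim=-1, keepdim=True)
G = grad_features(x_data, W0, b)                  # gradient features of the data
Lambda = lam * torch.eye(m * d) + G.T @ G         # Lambda_h = lam*I + sum g g^T

x_query = torch.randn(1, d)
x_query = x_query / x_query.norm(dim=-1, keepdim=True)
g_q = grad_features(x_query, W0, b)               # (1, m*d)
bonus = torch.sqrt(g_q @ torch.linalg.solve(Lambda, g_q.T))
print(bonus.item())   # large for poorly covered (s, a), small for well-covered
```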
3Note that this is the worst-case bound, and the effective dimension can be significantly smaller in practice. 4We consider V : S → [0, H + 1] instead of V : S → [0, H] due to the cutoff margin ψ in Algorithm 1. Assumption 5.2 requires any positive-probability trajectory induced by the optimal policy to be covered by the behavior policy. This data coverage assumption is significantly milder than the uniform coverage assumptions in many FQI-based offline RL algorithms (Munos & Szepesvári, 2008; Chen & Jiang, 2019; Nguyen-Tang et al., 2022b) and is common in pessimism-based algorithms (Rashidinejad et al., 2021; Nguyen-Tang et al., 2022a; Chen & Jiang, 2022; Zhan et al., 2022). Theorem 2. For the same parameter settings and the same assumption as in Theorem 1, we have that with probability at least 1−MHm−2 − 5δ, SubOpt(π̃) ≤ 2σ̃κH√ K ′ √2d̃ log(1 +K ′/λ) + 1 + √ log Hδ λ + 16H 3K ′ log log2(K ′H) δ + Õ( 1 K ′ ), where σ̃ := σ(1 + √ 2 log(SAH/δ)). Remark 3. Theorem 2 shows that with appropriate parameter choice, VIPeR achieves a suboptimality of Õ ( κH3/2 √ d̃·max{B,H √ d̃}√ K ) . Compared to Yang et al. (2020), we improve by a factor of K 2 dγ−1 for some γ ∈ (0, 1) at the expense of √ H . When realized to a linear MDP in Rdlin , d̃ = dlin and our bound reduces into Õ ( κH5/2dlin√ K ) which improves the bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) by a factor of √ dlin. We provide the result summary and comparison in Table 1 and give a more detailed discussion in Subsection B.1. 6 EXPERIMENTS In this section, we empirically evaluate the proposed algorithm VIPeR against several state-of-the-art baselines, including (a) PEVI (Jin et al., 2021), which explicitly constructs lower confidence bound (LCB) for pessimism in a linear model (thus, we rename this algorithm as LinLCB for convenience in our experiments); (b) NeuraLCB (Nguyen-Tang et al., 2022a) which explicitly constructs an LCB using neural network gradients; (c) NeuraLCB (Diag), which is NeuraLCB with a diagonal approximation for estimating the confidence set as suggested in NeuraLCB (Nguyen-Tang et al., 2022a); (d) Lin-VIPeR which is VIPeR realized to the linear function approximation instead of neural network function approximation; (e) NeuralGreedy (LinGreedy, respectively) which uses neural networks (linear models, respectively) to fit the offline data and act greedily with respect to the estimated state-action value functions without any pessimism. Note that when the parametric class, F , in Algorithm 1 is that of neural networks, we refer to VIPeR as Neural-VIPeR. We do not utilize data splitting in the experiments. We provide further algorithmic details of the baselines in Section H. We evaluate all algorithms in two problem settings: (1) the underlying MDP is a linear MDP whose reward functions and transition kernels are linear in some known feature map (Jin et al., 2020), and (2) the underlying MDP is non-linear with horizon length H = 1 (i.e., non-linear contextual bandits) (Zhou et al., 2020), where the reward function is either synthetic or constructed from MNIST dataset (LeCun et al., 1998). We also evaluate (a variant of) our algorithm and show its strong performance advantage in the D4RL benchmark (Fu et al., 2020) in Section A.3. 
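For reference, the data split of Algorithm 1 and Theorem 2, which, as noted above, we do not use in the experiments, can be sketched as follows; K and H are illustrative values chosen only for this example.

```python
# Sketch of the data split: trajectory indices [K] are divided into H disjoint
# buckets, and the estimate at timestep h only uses bucket I_h together with
# the already-computed V_{h+1}.
K, H = 20, 4
K_prime = K // H                                  # K' = floor(K / H)

# I_h = [(H - h) K' + 1, ..., (H - h + 1) K'] (1-indexed as in the paper).
buckets = {h: list(range((H - h) * K_prime + 1, (H - h + 1) * K_prime + 1))
           for h in range(1, H + 1)}

for h in range(H, 0, -1):                         # backward pass, as in Algorithm 1
    print(f"h={h}: I_h={buckets[h]}")

# The buckets are disjoint and cover [K'H]; the regression at step h never
# shares trajectories with the data used to build V_{h+1}.
assert sorted(i for idx in buckets.values() for i in idx) == list(range(1, K_prime * H + 1))
```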
We implemented all algorithms in Pytorch (Paszke et al., 2019) on a server with Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz, 755G RAM, and one NVIDIA Tesla V100 Volta GPU Accelerator 32GB Graphics Card.5 6.1 LINEAR MDPS We first test the effectiveness of pessimism implicit in VIPeR (Algorithm 1). To that end, we construct a hard instance of linear MDPs (Yin et al., 2022; Min et al., 2021); due to page limitation, we defer the details of our construction to Section A.1. We test for different values of H ∈ {20, 30, 50, 80} and report the sub-optimality of LinLCB, Lin-VIPeR, and LinGreedy, averaged over 30 runs, in Figure 2. We find that LinGreedy, which is uncertainty-agnostic, fails to learn from offline data and has poor performance in terms of sub-optimality when compared to pessimism-based algorithms LinLCB and Lin-VIPeR. Further, LinLCB outperforms Lin-VIPeR when K is smaller than 400, but the performance of the two algorithms matches for larger sample sizes. Unlike LinLCB, Lin-VIPeR does not construct any confidence regions or require computing and inverting large (covariance) matrices. The Y-axis is in log scale; thus, Lin-VIPeR already has small sub-optimality in the first K ≈ 400 samples. These show the effectiveness of the randomized design for pessimism implicit in Algorithm 1. 6.2 NEURAL CONTEXTUAL BANDITS Next, we compare the performance and computational efficiency of various algorithms against VIPeR when neural networks are employed. For simplicity, we consider contextual bandits, a special case of MDPs with horizon H = 1. Following Zhou et al. (2020); Nguyen-Tang et al. (2022a), we use the bandit problems specified by the following reward functions: (a) r(s, a) = cos(3sT θa); (b) r(s, a) = exp(−10(sT θa)2), where s and θa are generated uniformly at random from the unit sphere Sd−1 with d = 16 and A = 10; (c) MNIST, where r(s, a) = 1 if a is the true label of the input image s and r(s, a) = 0, otherwise. To predict the value of different actions from the same state s using neural networks, we transform a state s ∈ Rd into dA-dimensional vectors s(1) = (s, 0, . . . , 0), s(2) = (0, s, 0, . . . , 0), . . . , s(A) = (0, . . . , 0, s) and train the network to map s(a) to r(s, a) given a pair of data (s, a). For Neural-VIPeR, NeuralGreedy, NeuraLCB, and NeuraLCB (Diag), we use the same neural network architecture with two hidden layers of width m = 64 and train the network with Adam optimizer (Kingma & Ba, 2015). Due to page limitations, we defer other experimental details and hyperparameter setting to Section A.2. We report the 5Our code is available here: https://github.com/thanhnguyentang/neural-offline-rl. sub-optimality averaged over 5 runs in Figure 3. We see that algorithms that use a linear model, i.e., LinLCB and Lin-VIPeR significantly underperform neural-based algorithms, i.e., NeuralGreedy, NeuraLCB, NeuraLCB (Diag) and Neural-VIPeR, attesting to the crucial role neural representations play in RL for non-linear problems. It is also interesting to observe from the experimental results that NeuraLCB does not always outperform its diagonal approximation, NeuraLCB (Diag) (e.g., in Figure 3(b)), putting a question mark on the empirical effectiveness of NTK-based uncertainty for offline RL. Finally, Neural-VIPeR outperforms all algorithms in the tested benchmarks, suggesting the effectiveness of our randomized design with neural function approximation. Figure 4 shows the average runtime for action selection of neural-based algorithms NeuraLCB, NeuraLCB (Diag), and Neural-VIPeR. 
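To make the runtime comparison concrete, the sketch below contrasts the two action-selection routines for a single state: the ensemble-minimum rule used by Neural-VIPeR and an LCB-style rule that must solve against a covariance matrix whose size grows with the network width. The networks, the stand-in gradient features, and β are illustrative assumptions and do not reproduce the exact baseline implementations.

```python
# Sketch: ensemble-minimum action selection (no covariance matrix) versus an
# LCB-style selection that solves a p x p linear system per decision.
import time
import torch
import torch.nn as nn

torch.manual_seed(0)
d, A, m, M, K, beta = 16, 10, 128, 10, 1000, 1.0
p = m * d   # stand-in for the number of network parameters (covariance size)

actions = torch.randn(A, d)
actions = actions / actions.norm(dim=-1, keepdim=True)
ensemble = [nn.Sequential(nn.Linear(d, m), nn.ReLU(), nn.Linear(m, 1))
            for _ in range(M)]

def select_action_viper(actions):
    with torch.no_grad():
        q = torch.stack([net(actions).squeeze(-1) for net in ensemble]).min(dim=0).values
    return int(q.argmax())          # independent of K: no covariance involved

# Stand-ins for an LCB baseline: a p x p covariance built from K samples and
# per-action gradient features.
X = torch.randn(K, p)
Lambda = torch.eye(p) + X.T @ X
feats = torch.randn(A, p)

def select_action_lcb(actions, feats):
    with torch.no_grad():
        q = ensemble[0](actions).squeeze(-1)
        bonus = torch.sqrt((feats @ torch.linalg.solve(Lambda, feats.T)).diagonal())
        return int((q - beta * bonus).argmax())

for name, fn in [("ensemble-min", lambda: select_action_viper(actions)),
                 ("LCB-style", lambda: select_action_lcb(actions, feats))]:
    t0 = time.perf_counter(); fn(); print(name, f"{time.perf_counter() - t0:.4f}s")
```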
We observe that algorithms that use explicit confidence regions, i.e., NeuraLCB and NeuraLCB (Diag), take significant time selecting an action when either the number of offline samples K or the network width m increases. This is perhaps not surprising because NeuraLCB and NeuraLCB (Diag) need to compute the inverse of a large covariance matrix to sample an action and maintain the confidence region for each action per state. The diagonal approximation significantly reduces the runtime of NeuraLCB, but the runtime still scales with the number of samples and the network width. In comparison, the runtime for action selection for Neural-VIPeR is constant. Since NeuraLCB, NeuraLCB (Diag), and Neural-VIPeR use the same neural network architecture, the runtime spent training one model is similar. The only difference is that Neural-VIPeR trains M models while NeuraLCB and NeuraLCB (Diag) train a single model. However, as the perturbed data in Algorithm 1 are independent, trainingM models in Neural-VIPeR is embarrassingly parallelizable. Finally, in Figure 5, we study the effect of the ensemble size on the performance of Neural-VIPeR. We use different values of M ∈ {1, 2, 5, 10, 20, 30, 50, 100, 200} for sample size K = 1000. We find that the sub-optimality of Neural-VIPeR decreases graciously as M increases. Indeed, the grid search from the previous experiment in Figure 3 also yields M = 10 and M = 20 from the search space M ∈ {1, 10, 20} as the best result. This suggests that the ensemble size can also play an important role as a hyperparameter that can determine the amount of pessimism needed in a practical setting. 7 CONCLUSION We propose a novel algorithmic approach for offline RL that involves randomly perturbing value functions and pessimism. Our algorithm eliminates the computational overhead of explicitly maintaining a valid confidence region and computing the inverse of a large covariance matrix for pessimism. We bound the suboptimality of the proposed algorithm as Õ ( κH5/2d̃/ √ K ) . We support our theoretical claims of computational efficiency and the effectiveness of our algorithm with extensive experiments. ACKNOWLEDGEMENTS This research was supported, in part, by DARPA GARD award HR00112020004, NSF CAREER award IIS-1943251, an award from the Institute of Assured Autonomy, and Spring 2022 workshop on “Learning and Games” at the Simons Institute for the Theory of Computing. A EXPERIMENT DETAILS A.1 LINEAR MDPS In this subsection, we provide further details to the experiment setup used in Subsection 6.1. We describe in detail a variant of the hard instance of linear MDPs (Yin et al., 2022) used in our experiment. The linear MDP has S = {0, 1},A = {0, 1, · · · , 99}, and the feature dimension d = 10. Each action a ∈ [99] = {1, . . . , 99} is represented by its binary encoding vector ua ∈ R8 with entry being either −1 or 1. The feature mapping ϕ(s, a) is given by ϕ(s, a) = [uTa , δ(s, a), 1− δ(s, a)]T ∈ R10, where δ(s, a) = 1 if (s, a) = (0, 0) and δ(s, a) = 0 otherwise. The true measure νh(s) is given by νh(s) = [0, · · · , 0, (1 − s) ⊕ αh, s ⊕ αh] where {αh}h∈[H] ∈ {0, 1}H are generated uniformly at random and ⊕ is the XOR operator. We define θh = [0, · · · , 0, r, 1 − r]T ∈ R10 where r = 0.99. Recall that the transition follows Ph(s′|s, a) = ⟨ϕ(s, a), νh(s′)⟩ and the mean reward rh(s, a) = ⟨ϕ(s, a), θh⟩. We generated a priori K ∈ {1, . . . 
, 1000} trajectories using the behavior policy µ, where for any h ∈ [H] we set µh(0|0) = p, µh(1|0) = 1 − p, µh(a|0) = 0,∀a > 1;µh(0|1) = p, µh(a|1) = (1− p)/99,∀a > 0, where we set p = 0.6. We run over K ∈ {1, . . . , 1000} and H ∈ {20, 30, 50, 80}. We set λ = 0.01 for all algorithms. For Lin-VIPeR, we grid searched σh = σ ∈ {0.0, 0.1, 0.5, 1.0, 2.0} and M ∈ {1, 2, 10, 20}. For LinLCB, we grid searched its uncertainty multiplier β ∈ {0.1, 0.5, 1, 2}. The sub-optimality metric is used to compare algorithms. For each H ∈ {20, 30, 50, 80}, each algorithm was executed for 30 times and the averaged results (with std) are reported in Figure 2. A.2 NEURAL CONTEXTUAL BANDITS In this subsection, we provide in detail the experimental and hyperparameter setup in our experiment in Subsection 6.2. For Neural-VIPeR, NeuralGreedy, NeuraLCB and NeuraLCB (Diag), we use the same neural network architecture with two hidden layers whose width m = 64, train the network with Adam optimizer (Kingma & Ba, 2015) with learning rate being grid-searched over {0.0001, 0.001, 0.01} and batch size of 64. For NeuraLCB, NeuraLCB (Diag), and LinLCB, we grid-searched β over {0.001, 0.01, 0.1, 1, 5, 10}. For Neural-VIPeR and Lin-VIPeR, we gridsearched σh = σ over {0.001, 0.01, 0.1, 1, 5, 10} andM over {1, 10, 20}. We did not run NeuraLCB in MNIST as the inverse of a full covariance matrix in this case is extremely expensive. We fixed the regularization parameter λ = 0.01 for all algorithms. Offline data is generated by the (1−ϵ)-optimal policy which generates non-optimal actions with probability ϵ and optimal actions with probability 1 − ϵ. We set ϵ = 0.5 in our experiments. To estimate the expected sub-optimality, we randomly obtain 1, 000 novel samples (i.e. not used in training) to compute the average sub-optimality and keep these same samples for all algorithms. A.3 EXPERIMENT IN D4RL BENCHMARK In this subsection, we evaluate the effectiveness of the reward perturbing design of VIPeR in the Gym domain in the D4RL benchmark (Fu et al., 2020). The Gym domain has three environments (HalfCheetah, Hopper, and Walker2d) with five datasets (random, medium, medium-replay, medium-expert, and expert), making up 15 different settings. Design. To adapt the design of VIPeR to continuous control, we use the actor-critic framework. Specifically, we have M critics {Qθi}i∈[M ] and one actor πϕ, where {θi}i∈[M ] and ϕ are the learnable parameters for the critics and actor, respectively. Note that in the continuous domain, we consider discounted MDP with discount factor γ, instead of finite-time episode MDP as we initially considered in our setting in the main paper. In the presence of the actor πϕ, there are two modifications to Algorithm 1. The first modification is that when training the critics {Qiθ}i∈[M ], we augment the training loss in Algorithm 2 with a new penalization term. Specifically, the critic loss for Qθi on a training sample τ := (s, a, r, s′) (sampled from the offline data D) is L(θi; τ) = (Qθi(s, a)− (r + γQθ̄i(s′) + ξ)) 2 + β Ea′∼πϕ(·|s) [ (Qθi(s, a ′)− Q̄(s, a′))2 ]︸ ︷︷ ︸ penalization term R(θi;s,ϕ) , (1) where θ̄i has the same value of the current θi but is kept fixed, Q̄ = 1M ∑M i=1Qθi and ξ ∼ N (0, σ2) is Gaussian noise, and β is a penalization parameter (note that β here is totally different from the β in Theorem 1). The penalization term R(θi; s, ϕ) discourages overestimation in the value function estimate Qθi for out-of-distribution (OOD) actions a′ ∼ πϕ(·|s). 
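A minimal sketch of the critic update in Eq. (1) for one ensemble member is given below. The network architecture, the synthetic batch, and the way a′ and the OOD actions are drawn are illustrative assumptions (in practice they come from the actor π_ϕ), and the averaged value Q̄ is treated as a fixed pseudo target, as in the description above.

```python
# Sketch of the perturbed critic loss of Eq. (1): squared error to a
# noise-perturbed TD target plus the OOD-action penalization R(theta_i; s, phi).
import torch
import torch.nn as nn

torch.manual_seed(0)
s_dim, a_dim, M, gamma, sigma, beta_pen = 17, 6, 10, 0.99, 0.01, 0.5

def mlp(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 1))

critics = [mlp(s_dim + a_dim) for _ in range(M)]
targets = [mlp(s_dim + a_dim) for _ in range(M)]   # frozen copies (theta_bar_i)
for q, q_bar in zip(critics, targets):
    q_bar.load_state_dict(q.state_dict())

def critic_loss(i, s, a, r, s_next, a_next, a_ood):
    """Batch-averaged version of Eq. (1):
    (Q_i(s,a) - (r + gamma * Q_bar_i(s',a') + xi))^2
      + beta * (Q_i(s, a_ood) - mean_j Q_j(s, a_ood))^2,
    with the pseudo target detached."""
    xi = sigma * torch.randn_like(r)
    with torch.no_grad():
        td_target = r + gamma * targets[i](torch.cat([s_next, a_next], -1)).squeeze(-1) + xi
        q_mean_ood = torch.stack([q(torch.cat([s, a_ood], -1)).squeeze(-1)
                                  for q in critics]).mean(0)
    q_sa = critics[i](torch.cat([s, a], -1)).squeeze(-1)
    q_ood = critics[i](torch.cat([s, a_ood], -1)).squeeze(-1)
    return ((q_sa - td_target) ** 2).mean() + beta_pen * ((q_ood - q_mean_ood) ** 2).mean()

# Illustrative batch; a_next and a_ood would come from the current actor pi_phi.
B = 32
s, a = torch.randn(B, s_dim), torch.randn(B, a_dim)
r, s_next = torch.rand(B), torch.randn(B, s_dim)
a_next, a_ood = torch.randn(B, a_dim), torch.randn(B, a_dim)
print(critic_loss(0, s, a, r, s_next, a_next, a_ood))
```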
Our design of R(θi; s, ϕ) is initially inspired by the OOD penalization in Bai et al. (2022) that creates a pessimistic pseudo target for the values at OOD actions. Note that we do not need any penalization for OOD actions in our experiment for contextual bandits in Section 6.2. This is because in the contextual bandit setting in Section 6.2 the action space is finite and not large, thus the offline data often sufficiently cover all good actions. In the continuous domain such as the Gym domain of D4RL, however, it is almost certain that there are actions that are not covered by the offline data since the action space is continuous. We also note that the inclusion of the OOD action penalization term R(θi; s, ϕ) in this experiment does not contradict our guarantee in Theorem 1 since in the theorem we consider finite action space while in this experiment we consider continuous action space. We argue that the inclusion of some regularization for OOD actions (e.g., R(θi; s, ϕ)) is necessary for the continuous domain. 6 The second modification to Algorithm 1 for the continuous domain is the actor training, which is the implementation of policy extraction in line 10 of Algorithm 1. Specifically, to train the actor πϕ given the ensemble of critics {Qiθ}i∈[M ], we use soft actor update in Haarnoja et al. (2018) via max ϕ { Es∼D,a′∼πϕ(·|s) [ min i∈[M ] Qθi(s, a ′)− log πϕ(a′|s) ]} , (2) which is trained using gradient ascent in practice. Note that in the discrete action domain, we do not need such actor training as we can efficiently extract the greedy policy with respect to the estimated action-value functions when the action space is finite. Also note that we do not use data splitting and value truncation as in the original design of Algorithm 1. Hyperparameters. For the hyper-parameters of our training, we set M = 10 and the noise variance σ = 0.01. For β, we decrease it from 0.5 to 0.2 by linear decay for the first 50K steps and exponential decay for the remaining steps. For the other hyperparameters of actor-critic training, we fix them the same as in Bai et al. (2022). Specifically, the Q-network is the fully connected neural network with three hidden layers all of which has 256 neurons. The learning rate for the actor and the critic are 10−4 and 3× 10−4, respectively. The optimizer is Adam. Results. We compare VIPeR with several state-of-the-art algorithms, including (i) BEAR (Kumar et al., 2019) that use MMD distance to constraint policy to the offline data, (ii) UWAC (Wu et al., 2021) that improves BEAR using dropout uncertainty, (iii) CQL (Kumar et al., 2020) that minimizes Q-values of OOD actions, (iv) MOPO (Yu et al., 2020) that uses model-based uncertainty via ensemble dynamics, (v) TD3-BC (Fujimoto & Gu, 2021) that uses adaptive behavior cloning, and (vi) PBRL (Bai et al., 2022) that use uncertainty quantification via disagreement of bootstrapped Q-functions. We follow the evaluation protocol in Bai et al. (2022). We run our algorithm for five seeds and report the average final evaluation scores with standard deviation. We report the scores of our method and the baselines in Table 2. We can see that our method has a strong advantage of good performance (highest scores) in 11 out of 15 settings, and has good stability (small std) in all settings. Overall, we also have the strongest average scores aggregated over all settings. B EXTENDED DISCUSSION Here we provide extended discussion of our result. 
B.1 COMPARISON WITH OTHER WORKS AND DISCUSSION We provide further discussion regarding comparison with other works in the literature. 6In our experiment, we also observe that without this penalization term, the method struggles to learn any good policy. However, using only the penalization term without the first term in Eq. (1), we observe that the method cannot learn either. Comparing to Jin et al. (2021). When the underlying MDP reduces into a linear MDP, if we use the linear model as the plug-in parametric model in Algorithm 1, our bound reduces into Õ ( κH5/2dlin√ K ) which improves the bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) by a factor of √ dlin and worsen by a factor of √ H due to the data splitting. Thus, our bound is more favorable in the linear MDPs with high-dimensional features. Moreover, our bound is guaranteed in more practical scenarios where the offline data can have been adaptively generated and is not required to uniformly cover the state-action space. The explicit bound Õ(d3/2linH2/ √ K) of PEVI (Jin et al., 2021, Corollary 4.6) is obtained under the assumption that the offline data have uniform coverage and are generated independently on the episode basis. Comparing to Yang et al. (2020). Though Yang et al. (2020) work in the online regime, it shares some part of the literature with our work in function approximation for RL. Besides different learning regimes (offline versus online), we offer three key distinctions which can potentially be used in the online regime as well: (i) perturbed rewards, (ii) optimization, and (iii) data split. Regarding (i), our perturbed reward design can be applied to online RL with function approximation to obtain a provably efficient online RL that is computationally efficient and thus remove the need of maintaining explicit confidence regions and performing the inverse of a large covariance matrix. Regarding (ii), we incorporate the optimization analysis into our algorithm which makes our algorithm and analysis more practical. We also note that unlike (Yang et al., 2020), we do not make any assumption on the eigenvalue decay rate of the empirical NTK kernel as the empirical NTK kernel is data-dependent. Regarding (iii), our data split technique completely removes the factor√ logN∞(H, 1/K,B) in the bound at the expense of increasing the bound by a factor of √ H . In complex models, such log covering number can be excessively larger than the horizon H , making the algorithm too optimistic in the online regime (optimistic in the offline regime, respectively). For example, the target function class is RKHS with a γ-polynomial decay, the log covering number scales as (Yang et al., 2020, Lemma D1),√ logN∞(H, 1/K,B) ≲ K 2 αγ−1 , for some α ∈ (0, 1). In the case of two-layer ReLU NTK, γ = d (Bietti & Mairal, 2019), thus√ logN∞(H, 1/K,B) ≲ K 2 αd−1 which is much larger than √ H when the size of dataset is large. Note that our data-splitting technique is general that can be used in the online regime as well. Comparing to Xu & Liang (2022). Xu & Liang (2022) consider a different setting where pertimestep rewards are not available and only the total reward of the whole trajectory is given. Used with neural function approximation, they obtain Õ(DeffH2/ √ K) where Deff is their effective dimension. Note that Xu & Liang (2022) do not use data splitting and still achieve the same order of Deff as our result with data splitting. 
It at first might appear that our bound is inferior to their bound as we pay the cost of √ H due to data splitting. However, to obtain that bound, they make three critical assumptions: (i) the offline data trajectories are independently and identically distributed (i.i.d.) (see their Assumption 3), (ii) the offline data is uniformly explorative over all dimensions of the feature space (also see their Assumption 3), and (iii) the eigenfunctions of the induced NTK RKHS has finite spectrum (see their Assumption 4). The i.i.d. assumption under the RKHS space with finite dimensions (due to the finite spectrum assumption) and the well-explored dataset is critical in their proof to use a matrix concentration that does not incur an extra factor of √ Deff as it would normally do without these assumptions (see Section E, the proof of their Lemma 2). Note that the celebrated ReLU NTK does not satisfy the finite spectrum assumption (Bietti & Mairal, 2019). Moreover, we do not make any of these three assumptions above for our bound to hold. That suggests that our bound is much more general. In addition, we do not need to compute any confidence regions nor perform the inverse of a large covariance matrix. Comparing to Yin et al. (2023). During the submission of our work, a concurrent work of Yin et al. (2023) appeared online. Yin et al. (2023) study provably efficient offline RL with a general parametric function approximation that unifies the guarantees of offline RL in linear and generalized linear MDPs, and beyond with potential applications to other classes of functions in practice. We remark that the result in Yin et al. (2023) is orthogonal/complementary to our paper since they consider the parametric class with third-time differentiability which cannot apply to neural networks (not necessarily overparameterized) with non-smooth activation such as ReLU. In addition, they do not consider reward perturbing in their algorithmic design or optimization errors in their analysis. B.2 WORSE-CASE RATE OF EFFECTIVE DIMENSION In the main paper, we prove an Õ ( κH5/2d̃√ K ) sub-optimality bound which depends on the notion of effective dimension defined in Definition 2. Here we give a worst-case rate of the effective dimension d̃ for the two-layer ReLU NTK. We first briefly review the background of RKHS. LetH be an RKHS defined on X ⊆ Rd with kernel function ρ : X ×X → R. Let ⟨·, ·⟩H : H×H → R and ∥ · ∥H : H → R be the inner product and the RKSH norm on H. By the reproducing kernel property of H, there exists a feature mapping ϕ : X → H such that f(x) = ⟨f, ϕ(x)⟩H and ρ(x, x′) = ⟨ϕ(x), ϕ(x′)⟩H. We assume that the kernel function ρ is uniformly bounded, i.e. supx∈X ρ(x, x) <∞. Let L2(X ) be the space of square-integral functions on X with respect to the Lebesgue measure and let ⟨·, ·⟩L2 be the inner product on L2(X ). The kernel function ρ induces an integral operator Tρ : L2(X )→ L2(X ) defined as Tρf(x) = ∫ X ρ(x, x′)f(x′)dx′. By Mercer’s theorem (Steinwart & Christmann, 2008), Tρ has countable and positive eigenvalues {λi}i≥1 and eigenfunctions {νi}i≥1. The kernel function andH can be expressed as ρ(x, x′) = ∞∑ i=1 λiνi(x)νi(x ′), H = {f ∈ L2(X ) : ∞∑ i=1 ⟨f, νi⟩L2 λi <∞}. Now consider the NTK defined in Definition 1: Kntk(x, x ′) = Ew∼N (0,Id/d)⟨xσ ′(wTx), x′σ′(wTx′)⟩. It follows from (Bietti & Mairal, 2019, Proposition 1) that λi ≍ i−d. Thus, by (Srinivas et al., 2010, Theorem 5), the data-dependent effective dimension ofHntk can be bounded in the worst case by d̃ ≲ K ′(d+1)/(2d). 
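The effective dimension of Definition 2 is straightforward to compute from the NTK Gram matrix. The sketch below uses the standard closed form of K_ntk for unit-norm inputs, K_ntk(x, x′) = ⟨x, x′⟩(π − arccos⟨x, x′⟩)/(2π), which follows from Definition 1; the data size and λ are illustrative assumptions.

```python
# Sketch: the data-dependent effective dimension of Definition 2, computed
# from the closed-form ReLU NTK on unit-norm inputs, compared against the
# worst-case rate K'^{(d+1)/(2d)}.
import numpy as np

rng = np.random.default_rng(0)
K_prime, d, lam = 200, 16, 1.0

X = rng.normal(size=(K_prime, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # x in S^{d-1}

inner = np.clip(X @ X.T, -1.0, 1.0)
K_gram = inner * (np.pi - np.arccos(inner)) / (2 * np.pi)

# d_tilde_h = logdet(I + K_h / lam) / log(1 + K' / lam)   (Definition 2)
_, logdet = np.linalg.slogdet(np.eye(K_prime) + K_gram / lam)
d_tilde = logdet / np.log(1 + K_prime / lam)
print(d_tilde, "worst-case rate:", K_prime ** ((d + 1) / (2 * d)))
```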
We remark that this is the worst-case bound that considers uniformly over all possible realizable of training data. The effective dimension d̃ is on the other hand data-dependent, i.e. its value depends on the specific training data at hand thus d̃ can be actually much smaller than the worst-case rate. C PROOF OF THEOREM 1 AND THEOREM 2 In this section, we provide both the outline and detailed proofs of Theorem 1 and Theorem 2. C.1 TECHNICAL REVIEW AND PROOF OVERVIEW Technical Review. In what follows, we provide more detailed discussion when placing our technical contribution in the context of the related literature. Our technical result starts with the value difference lemma in Jin et al. (2021) to connect bounding the suboptimality of an offline algorithm to controlling the uncertainty quantification in the value estimates. Thus, our key technical contribution is to provably quantify the uncertainty of the perturbed value function estimates which were obtained via reward perturbing and gradient descent. This problem setting is largely different from the current analysis of overparameterized neural networks for supervised learning which does not require uncertainty quantification. Our work is not the first to consider uncertainty quantification with overparameterized neural networks, since it has been studied in Zhou et al. (2020); Nguyen-Tang et al. (2022a); Jia et al. (2022). However, there are significant technical differences between our work and these works. The work in Zhou et al. (2020); Nguyen-Tang et al. (2022a) considers contextual bandits with overparameterized neural networks trained by (S)GD and quantifies the uncertainty of the value function with explicit empirical covariance matrices. We consider general MDP and use reward perturbing to implicitly obtain uncertainty, thus requiring different proof techniques. Jia et al. (2022) is more related to our work since they consider reward perturbing with overparameterized neural networks (but they consider contextual bandits). However, our reward perturbing strategy is largely different from that in Jia et al. (2022). Specifically, Jia et al. (2022) perturbs each reward only once while we perturb each reward multiple times, where the number of perturbing times is crucial in our work and needs to be controlled carefully. We show in Theorem 1 that our reward perturbing strategy is effective in enforcing sufficient pessimism for offline learning in general MDP and the empirical results in Figure 2, Figure 3, Figure 5, and Table 2 are strongly consistent with our theoretical suggestion. Thus, our technical proofs are largely different from those of Jia et al. (2022). Finally, the idea of perturbing rewards multiple times in our algorithm is inspired by Ishfaq et al. (2021). However, Ishfaq et al. (2021) consider reward perturbing for obtaining optimism in online RL. While perturbing rewards are intuitive to obtain optimism for online RL, for offline RL, under distributional shift, it can be paradoxically difficult to properly obtain pessimism with randomization and ensemble (Ghasemipour et al., 2022), especially with neural function approximation. We show affirmatively in our work that simply taking the minimum of the randomized value functions after perturbing rewards multiple times is sufficient to obtain provable pessimism for offline RL. In addition, Ishfaq et al. (2021) do not consider neural network function approximation and optimization. 
Controlling the uncertainty of randomization (via reward perturbing) under neural networks with extra optimization errors induced by gradient descent sets our technical proof significantly apart from that of Ishfaq et al. (2021). Besides all these differences, in this work, we propose an intricately-designed data splitting technique that avoids the uniform convergence argument and could be of independent interest for studying sample-efficient RL with complex function approximation. Proof Overview. The key steps for proving Theorem 1 and Theorem 2 are highlighted in Subsection C.2 and Subsection C.3, respectively. Here, we discuss an overview of our proof strategy. The key technical challenge in our proof is to quantify the uncertainty of the perturbed value function estimates. To deal with this, we carefully control both the near-linearity of neural networks in the NTK regime and the estimation error induced by reward perturbing. A key result that we use to control the linear approximation to the value function estimates is Lemma D.3. The technical challenge in establishing Lemma D.3 is how to carefully control and propagate the optimization error incurred by gradient descent. The complete proof of Lemma D.3 is provided in Section E.3. The implicit uncertainty quantifier induced by the reward perturbing is established in Lemma D.1 and Lemma D.2, where we carefully design a series of intricate auxiliary loss functions and establish the anti-concentrability of the perturbed value function estimates. This requires a careful design of the variance of the noises injected into the rewards. To deal with removing a potentially large covering number when we quantify the implicit uncertainty, we propose our data splitting technique which is validated in the proof of Lemma D.1 in Section E.1. Moreover, establishing Lemma D.1 in the overparameterization regime induces an additional challenge since a standard analysis would result in a vacuous bound that scales with the overparameterization. We avoid this issue by carefully incorporating the use of the effective dimension in Lemma D.1. C.2 PROOF OF THEOREM 1 In this subsection, we present the proof of Theorem 1. We first decompose the suboptimality SubOpt(π̃; s) and present the main lemmas to bound the evaluation error and the summation of the implicit confidence terms, respectively. The detailed proof of these lemmas are deferred to Section D. For proof convenience, we first provide the key parameters that we use consistently throughout our proofs in Table 3. We define the model evaluation error at any (x, h) ∈ X × [H] as errh(x) = (BhṼh+1 − Q̃h)(x), (3) where Bh is the Bellman operator defined in Section 3, and Ṽh and Q̃h are the estimated (action-) state value functions returned by Algorithm 1. Using the standard suboptimality decomposition (Jin et al., 2021, Lemma 3.1), for any s1 ∈ S, SubOpt(π̃; s1) = − H∑ h=1 Eπ̃ [errh(sh, ah)] + H∑ h=1 Eπ∗ [errh(sh, ah)] + H∑ h=1 Eπ∗ [ ⟨Q̃h(sh, ·), π∗h(·|sh)− π̃h(·|sh)⟩A ] ︸ ︷︷ ︸ ≤0 , where the third term is non-positive as π̃h is greedy with respect to Q̃h. Thus, for any s1 ∈ S, we have SubOpt(π̃; s1) ≤ − H∑ h=1 Eπ̃ [errh(sh, ah)] + H∑ h=1 Eπ∗ [errh(sh, ah)] . (4) In the following main lemma, we bound the evaluation error errh(s, a). In the rest of the proof, we consider an additional parameter R and fix any δ ∈ (0, 1). Lemma C.1. 
Let m = Ω ( d3/2R−1 log3/2( √ m/R) ) R = O ( m1/2 log−3m ) , m = Ω ( K ′10(H + ψ)2 log(3K ′H/δ) ) λ > 1 K ′C2g ≥ λR ≥ max{4B̃1, 4B̃2, 2 √ 2λ−1K ′(H + ψ + γh,1)2 + 4γ2h,2}, η ≤ (λ+K ′C2g )−1, ψ > ι, σh ≥ β,∀h ∈ [H], (5) where B̃1, B̃2, γh,1, γh,2, and ι are defined in Table 3,Cg is a absolute constant given in Lemma G.1, and R is an additional parameter. Let M = log HSAδ / log 1 1−Φ(−1) where Φ(·) is the cumulative distribution function of the standard normal distribution. With probability at least 1−MHm−2−2δ, for any (x, h) ∈ X × [H], we have −ι ≤ errh(x) ≤ σh(1 + √ 2 log(MSAH/δ)) · ∥g(x;W0)∥Λ−1h + ι where Λh := λImd + ∑ k∈Ih g(x k h;W0)g(x k h;W0) T ∈ Rmd×md. Now we can prove Theorem 1. Proof of Theorem 1. Theorem 1 can directly follow from substituting Lemma C.1 into Equation (4). We now only need to simplify the conditions in Equation (5). To satisfy Equation (5), it suffices to set λ = 1 + HK ψ = 1 > ι σh = β 8CgR 4/3m−1/6 √ logm ≤ 1 λ−1K ′H2 ≥ 2 B̃1 ≤ √ 2K ′(H + ψ + γh,1)2 + λγ2h,2 + 1 √ K ′CgR 1/3m−1/6 √ logm ≤ 1 B̃2 ≤ K ′CgR4/3m−1/6 √ logm ≤ 1. Combining with Equation 5, we have λ = 1 + HK ψ = 1 > ι σh = β η ≲ (λ+K ′)−1 m ≳ max { R8 log3m,K ′10(H + 1)2 log(3K ′H/δ), d3/2R−1 log3/2( √ m/R),K ′6R8 log3m } m ≳ [2K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) + 1]3K ′3R log3m 4 √ K ′(H + 1 + β √ log(K ′M/δ)) + 4β √ d log(dK ′M/δ) ≤ R ≲ K ′. (6) Note that with the above choice of λ = 1 + HK , we have K ′ log λ = log(1 + 1 K ′ )K ′ ≤ log 3 < 2. We further set that m ≳ B2K ′2d log(3H/δ), we have β = BK ′√ m (2 √ d+ √ 2 log(3H/ δ))λ−1/2Cg + λ 1/2B + (H + ψ) [√ d̃h log(1 + K ′ λ ) +K ′ log λ+ 2 log(3H/δ) ] ≤ 1 + λ1/2B + (H + 1) [√ d̃h log(1 + K ′ λ ) + 2 + 2 log(3H/δ) ] = o( √ K ′). Thus, 4 √ K ′(H + 1 + β √ log(K ′M/δ)) + 4β √ d log(dK ′M/δ) << K ′ for K ′ large enough. Therefore, there exists R that satisfies Equation (6). We now only need to verify ι < 1. We have ι0 = Bm −1/2(2 √ d+ √ 2 log(3H/δ)) ≤ 1/3, ι1 = CgR 4/3m−1/6 √ logm+ Cg ( B̃1 + B̃2 + λ −1(1− ηλ)J ( K ′(H + 1 + γh,1) 2 + λγ2h,2 )) ≲ 1/3 if (1− ηλ)J [ K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) ] ≲ 1. (7) Note that (1− ηλ)J ≤ e−ηλJ , K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) ≲ K ′H2λβ2d log(dK ′M/δ). Thus, Equation (7) is satisfied if J ≳ ηλ log ( K ′H2λβ2d log(dK ′M/δ) ) . Finally note that ι2 ≤ ι1. Rearranging the derived conditions here gives the complete parameter conditions in Theorem 1. Specifically, the polynomial form of m is m ≳ max{R8 log3m,K ′10(H + 1)2 log(3K ′H/δ), d3/2R−1 log3/2( √ m/R),K ′6R8 log3m, B2K ′2d log(3H/δ)}, m ≳ [2K ′(H + 1 + β √ log(K ′M/δ))2 + λβ2d log(dK ′M/δ) + 1]3K ′3R log3m. C.3 PROOF OF THEOREM 2 In this subsection, we give a detailed proof of Theorem 2. We first present intermediate lemmas whose proofs are deferred to Section D. For any h ∈ [H] and k ∈ Ih = [(H − h)K ′ +1, . . . , (H − h+ 1)K ′], we define the filtration Fkh = σ ( {(sth′ , ath′ , rth′)} t≤k h′∈[H] ∪ {(s k+1 h′ , a k+1 h′ , r k+1 h′ )}h′≤h−1 ∪ {(s k+1 h , a k+1 h )} ) . Let Λkh := λI + ∑ t∈Ik,t≤k g(xth;W0)g(x t h;W0) T , β̃ := β(1 + 2 √ log(SAH/δ)). In the following lemma, we connect the expected sub-optimality of π̃ to the summation of the uncertainty quantifier at empirical data. Lemma C.2. Suppose that the conditions in Theorem 1 all hold. With probability at least 1 − MHm−2 − 3δ, SubOpt(π̃) ≤ 2β̃ K ′ H∑ h=1 ∑ k∈Ih Eπ∗ [ ∥g(xh;W0)∥(Λkh)−1 ∣∣∣∣Fk−1h , sk1]+ 163K ′H log(log2(K ′H)/δ) + 2 K ′ + 2ι, Lemma C.3. 
Under Assumption 5.2, for any h ∈ [H] and fixed W0, with probability at least 1− δ,∑ k∈Ih Eπ∗ [ ∥g(xh;W0)∥(Λkh)−1 ∣∣∣∣Fk−1, sk1] ≤ ∑ k∈Ih κ∥g(xh;W0)∥(Λkh)−1 + κ √ K ′ log(1/δ) λ . Lemma C.4. If λ ≥ C2g and m = Ω(K ′4 log(K ′H/δ)), then with probability at least 1− δ, for any h ∈ [H], we have ∑ k∈Ih ∥g(xh;W0)∥2(Λkh)−1 ≤ 2d̃h log(1 +K ′/λ) + 1. where d̃h is the effective dimension defined in Definition 2. Proof of Theorem 2. Theorem 2 directly follows from Lemma C.2-C.3-C.4 using the union bound. D PROOF OF LEMMA C.1 In this section, we provide the proof for Lemma C.1. We set up preparation for all the results in the rest of the paper and provide intermediate lemmas that we use to prove Lemma C.1. The detailed proofs of these intermediate lemmas are deferred to Section E. D.1 PREPARATION To prepare for the lemmas and proofs in the rest of the paper, we define the following quantities. Recall that we use abbreviation x = (s, a) ∈ X ⊂ Sd−1 and xkh = (skh, akh) ∈ X ⊂ Sd−1. For any h ∈ [H] and i ∈ [M ], we define the perturbed loss function L̃ih(W ) := 1 2 ∑ k∈Ih ( f(xkh;W )− ỹ i,k h ) )2 + λ 2 ∥W + ζih −W0∥22, (8) where ỹi,kh := r k h + Ṽh+1(s k h+1) + ξ i,k h , Ṽh+1 is computed by Algorithm 1 at Line 10 for timestep h+1, and {ξi,kh } and ζih are the Gaussian noises obtained at Line 5 of Algorithm 1. Here the subscript h and the superscript i in L̃ih(W ) emphasize the dependence on the ensemble sample i and timestep h. The gradient descent update rule of L̃ih(W ) is W̃ i,(j+1) h = W̃ i,(j) h − η∇L̃ i h(W ), (9) where W̃ i,(0)h =W0 is the initialization parameters. Note that W̃ ih = GradientDescent(λ, η, J, D̃ih, ζih,W0) = W̃ i,(J) h , where W̃ ih is returned by Line 7 of Algorithm 1. We consider a non-perturbed auxiliary loss function Lh(W ) := 1 2 ∑ k∈Ih ( f(xkh;W )− ykh) )2 + λ 2 ∥W −W0∥22, (10) where ykh := r k h + Ṽh+1(s k h+1). Note that Lh(W ) is simply a non-perturbed version of L̃ih(W ) where we drop all the noises {ξ i,k h } and {ζih}. We consider the gradient update rule for Lh(W ) as follows Ŵ (j+1) h = Ŵ (j) h − η∇Lh(W ), (11) where Ŵ (0)h =W0 is the initialization parameters. To correspond with W̃ i h, we denote Ŵh := Ŵ (J) h . (12) We also define the auxiliary loss functions for both non-perturbed and perturbed data in the linear model with feature g(·;W0) as follows L̃i,linh (W ) := 1 2 ∑ k∈Ih ( ⟨g(xkh;W0),W ⟩ − ỹ i,k h )2 + λ 2 ∥W + ζih −W0∥22, (13) Llinh (W ) := 1 2 ∑ k∈Ih ( ⟨g(xkh;W0),W ⟩ − ykh )2 + λ 2 ∥W −W0∥22. (14) We consider the auxiliary gradient updates for L̃i,linh (W ) as W̃ i,lin,(j+1) h = W̃ i,lin,(j) h − η∇L̃ i,lin h (W ), (15) Ŵ lin,(j+1) h = Ŵ lin,(j) h − η∇L̃ lin h (W ), (16) where W̃ i,lin,(0)h = Ŵ i,lin,(0) h = W0 for all i, h. Finally, we define the least-square solutions to the auxili
1. What is the main contribution of the paper regarding pessimistic offline RL? 2. What are the strengths and weaknesses of the proposed approach compared to traditional methods? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What questions does the reviewer have regarding the paper, particularly on the definition of Q* and Hntk, the relationship between them, and the conditions in Theorem 1?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a novel approach to pessimism-based offline RL. Instead of the standard practice of explicitly constructing a lower confidence bound for value functions, the new approach uses perturbed rewards to implicitly quantify the training uncertainty and construct lower confidence bounds for the ground truth. Such an approach is argued to improve the practicality of pessimistic offline RL, especially in situations where the value functions are approximated by large neural networks, for which traditional theoretical analysis is difficult. The theoretical properties of this approach are studied for overparametrized neural networks trained with gradient descent. Empirical experiments are conducted to show the favorable performance of the proposed approach.
Strengths And Weaknesses
Strengths:
A novel algorithmic route to pessimism. This work contributes to the offline RL literature by proposing a solid and practical approach to constructing confidence lower bounds. The idea is clever and refreshing; it is both practically useful and theoretically interesting.
Solid theoretical results. The paper provides a solid theoretical analysis. It is quite involved, but I think it will be useful for future work in the literature.
Weaknesses:
Clarity of results. A drawback (which I think can be resolved to some extent) is that, although the overall idea is clean, the presentation is overwhelming. For example, it would help readers if important steps of the algorithms were explained, highlighted, or commented, or given more wording explaining their roles. Also, Theorem 1 poses a challenge to the flow of the paper because its conditions are too complicated. I suggest replacing it with an informal theorem and moving the full conditions to the appendix.
Relation to the literature. Although the authors generally do a good job of relating to the literature, more discussion of the novelty of the idea would further improve the work and position the paper appropriately. For instance: 1) How does the construction of the uncertainty quantification differ from that in the online setting (e.g., Jia et al. 2022)? Are there special tricks for dealing with distribution shift? 2) How does the analysis of two-layer neural networks rely on existing analyses, and which parts are specific to the new offline setting?
Questions:
Since Q* is used for fitting the networks, I am curious about the definition of Q*. Is it equivalent to the class of two-layer neural networks with ReLU activation, or is it a superset of it? What is the relationship between H_ntk and the class of two-layer NNs with ReLU activation? Please clarify this for the readers.
The conditions in Theorem 1 are too lengthy. Is it possible to reduce them to a cleaner version (potentially with some harmless relaxation)?
It would help to have a sketch or overview of the key theoretical techniques behind Theorem 1, for example, why the randomized rewards lead to a valid uncertainty quantifier.
Minor issues:
The last sentence in Section 4 is difficult to read. What does "with a provable implicit uncertainty quantifier and Õ(1/K)" mean?
Remark 1 is a bit difficult to read. What does "data-dependent quantity that measures the number principle dimensions over which the.." mean?
What does σ′ mean in Definition 1? Please provide a formal definition.
Clarity, Quality, Novelty And Reproducibility
This paper is of high quality in general.
The theoretical analysis and empirical results are solid. The paper is clearly written and properly positioned in the literature. Although randomized rewards and pessimism have each appeared in the literature before, the idea of combining them, together with the theoretical and practical considerations involved, is fresh.
ICLR
Title Localized Randomized Smoothing for Collective Robustness Certification Abstract Models for image segmentation, node classification and many other tasks map a single input to multiple labels. By perturbing this single shared input (e.g. the image) an adversary can manipulate several predictions (e.g. misclassify several pixels). A recent collective robustness certificate provides strong guarantees on the number of predictions that are simultaneously robust. This method is however limited to strictly local models, where each prediction is associated with a small receptive field. We propose a more general collective certificate for the larger class of softly local models, where each output is dependent on the entire input but assigns different levels of importance to different input regions (e.g. based on their proximity in the image). The certificate is based on our novel localized randomized smoothing approach, where the random perturbation strength for different input regions is proportional to their importance for the outputs. The resulting locally smoothed model yields strong collective guarantees while maintaining high prediction quality on both image segmentation and node classification tasks. 1 INTRODUCTION There is a wide range of tasks that require models making multiple predictions based on a single input. For example, semantic segmentation requires assigning a label to each pixel in an image. When deploying such multi-output classifiers in practice, their robustness should be a key concern. After all – just like simple classifiers (Szegedy et al., 2014) – they can fall victim to adversarial attacks (Xie et al., 2017; Zügner & Günnemann, 2019; Belinkov & Bisk, 2018). Even without an adversary, random noise or measuring errors could cause one or multiple predictions to unexpectedly change. In the following, we derive a method that provides provable guarantees on how many predictions can be changed by an adversary. Since all outputs operate on the same input, they also have to be attacked simultaneously by choosing a single perturbed input. While attacks on a single prediction may be easy, attacks on different predictions may be mutually exclusive. We have to explicitly account for this fact to obtain a proper collective robustness certificate that provides tight bounds. There already exists a dedicated collective robustness certificate for multi-output classifiers (Schuchardt et al., 2021), but it is only benefical for models we call strictly local, where each output depends only on a small, well-defined subset of the input. One example are graph neural networks that classify each node in a graph based only on its neighborhood. Multi-output classifiers used in practice, however, are often only softly local. While – unlike strictly local models – all of their predictions are in principle dependent on the entire input, each output may assign different importance to different components. For example, deep convolutional networks used for image segmentation can have very small effective receptive fields (Luo et al., 2016; Liu et al., 2018b), i.e. primarily use a small region of the input in labeling each pixel. Many models used in node classification are based on the homophily assumption that connected nodes are mostly of the same class. Thus, they primarily use features from neighboring nodes to classify each node. Even if an architecture is not inherently softly local, a model may learn a softly local mapping through training. 
For example, a transformer (Vaswani et al., 2017) can in principle attend to any part of an input sequence. However, in practice the learned attention maps may be ”sparse”, with the prediction for each token being determined primarily by a few (not necessarily nearby) tokens (Shi et al., 2021). While an adversarial attack on a single prediction of a softly local model is conceptually no different from that on a single-output classifier, attacking multiple predictions simultaneously can be much more challenging. By definition, adversarial attacks have to be unnoticeable, meaning the adversary only has a limited budget for perturbing the input. When each output is focused on a different part of the input, the adversary has to decide on where to allocate their adversarial budget and may be unable to attack all outputs at once. Our collective robustness certificate explicitly accounts for this budget allocation problem faced by the adversary and can thus provide stronger robustness guarantees. Our certificate is based on randomized smoothing (Liu et al., 2018a; Lécuyer et al., 2019; Cohen et al., 2019). Randomized smoothing is a versatile black-box certification method that has originally been proposed for single-output classifiers. Instead of directly analysing a model, it constructs a smoothed classifier that returns the most likely prediction of the model under random perturbations of its input. One can then use statistical methods to certify the robustness of this smoothed classifier. We discuss more details in Section 2. Randomized smoothing is typically used with i.i.d. noise: Each part of the input (e.g. each pixel) independently undergoes random perturbations sampled from the same noise distribution. One can however also use non-i.i.d. noise (Eiras et al., 2021). This results in a smoothed classifier that is certifiably more robust to parts of the input that are smoothed with higher noise levels (e.g. larger standard deviation). We apply randomized smoothing to softly-local multi-output classifiers in a scheme we call localized randomized smoothing: Instead of using the same smoothing distribution for all outputs, we randomly smooth each output (or set of outputs) using a different non-i.i.d. distribution that matches its inherent soft locality. Using a low noise level for the most relevant parts of the input allows us to retain a high prediction quality (e.g. accuracy). Less relevant parts of the input can be smoothed with a higher noise level. The resulting certificates (one per output) explicitly quantify how robust each prediction is to perturbations of which section of the input – they are certificates of soft locality. After certifying each prediction independently using localized randomized smoothing, we construct a (mixed-integer) linear program that combines these per-prediction base certificates into a collective certificate that provably bounds the number of simultaneously attackable predictions. This linear program explicitly accounts for soft locality and the budget allocation problem it causes for the adversary. This allows us to prove much stronger guarantees of collective robustness than simply certifying each prediction independently. Our core contributions are: • Localized randomized smoothing, a novel smoothing scheme for multi-output classifiers. • A variance smoothing method for efficiently certifying smoothed models on discrete data. • A collective certificate that leverages our identified common interface for base certificates. 
2 BACKGROUND AND RELATED WORK Randomized smoothing. Randomized smoothing is a flexible certification technique that can be used for various data types, perturbation models and tasks. For simplicity, we focus on a classification certificate for l2 perturbations (Cohen et al., 2019). Assume we have a continuous D-dimensional input space RD, a label set Y and a classifier g : RD → Y. We can use isotropic Gaussian noise with standard deviation σ ∈ R+ to construct the smoothed classifier f = argmaxy∈Y Prz∼N (x,σ) [g(z) = y] that returns the most likely prediction of base classifier g under the input distribution 1. Given an input x ∈ RD and the smoothed prediction y = f(x), we want to determine whether the prediction is robust to all l2 perturbations of magnitude , i.e. whether ∀x′ : ||x′−x||2 ≤ : f(x′) = y. Let q = Prz∼N (x,σ) [g(x) = y] be the probability of g predicting label y. The prediction of our smoothed classifier is robust if < σΦ−1(q) (Cohen et al., 2019). This result showcases a trade-off we alluded to in the previous section: The certificate can become stronger if the noise-level (here σ) is increased. But doing so could also lower the accuracy of the smoothed classifier or reduce q and thus weaken the certificate. White-box certificates for multi-output classifiers. There are multiple recent methods for certifying the robustness of specific multi-output models (see, for example, (Tran et al., 2021; Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2020; Ko et al., 2019; Ryou et al., 2021; Shi et al., 2020; Bonaert et al., 2021)) by analyzing their specific architecture and weights. They are however not designed to certify collective robustness. They can only determine independently for each prediction whether or not it can be adversarially attacked. Collective robustness certificates. Most directly related to our work is the certificate of Schuchardt et al. (2021). Like ours, it combines many per-prediction certificates into a collective certificate. But, unlike our novel localized smoothing approach, their certification procedure is only beneficial for strictly local models, i.e. models whose outputs operate on small subsets of the input. Furthermore, their certificate assumes binary data, while our certificate defines a common interface for various data types and perturbation models. A more detailed comparison can be found in Section D. Recently, Fischer et al. (2021) proposed a certificate for semantic segmentation. They consider a different notion of collective robustness: They are interested in determining whether all predictions are robust. In Section C.4 we discuss their method in detail and show that, when used for certifying our notion of collective robustness (i.e. the number of robust predictions), their method is no better than certifying each output independently using the certificate of Cohen et al. (2019). Furthermore, our certificate can be used to provide equally strong guarantees for their notion of collective robustness by checking whether the number of certified predictions equals the overall number of predictions. Another method that can be used for certifying collective robustness is center smoothing (Kumar & Goldstein, 2021). Center smoothing bounds how much a vector-valued prediction changes w.r.t to a distance function under adversarial perturbations. With the l0 pseudo-norm as the distance function, center smoothing bounds how many predictions of a classifier can be simultaneously changed. Randomized smoothing with non-i.i.d. 
noise. While not designed for certifying collective robustness, two recent certificates for non-i.i.d. Gaussian (Fischer et al., 2020) and uniform smoothing (Eiras et al., 2021) can be used as a component of our collective certification approach: They can serve as per-prediction base certificates, which can then be combined into our stronger collective certificate (more details in Section 4) . Note that we do not use the procedure for optimizing the smoothing distribution proposed by Eiras et al. (2021), as this would enable adversarial attacks on the smoothing distribution itself and invalidate the certificate (see discussion by Wang et al. (2021)). 3 COLLECTIVE THREAT MODEL Before certifying robustness, we have to define a threat model, which specifies the type of model that is attacked, the objective of the adversary and which perturbations they are allowed to use. We assume that we have a multi-output classifier f : XDin → YDout , that maps from a Din-dimensional vector space to Dout labels from label set Y. We further assume that this classifier f is the result of randomly smoothing a base classifier g, as discussed in Section 2. To simplify our notation, we write fn to refer to the function x 7→ f(x)n that outputs the n-th label. Given this multi-output classifier f , an input x ∈ XDin and the resulting vector of predictions y = f(x), the objective of the adversary is to cause as many predictions from a set of targeted indices T ⊆ {1, . . . , Dout} to change. That is, their objective is minx′∈Bx ∑ n∈T I [fn(x ′) = yn], where Bx ⊆ XDin is the perturbation model. Importantly, note that the minimization operator is outside the sum, meaning the predictions have to 1In practice, all probabilities have to be estimated using Monte Carlo sampling (see discussion in Section C). be attacked using a single input. As is common in robustness certification, we assume a norm-bound perturbation model. That is, given an input x ∈ XDin , the adversary is only allowed to use perturbed inputs from the set Bx = { x′ ∈ XDin | ||x′ − x||p ≤ } with p, ≥ 0. 4 A RECIPE FOR COLLECTIVE CERTIFICATES Before discussing technical details, we provide a high-level overview of our method. In localized randomized smoothing, we assign each output gn of a base classifier g its own smoothing distribution Ψ(n) that matches our assumptions or knowledge about the base classifier’s soft locality, i.e. for each n ∈ {1, . . . , Dout} choose a Ψ(n) that induces more noise in input components that are less relevant for gn. For example, in Fig. 1, we assume that far-away regions of the image are less relevant and thus perturb pixels in the bottom left with more noise when classifying pixels in the top-right corner. The chosen smoothing distributions can then be used to construct the smoothed classifier f . Given an input x ∈ XDin and the corresponding smoothed prediction y = f(x), randomized smoothing makes it possible to compute per-prediction base certificates. That is, for each yn, one can compute a set H(n) ⊆ XDin of perturbed inputs that the prediction is robust to, i.e. ∀x′ ∈ Hn : fn(x ′) = yn. Our motivation for using non-i.i.d. distributions is that the H(n) will guarantee more robustness for input dimensions smoothed with more noise, i.e. quantify model locality. The objective of our adversary is minx′∈Bx ∑ n∈T I [fn(x ′) = yn] with collective perturbation model Bx ⊆ XDin . That is, they want to change as many predictions from the targeted set T as possible. 
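For reference, the per-prediction i.i.d. Gaussian certificate recapped in Section 2 (the quantity the naive baseline evaluates for each output against the threat model above) can be computed as follows. This is a minimal sketch; the use of a one-sided Clopper-Pearson lower confidence bound on q is an assumption on my part, following common Monte Carlo randomized smoothing practice, since the sampling details are only discussed in the paper's appendix.

```python
import numpy as np
from scipy.stats import norm, beta

def certified_l2_radius(label_counts, predicted_label, sigma, alpha=0.001):
    """Certified l2 radius sigma * Phi^{-1}(q) for one smoothed prediction,
    with q replaced by a Clopper-Pearson lower confidence bound so the
    certificate holds with probability at least 1 - alpha over the sampling."""
    n = int(label_counts.sum())
    k = int(label_counts[predicted_label])
    if k == 0:
        return 0.0
    q_lower = beta.ppf(alpha, k, n - k + 1)   # lower bound on Pr[g(z) = y]
    if q_lower <= 0.5:
        return 0.0                            # abstain: no certified radius
    return sigma * norm.ppf(q_lower)

counts = np.array([912, 55, 33])              # hypothetical Monte Carlo label counts
print(certified_l2_radius(counts, predicted_label=0, sigma=0.25))
```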
A trivial lower bound can be obtained by counting how many predictions are – according to the base certificates – provably robust to the collective threat model. This can be expressed as∑ n∈T minx′∈Bx I [ x′ ∈ H(n) ] . In the following, we refer to this as the naı̈ve collective certificate. Thanks to our proposed localized smoothing scheme, we can use the following, tighter bound: min x′∈Bx ∑ n∈T I [fn(x ′) = yn] ≥ min x′∈Bx ∑ n∈T I [ x′ ∈ H(n) ] , (1) which preserves the fact that the adversary has to choose a single perturbed input. Because we use different non-i.i.d. smoothing distributions for different outputs, we provably know that each fn has varying levels of robustness for different parts of the input and that these robustness levels differ among outputs. Thus, in the r.h.s. problem the adversary has to allocate their limited budget across various input dimensions and may be unable to attack all predictions at once, just like when attacking the classifier in the l.h.s. objective (recall Section 1). This makes our collective certificate stronger than the naı̈ve collective certificate, which allows each prediction to be attacked independently. As stated in Section 1, the idea of combining base certificates into stronger collective certificates has already been explored by Schuchardt et al. (2021). But instead of using localized smoothing to capture the (soft) locality of a model, their approach leverages the fact that perturbations outside an output’s receptive field can be ignored. For softly local models, which have receptive fields covering the entire input, their certificate is no better than the naı̈ve certificate. Another novel insight underlying our approach is that various non-i.i.d. randomized smoothing certificates share a common interface, which makes our method applicable to diverse data types and perturbation models. In the next section, we formalize this common interface. We then discuss how it allows us to compute the collective certificate from Eq. 1 using (mixed-integer) linear programming. 5 COMMON INTERFACE FOR BASE CERTIFICATES A base certificate for a prediction yn = fn(x) is a set Hn ⊆ XDin of perturbed inputs that yn is provably robust to, i.e ∀x′ ∈ Hn : fn(x′) = yn. Note that base certificates do not have to be exact, but have to be sound, i.e. they do not have to specify all inputs to which the fn are robust but they must not contain any adversarial examples. As a common interface for base certificates, we propose that the sets Hn are parameterized by a weight vector w(n) ∈ RDin and a scalar η(n) that define a linear constraint on the element-wise distance between perturbed inputs and the clean input: H(n) = { x′ ∈ XDin ∣∣∣∣∣ Din∑ d=1 w (n) d · |x ′ d − xd|κ < η(n) } . (2) The weight vector encodes how robust yn is to perturbations of different components of the input. The scalar κ is important for collective robustness certification, because it encodes which collective perturbation model the base certificate is compatible with. For example, κ = 2 means that the base certificate can be used for certifying collective robustness to l2 perturbations. In the following, we present two base certificates implementing our interface: One for l2 perturbations of continuous data and one for perturbations of binary data. In Section B, we further present a certificate for binary data that can distinguish between adding and deleting bits and a certificate for l1 perturbations of continuous data. 
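A small sketch of the base certificate interface in Eq. 2 and of the naive collective certificate just described, restricted to the continuous l_p case. The single-prediction worst case used below (spend the whole budget eps^p on the largest-weight dimension) is the same reduction the paper states in Eq. 18 of Appendix A.4; function names are my own.

```python
import numpy as np

def in_base_certificate(x_pert, x_clean, w, eta, kappa):
    """Eq. (2): perturbed input x' lies in H^(n) iff
    sum_d w_d * |x'_d - x_d|^kappa < eta."""
    return float(np.sum(w * np.abs(x_pert - x_clean) ** kappa)) < eta

def naive_collective_certificate(weights, etas, eps, p):
    """Naive lower bound: each prediction is certified independently against the
    worst case within ||x'-x||_p <= eps. For a single prediction the adversary's
    best move is to put the whole budget eps^p on the largest weight, so
    prediction n is robust iff max(max_d w^(n)_d, 0) * eps**p < eta^(n)."""
    certified = 0
    for w, eta in zip(weights, etas):
        if max(float(np.max(w)), 0.0) * eps ** p < eta:
            certified += 1
    return certified
```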
All base certificates guarantee more robustness for parts of the input smoothed with a higher noise level. The certificates for continuous data are based on known results (Fischer et al., 2020; Eiras et al., 2021) and merely reformulated to match our proposed interface, so that they can be used as part of our collective certification procedure. The certificates for discrete data however are original and based on the novel concept of variance smoothing. Gaussian smoothing for l2 perturbations of continuous data The first base certificate is a generalization of Gaussian smoothing to anisotropic noise, a corollary of Theorem A.1 from (Fischer et al., 2020). In the following, diag(z) refers to a diagonal matrix with diagonal entries z and Φ−1 : [0, 1]→ R refers to the the standard normal inverse cumulative distribution function. Proposition 1. Given an output gn : RDin → Y, let fn(x) = argmaxy∈Y Prz∼N (x,Σ) [gn(z) = y] be the corresponding smoothed output with Σ = diag (σ)2 andσ ∈ RDin+ . Given an inputx ∈ RDin and smoothed prediction yn = fn(x), let q = Prz∼N (x,Σ) [gn(z) = yn]. Then, ∀x′ ∈ H(n) : fn(x ′) = yn with H(n) defined as in Eq. 2, wd = 1σd2 , η = ( Φ(−1)(q) )2 and κ = 2. Bernoulli variance smoothing for perturbations of binary data For binary data, we use a smoothing distribution F(x,θ) with θ ∈ [0, 1]Din that independently flips the d’th bit with probability θd, i.e. for x, z ∈ {0, 1}Din and z ∼ F(x,θ) we have Pr[zd 6= xd] = θd. A corresponding certificate could be derived by generalizing (Lee et al., 2019), which considers a single shared θ ∈ [0, 1] with ∀d : θd = θ. However, the cost for computing this certificate would be exponential in the number of unique values in θ. We therefore propose a more efficient alternative. Instead of constructing a smoothed classifier that returns the most likely labels of the base classifier (as discussed in Section 2), we construct a smoothed classifier that returns the labels with the highest expected softmax scores (similar to CDF-smoothing (Kumar et al., 2020)). For this smoothed model, we can compute a robustness certificate in constant time. The certificate requires determining both the expected value and variance of softmax scores. We therefore call this method variance smoothing. While we use it for binary data, it is a general-purpose technique that can be applied to arbitrary domains and smoothing distributions (see discussion in Section B.2). In the following, we assume the label set Y to consist of numerical labels {1, . . . , |Y|}, which simplifies our notation. Theorem 1. Given an output gn : {0, 1}Din → ∆|Y| mapping to scores from the |Y|-dimensional probability simplex, let fn(x) = argmaxy∈YEz∼F(x,θ) [gn(z)y] be the corresponding smoothed classifier with θ ∈ [0, 1]Din . Given an input x ∈ {0, 1}Din and smoothed prediction yn = fn(x), let µ = Ez∼F(x,θ) [gn(z)y] and σ2 = Varz∼F(x,θ) [gn(z)y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = ln ( (1−θd)2 θd + (θd) 2 1−θd ) , η = ln ( 1 + 1σ2 ( µ− 12 )2) and κ = 0. 6 COMPUTING THE COLLECTIVE ROBUSTNESS CERTIFICATE With our common interface for base certificates in place, we can discuss how to compute the collective robustness certificate minx′∈Bx ∑ n∈T I [ x′ ∈ H(n) ] from Eq. 1. The result bounds the number of predictions yn with n ∈ {1, . . . , Dout} that can be simultaneously attacked by the adversary. 
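Reading the two statements above off into code, Proposition 1 gives w_d = 1/sigma_d^2, eta = (Phi^{-1}(q))^2, kappa = 2, and Theorem 1 gives w_d = ln((1-theta_d)^2/theta_d + theta_d^2/(1-theta_d)), eta = ln(1 + (mu - 1/2)^2 / sigma^2), kappa = 0. A minimal sketch, assuming q > 1/2 and a positive variance estimate (both hold whenever the smoothed model actually commits to the prediction):

```python
import numpy as np
from scipy.stats import norm

def gaussian_base_certificate(sigma, q):
    """Proposition 1 (anisotropic Gaussian smoothing, l2 perturbations).
    sigma: per-dimension standard deviations; q: probability of the predicted
    label under the smoothing distribution (assumed > 1/2)."""
    w = 1.0 / sigma ** 2
    eta = norm.ppf(q) ** 2
    return w, eta, 2

def bernoulli_variance_base_certificate(theta, mu, var):
    """Theorem 1 (Bernoulli variance smoothing, binary perturbations).
    theta: per-dimension flip probabilities; mu, var: expected value and
    variance of the predicted label's softmax score under smoothing."""
    w = np.log((1 - theta) ** 2 / theta + theta ** 2 / (1 - theta))
    eta = np.log(1.0 + (mu - 0.5) ** 2 / var)
    return w, eta, 0
```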
In the following, we assume that the base certificates were obtained by using a smoothing distribution that is compatible with our lp collective perturbation model (i.e. κ = p), for example by using Gaussian noise for p = 2 or Bernoulli noise for p = 0. Inserting the definition of our base certificate interface from Eq. 2 and rewriting our perturbation model Bx = { x′ ∈ XDin | ||x′ − x||p ≤ } as{ x′ ∈ XDin | ∑Din d=1 |x′d − xd|p ≤ p } , our objective from Eq. 1 can be expressed as min x′∈XDin ∑ n∈T I [ Din∑ d=1 w (n) d · |x ′ d − xd|p < η(n) ] s.t. Din∑ d=1 |x′d − xd|p ≤ p. (3) We can see that the perturbed inputx′ only affects the element-wise distances |x′d−xd|p. Rather than optimizing x′, we can instead directly optimize these distances, i.e. determine how much adversarial budget is allocated to each input dimension. For this, we define a vector of variables b ∈ RDin+ (or b ∈ {0, 1}Din for binary data). Replacing sums with inner products, we can restate Eq. 3 as min b∈RDin+ ∑ n∈T I [ bTw(n) < η(n) ] s.t. sum{b} ≤ p. (4) In a final step, we replace the indicator functions in Eq. 4 with a vector of boolean variables t ∈ {0, 1}Dout . Define the constants η(n) = p ·min ( 0,mind w (n) d ) . Then, min b∈RDin+ ,t∈{0,1}Dout ∑ n∈T tn s.t. ∀n : bTw(n) ≥ tnη(n) + (1− tn)η(n), sum{b} ≤ p. (5) is equivalent to Eq. 4. The first constraint guarantees that tn can only be set to 0 if the l.h.s. is greater or equal η(n), i.e. only when the base certificate can no longer guarantee robustness. The term involving η(n) ensures that for tn = 1 the problem is always feasible2. Eq. 5 can be solved using any mixed-integer linear programming solver. While the resulting MILP bears some semblance to that of Schuchardt et al. (2021), it is conceptually different. When evaluating their base certificates, they mask out parts of the budget vector b based on a model’s strict locality, while we weigh the budget vector based on the soft locality guaranteed by the base certificates. In addition, thanks to the interface specified in Section 5, our problem only involves a single linear constraint per prediction, making it much smaller and more efficient to solve. Interestingly, when using randomized smoothing base certificates for binary data, our certificate subsumes theirs, i.e. can provide the same robustness guarantees (see Section D.2). Improving efficiency. Still, the efficiency of our certificate in Eq. 5. certificate can be further improved. In Section A, we show that partitioning the outputs into Nout subsets sharing the same smoothing distribution and the the inputs into Nin subsets sharing the same noise level (for example like in Fig. 1), as well as quantizing the base certificate parameters η(n) into Nbin bins reduces the number of variables and constraints from Din + Dout and Dout + 1 to Nin + Nout · Nbins and Nout · Nbins + 1, respectively.We can thus control the problem size independent of the data’s dimensionality. We further derive a linear relaxation of the mixed-integer problem, which can be more efficiently solved while preserving the soundness of the certificate. 7 LIMITATIONS The main limitation of our approach is that it assumes softly local models. While it can be applied to arbitrary multi-output classifiers, it may not necessarily result in better certificates than randomized smoothing with i.i.d. distributions. Furthermore, choosing the smoothing distributions requires some a-priori knowledge or assumptions about which parts of the input are how relevant to making a prediction. 
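As a concrete illustration of the optimization in Eq. 5, the following sketch solves its linearly relaxed version (the relaxation of Appendix A.4) with an off-the-shelf LP solver. The variable ordering [b; t] and the rearrangement of the per-prediction constraint into "less-than" form are my own choices; the relaxed optimum is a valid, possibly fractional, lower bound on the number of targeted predictions that remain robust.

```python
import numpy as np
from scipy.optimize import linprog

def collective_certificate_lp(weights, etas, eps, p, targeted=None):
    """Linearly relaxed collective certificate (Eq. 5): a lower bound on the
    number of targeted predictions that are simultaneously robust to any
    perturbation with ||x'-x||_p <= eps.
    weights: (D_out, D_in) base-certificate weight vectors w^(n).
    etas:    (D_out,) base-certificate parameters eta^(n)."""
    D_out, D_in = weights.shape
    if targeted is None:
        targeted = np.arange(D_out)
    budget = eps ** p
    eta_lo = budget * np.minimum(0.0, weights.min(axis=1))   # feasibility constants

    # Decision variables x = [b (D_in budget vars), t (D_out indicator vars)].
    c = np.zeros(D_in + D_out)
    c[D_in + np.asarray(targeted)] = 1.0                     # minimize sum of t_n, n in T

    # Per-prediction constraint: -w^(n).b + (eta_lo_n - eta_n) * t_n <= -eta_n
    A = np.zeros((D_out + 1, D_in + D_out))
    b_ub = np.zeros(D_out + 1)
    A[:D_out, :D_in] = -weights
    A[np.arange(D_out), D_in + np.arange(D_out)] = eta_lo - etas
    b_ub[:D_out] = -etas
    # Total budget constraint: sum(b) <= eps^p
    A[D_out, :D_in] = 1.0
    b_ub[D_out] = budget

    bounds = [(0, None)] * D_in + [(0, 1)] * D_out
    res = linprog(c, A_ub=A, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun                                           # relaxed lower bound
```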
Our experiments show that natural assumptions like homophily can be sufficient for choosing effective smoothing distributions. But doing so in other tasks may be more challenging. A limitation of (most) randomized smoothing certificates is that they use sampling to approximate the smoothed classifier. Because we use different smoothing distributions for different outputs, we can only use a fraction of the samples for each output. As discussed in Section A.1, we can alleviate this problem by sharing smoothing distributions among multiple outputs. Our experiments show that despite this issue, our method outperforms certificates that use a single smoothing distribution. Still, future work should try to improve the sample efficiency of randomized smoothing (for example by developing more methods for de-randomized smoothing (Levine & Feizi, 2020)).Any such advance could then be incorporated into our localized smoothing framework. 8 EXPERIMENTAL EVALUATION Our experimental evaluation has three objectives 1.) Verifying our main claim that localized randomized smoothing offers a better trade-off between accuracy and certifiable robustness than smoothing 2Because η(n) is the smallest value bTw(n) can take on, i.e. min b∈RDin+ bTw (n) d s.t. sum{b} ≤ p. with i.i.d. distributions. 2.) Determining to what extend the linear program underlying the proposed collective certificate strengthens our robustness guarantees. 3.) Assessing the efficacy of our novel variance smoothing certificate for binary data. Any of the used datasets and classifiers only serve as a means of comparing certificates. We thus use well-known and well-established architectures instead of overly focusing on maximizing prediction accuracy by using the latest SOTA models. We use two metrics to quantify certificate strength: Certified accuracy (i.e. the percentage of correct and certifiably robust predictions) and certified ratio (i.e. the percentage of certifiably robust predictions, regardless of correctness)3. As single-number metrics, we report the AUC of the certified accuracy/ratio functions w.r.t. adversarial budget (not to be confused with certifying some AUC metric). For localized smoothing, we evaluate both the naı̈ve collective certificate, i.e. certifying predictions independently (see Section 4), and the proposed LP-based certificate (using the linearly relaxed version from Appendix A.4). We compare our method to two baselines using i.i.d. randomized smoothing: The naı̈ve collective certificate and center smoothing (Kumar & Goldstein, 2021). For softly local models, the certificate of Schuchardt et al. (2021) is equivalent to the naı̈ve baseline. When used to certify the number of robust predictions, the segmentation certificate of Fischer et al. (2021) is at most as strong as the naı̈ve baseline (see Section C.4). Thus, our method is compared to all existing collective certificates listed in Section 2. In all experiments, we use Monte Carlo randomized smoothing. More details on the experimental setup can be found in Section E. 8.1 SEMANTIC SEGMENTATION Dataset and model. We evaluate our certificate for continuous data and l2 perturbations on the Pascal-VOC 2012 segmentation validation set. Training is performed on 10582 pairs of training samples extracted from SBD4 (Hariharan et al., 2011), To increase batch sizes and thus allow a more thorough investigation of different smoothing parameters, all images are downscaled to 50% of their original size. Our base model is a U-Net segmentation model with a ResNet-18 backbone. 
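Since the evaluation below reports certified accuracy, certified ratio, and the AUC of these curves, here is a short sketch of how those summary metrics can be computed from per-prediction certificates. Note this covers the naive per-prediction case with certified radii; for the collective LP the number of certified predictions is instead obtained directly per budget value. The threshold convention (radius >= eps) and the trapezoidal AUC are my own assumptions.

```python
import numpy as np

def certificate_curves(radii, correct, eps_grid):
    """Certified accuracy and certified ratio as functions of the adversarial
    budget, plus the area under each curve.
    radii:   per-prediction certified radii (0 where a prediction abstains).
    correct: boolean array, whether each prediction matches the ground truth."""
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    cert_ratio = np.array([(radii >= eps).mean() for eps in eps_grid])
    cert_acc = np.array([((radii >= eps) & correct).mean() for eps in eps_grid])
    auc_ratio = np.trapz(cert_ratio, eps_grid)
    auc_acc = np.trapz(cert_acc, eps_grid)
    return cert_acc, cert_ratio, auc_acc, auc_ratio

# Hypothetical per-pixel radii and correctness flags for one image.
rng = np.random.default_rng(0)
radii = np.abs(rng.normal(0.5, 0.3, size=1000))
correct = rng.random(1000) < 0.8
print(certificate_curves(radii, correct, eps_grid=np.linspace(0, 2, 21))[2:])
```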
To obtain accurate and robust smoothed classifiers, base models should be trained on the smoothing distribution. We thus train 51 different instances of our base model, augmenting the training data with a different σtrain ∈ {0, 0.01, . . . , 0.5}. At test time, when evaluating a baseline i.i.d. certificate with smoothing distribution N (0, σ), we load the model trained with σtrain = σ. To perform localized randomized smoothing, we choose parameters σmin, σmax ∈ R+ and partition all images into regular grids of size 4 × 6 (similar to example Fig. 1). To classify pixels in grid cell (i, j), we sample noise for grid cell (k, l) using N (0, σ′), with σ′ ∈ [σmin, σmax] chosen proportional to the distance of (i, j) and (k, l) (more details in Section E.2.1). As the base model, we load the one trained with σtrain = σmin. Using the same distribution at train and test time for the i.i.d. baselines but not for localized smoothing is meant to skew the results in the baseline’s favor. But, in Section E.2.3, we also repeat our experiments using the same base model for i.i.d. and localized smoothing. Evaluation. The main goal of our experiments on segmentation is to verify that localized smoothing can offer a better trade-off between accuracy and certifiable robustness. That is, for all or most σ, there are σmin, σmax such that the locally smoothed model has higher accuracy and certifiable collective robustness than i.i.d. smoothing baselines using N (0, σ). Because σ, σmin, σmax ∈ R+, we can not evaluate all possible combinations. We therefore use the following scheme: We focus on the case σ ∈ [0, 0.5], which covers all distributions used in (Kumar & Goldstein, 2021) and 3In the case of image segmentation, we compute these metrics per image and then average over the dataset. 4Also known as ”Pascal trainaug” (Fischer et al., 2021). First, we evaluate our two baselines for five σ ∈ {0.1, 0.2, 0.3, 0.4, 0.5}. This results in baseline models with diverse levels of accuracy and robustness (e.g. the accuracy of the naı̈ve baseline shrinks from 87.7% to 64.9% and the AUC of its certified accuracy grows from 0.17 to 0.644). We then test whether, for each of the σ, we can find σmin, σmax such that the locally smoothed models attains higher accuracy and is certifiably more robust. Finally, to verify that {0.1, 0.2, 0.3, 0.4, 0.5} were not just a particularly poor choice of baseline parameters, we fix the chosen σmin, σmax. We then perform a fine-grained search over σ ∈ [0, 0.5] with resolution 0.01 to find a baseline model that has at least the same accuracy and certifiable robustness (as measured by certificate AUC) as any of the fixed locally smoothed models. If this is not possible, this provides strong evidence that the proposed smoothing scheme and certificate indeed offer a better trade-off. Fig. 2 shows one example. For σ = 0.4, the naı̈ve i.i.d. baseline has an accuracy of 72.5%. With σmin = 0.25, σmax = 1.5, the proposed localized smoothing certificate yields both a higher accuracy of 76.4% and a higher certified accuracy for all . It can certify robustness for up to 1.825, compared to 1.45 of the baseline and the AUC of its certified accuracy curve is 43.1% larger. Fig. 2 also highlights the usefulness of the linear program we derived in Section 5: Evaluating the localized smoothing base certificates independently, i.e. computing the naı̈ve collective certificate (dotted orange line), is not sufficient for outperforming the baseline. 
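Before continuing with the comparison, here is a small sketch of the grid-based noise assignment described above (4 x 6 grid, noise level chosen between sigma_min and sigma_max in proportion to the distance between grid cells). The exact interpolation rule lives in the paper's Section E.2.1, so the linear scaling with normalized Euclidean cell distance used here is an assumption for illustration.

```python
import numpy as np

def localized_sigma_grid(rows, cols, sigma_min, sigma_max, out_cell):
    """Standard deviations used when classifying pixels in grid cell `out_cell`.
    Assumption: sigma grows linearly with the normalized Euclidean distance
    between the output cell and each input cell."""
    i0, j0 = out_cell
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    dist = np.sqrt((ii - i0) ** 2 + (jj - j0) ** 2)
    if dist.max() > 0:
        dist = dist / dist.max()
    return sigma_min + (sigma_max - sigma_min) * dist

# Noise grid for classifying pixels in the top-right cell of a 4 x 6 partition.
print(localized_sigma_grid(4, 6, sigma_min=0.25, sigma_max=1.5, out_cell=(0, 5)).round(2))
```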
But combining them via the proposed linear program drastically increases the certified accuracy The results for all other combinations of smoothing distribution parameters, both baselines and both metrics of certificate strength can be found in Section E.2.3. Tables 1 and 2 summarize the first part of our evaluation procedure, in which we optimize the localized smoothing parameters. Safe for one exception (with σ = 0.2, center smoothing has a lower accuracy, but slightly larger certified ratio), the locally smoothed models have the same or higher accuracy, but provide stronger robustness guarantees. The difference is particularly large for σ ∈ {0.3, 0.4, 0.5}, where the accuracy of models smoothed with i.i.d. noise drops off, while our localized smoothing distribution preserves the most relevant parts of the image to allow for high accuracy. Table 5 summarizes the second part of our evaluation scheme, in which we perform a fine-grained search over [0, 0.5]. We find that there is no σ such that either of the i.i.d. baselines can outperform any of the chosen locally smoothed models w.r.t. AUC of their certified accuracy or certified ratio curves. This is ample evidence for our claim that localized smoothing offers a better trade-off than i.i.d. smoothing. Also, the collective LPs caused little computational overhead (avg. 0.68 s per LP, more details in Section E.2.3). 8.2 NODE CLASSIFICATION Dataset and model. We evaluate our certificate for binary data on the Cora-ML node classification dataset. We use two different base-models: Approximate Personalized Propagation of Neural Predictions (APPNP) (Klicpera et al., 2019) and a 6-layer Graph Convolutional network (GCN) (Kipf & Welling, 2017). Both models have a receptive field that covers most or all of the graph, meaning they are softly local. For details on model and training parameters, see Section E.3.1. As center smoothing has only been derived for Gaussian smoothing, we only compare to the naı̈ve baseline. For both, the baseline and our localized smoothing certificate, we use sparsity-aware randomized smoothing (Bojchevski et al., 2020) , i.e. flip 1-bits and 0-bits with different probabilities (θ− and θ+, respectively), which allows us to certify different levels of robustness to deletions and additions of bits. With localized randomized smoothing, we use the variance smoothing base certificate derived in Section B.2.2. We choose the distribution parameters for localized smoothing based on an assumption of homophily, i.e. nearby nodes are most relevant for classifying a node. We partition the graph into 5 clusters and define parameters θ±min and θ ± max. When classifying a node in cluster i, we randomly smooth attributes in cluster j with θ+ij , θ − ij that are based on linearly interpolating in [θ−min, θ − max] and [θ − min, θ − max] based on the affinity of the clusters (details in Section E.3.1). Evaluation. We first evaluate the new variance-based certificate and compare it to the certificate derived by Bojchevski et al. (2020). For this, we use only one cluster, meaning we use the same smoothing distribution for both. Fig. 11 in Section E.3 shows that the variance certificate is weaker than the baseline for additions, but better for deletions. It appears sufficiently effective to be used as a base certificate and integrated into a stronger, collective certificate. The parameter space of our smoothing distributions is large. 
For the localized approach we have four continuous parameters, as we have to specify both the minimal and maximal noise values. Therefore, it is difficult to show that our approach achieves a better accuracy-robustness trade-off over the whole noise space. However, we can investigate the accuracy-robustness trade-off within some areas of this space. For the localized approach we choose a few fixed combinations of the noise parameters θ±min and θ±max. To show our claim, we then optimise the baselines with parameters in an interval around our θ+min and θ − min. This is a smaller space, as the baselines only have two parameters. We select the baseline whose certified accuracy curve has the largest AUC. We perform the search for the best baseline for the addition and deletion scenario independently, i.e., the best baseline model for addition and deletion does not have to be the same. In Fig. 3, we see the certified accuracy of an APPNP model for a varying number of attribute additions and deletions (left and right respectively). To find the best distribution parameters for the baselines, we evaluated combinations of θ+ ∈ {0.04, 0.055, 0.07} and θ− ∈ [0.1, . . . , 0.827], using 11 equally spaced values for the interval. For adversarial additions, the best baseline yields a certified accuracy curve with an AUC of 4.51 compared to our 5.65. The best baseline for deletions has an AUC of 7.76 compared to our 16.26. Our method outperforms these optimized baselines for most adversarial budgets, while maintaining the same clean accuracy (i.e. certified accuracy at = 0). Experiments with different noise parameters and classifiers can be found in Section E.3. In general, we find that we significantly outperform the baseline when certifying robustness to deletions, but often have weaker certificates for additions (which may be inherent to the variance smoothing base certificates). Due to the large continuous parameter space, we cannot claim that localized smoothing outperforms the naı̈ve baseline everywhere. However, our results show that, for the tested parameter regions, localized smoothing can provide a significantly better accuracy-robustness trade-off. We found that using the collective LP instead of naı̈vely combining the base certificates can result in much stronger certificates: The AUC of the certified accuracy curve (averaged over all experiments) increased by 38.8% and 33.6% for addition and deletion, respectively. The collective LPs caused little computational overhead (avg. 10.9 s per LP, more details in Section E.3.3). 9 CONCLUSION In this work, we have proposed the first collective robustness certificate for softly local multi-output classifiers. It is based on localized randomized smoothing, i.e. randomly smoothing different outputs using different non-i.i.d. smoothing distributions matching the model’s locality. We have shown how per-output certificates based on localized smoothing can be computed and that they share a common interface. This interface allows them to be combined into a strong collective robustness certificate. Experiments on image segmentation and node classification tasks demonstrate that localized smoothing can offer a better robustness-accuracy trade-off than existing randomized smoothing techniques. Our results show that locality is linked to robustness, which suggests the research direction of building more effective local models to robustly solve multi-output tasks. 
10 REPRODUCIBILITY STATEMENT We prove all theoretic results that were not already derived in the main text in Appendices A to C. To ensure reproducibility of the experimental results we provide detailed descriptions of the evaluation process with the respective parameters in Section E.2 and Section E.3. Code will be made available to reviewers via an anonymous link posted on OpenReview, as suggested by the guidelines. 11 ETHICS STATEMENT In this paper, we propose a method to increase the robustness of machine learning models against adversarial perturbations and to certify their robustness. We see this as an important step towards general usage of models in practice, as many existing methods are brittle to crafted attacks. Through the proposed method, we hope to contribute to the safe usage of machine learning. However, robust models also have to be seen with caution. As they are harder to fool, harmful purposes like mass surveillance are harder to avoid. We believe that it is still necessary to further research robustness of machine learning models as the positive effects can outweigh the negatives, but it is necessary to discuss the ethical implications of the usage in any specific application area. A.1 SHARING SMOOTHING DISTRIBUTIONS AMONG OUTPUTS In principle, our proposed certificate allows a different smoothing distribution Ψ(n) to be used per output gn of our base model. In practice, where we have to estimate properties of the smoothed classifier using Monte Carlo methods, this is problematic: Samples cannot be re-used, each of the many outputs requires its own round of sampling. We can increase the efficiency of our localized smoothing approach by partitioning our Dout outputs into Nout subsets that share the same smoothing distribution. When making smoothed predictions or computing base certificates, we can then reuse the same samples for all outputs within each subsets. More formally, we partition our Dout output dimensions into sets K(1), . . . ,K(Nout) with⋃̇Nout i=1 K(i) = {1, . . . , Dout}. (6) We then associate each set K(i) with a smoothing distribution Ψ(i). For each base model output gn with n ∈ K(i), we then use smoothing distribution Ψ(i) to construct the smoothed output fn, e.g. fn(x) = argmaxy∈Y Prz∼Ψ(i) [f(x+ z) = y] (note that we use a different smoothing paradigm for binary data, see Section 5). A.2 QUANTIZING CERTIFICATE PARAMETERS Recall that our base certificates from Section 5 are defined by a linear inequality: A prediction yn = fn(x) is robust to a perturbed input x′ ∈ XDin if ∑D d=1 w (n) d · |x′d − xd| p < η(n), for some p ≥ 0. The weight vectors w(n) ∈ RDin only depend on the smoothing distributions. A side of effect of sharing the same smoothing Ψ(i) among all outputs from a set K(i), as discussed in the previous section, is that the outputs also share the same weight vector w(i) ∈ RDin with ∀n ∈ K(i) : w(i) = w(n). Thus, for all smoothed outputs fn with n ∈ K(i), the smoothed prediction yn is robust if ∑D d=1 w (i) d · |x′d − xd| p < η(n). Evidently, the base certificates for outputs from a set K(i) only differ in their parameter η(n). Recall that in our collective linear program we use a vector of variables t ∈ {0, 1}Dout to indicate which predictions are robust according to their base certificates (see Section 6). If there are two outputs fn and fm with η(n) = η(m), then fn and fm have the same base certificate and their robustness can be modelled by the same indicator variable. 
Conversely, for each set of outputs K(i), we only need one indicator variable per unique η(n). By quantizing the η(n) within each subset K(i) (for example by defining equally sized bins between minn∈K(i) η(n) and maxn∈K(i) η(n) ), we can ensure that there is always a fixed number Nbins of indicator variables per subset. This way, we can reduce the number of indicator variables from Dout to Nout ·Nbins. To implement this idea, we define matrix of thresholds E ∈ RNout×Nbins with ∀i : min {Ei,:} ≤ minn∈K(i) ({ η(n) | n ∈ K(i) }) . We then define a function ξ : {1, . . . , Nout} × R→ R with ξ(i, η) = max ({Ei,j | j ∈ {1, . . . , Nbins ∧ Ei,j < η}) (7) that quantizes base certificate parameter η from output subset K(i) by mapping it to the next smallest threshold in Ei,:. For feasibility, like in Section 6 we need to compute the constant η(i) = min b∈RDin+ bTw (i) d s.t. sum{b} ≤ p to ensure feasibility of the problem. Note that, be- cause all outputs from a subset K(i) share the same weight vector w(i), we only have to compute this constant once per subset. We can bound the collective robustness of the targeted dimensions T of our vector of predictions y = f(x) as follows: min ∑ i∈{1,...,Nout} ∑ j∈{1,...,Nbins} Ti,j ∣∣∣{n ∈ T ∩K(i) ∣∣∣ξ (i, η(n)) = Ei,j }∣∣∣ (8) s.t. ∀i, j : bTw(i) ≥ Ti,jη(i) + (1− Ti,j)Ei,j , sum{b} ≤ p (9) b ∈ RDin+ , T ∈ {0, 1}Nout×Nbins . (10) Constraint Eq. 9 ensures that Ti,j is only set to 0 if bTw(i) ≥ Ei,j , i.e. all predictions from subset K(i) whose base certificate parameter η(n) is quantized to Ei,j are no longer robust. When this is the case, the objective function decreases by the number of these predictions. For Nout = Dout, Nbins = 1 and En,1 = η(n), we recover our general certificate from Section 6. Note that, if the quantization maps any parameter η(n) to a smaller number, the set H(n) becomes more restrictive, i.e. yn is considered robust to a smaller set of perturbed inputs. Thus, Eq. 8 is a lower bound on our general certificate from Section 6. A.3 SHARING NOISE LEVELS AMONG INPUTS Similar to how partitioning the output dimensions allows us to control the number of output variables t, partitioning the input dimensions and using the same noise level within each partition allows us to control the number of variables b that model the allocation of adversarial budget. Assume that we have partitioned our output dimensions into Nout subsets K(1), . . . ,K(Nout , with outputs in each subset sharing the same smoothing distribution Ψ(i), as explained in Section A.1. Let us now define Nin input subsets J(1), . . . , J(Nin) with⋃̇Nout i=1 J(i) = {1, . . . , Dout}. (11) Recall that a prediction yn = fn(x) with n ∈ K(i) is robust to a perturbed input x′ ∈ XDin if ∑D d=1 w (i) d · |x′d − xd| p < η(n) and that the weight vectors w(i) only depend on the smoothing distributions. Assume that we choose each smoothing distribution Ψ(i) such that ∀l ∈ {1, . . . , Nin},∀d, d′ ∈ J(l) : w(i)d = w (i) d′ , i.e. all input dimensions within each set J(l) have the same weight. This can be achieved by choosing Ψ(i) so that all dimensions in each input subset Jl are smoothed with the noise level (note that we can still use different Ψ(i), i.e. different noise levels for smoothing different sets of outputs). For example, one could use a Gaussian distribution with covariance matrix Σ = diag (σ)2 with ∀l ∈ {1, . . . , Nin},∀d, d′ ∈ J(l) : σd = σd′ . In this case, the evaluation of our base certificates can be simplified. 
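Before carrying out that simplification (continued in the next paragraph), here is a short sketch of the quantization step of Section A.2, i.e. the mapping xi in Eq. 7 applied to the eta parameters of one output subset. The equally spaced thresholds correspond to one row E_{i,:}; since eta values are only ever mapped downwards, the certified sets shrink and the bound in Eq. 8 stays sound. Function names are my own.

```python
import numpy as np

def quantize_etas(etas, n_bins):
    """Quantize base-certificate parameters within one output subset K^(i):
    each eta^(n) is mapped to the largest of `n_bins` equally spaced thresholds
    that does not exceed it, so at most n_bins indicator variables are needed."""
    etas = np.asarray(etas, dtype=float)
    thresholds = np.linspace(etas.min(), etas.max(), n_bins)   # the row E_{i,:}
    bins = np.clip(np.searchsorted(thresholds, etas, side="right") - 1, 0, n_bins - 1)
    counts = np.bincount(bins, minlength=n_bins)               # predictions per indicator T_{i,j}
    return thresholds, bins, counts

etas = np.array([0.31, 0.95, 0.52, 1.40, 0.61])                # hypothetical eta^(n) values
print(quantize_etas(etas, n_bins=3))
```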
Prediction yn = fn(x) is robust to a perturbed input x′ ∈ XDin if D∑ d=1 w (i) d · |x ′ d − xd| p < η(n) (12) = Nin∑ l=1 u(i) · ∑ d∈J(l) |x′d − xd| p < η(n), (13) with u ∈ RNin and ∀i ∈ {1, . . . , Nout},∀l ∈ {1, . . . , Nin},∀d ∈ J : uil = wid. That is, we can replace each weight vector w(i) that has one weight w(i)d per input dimension d with a smaller weight vector u(i) with one weight u(i)l per input subset J(l). For our linear program, this means that we no longer need a budget vector b ∈ RDin+ to model the element-wise distance |x′d − xd| p in each dimension d. Instead, we can use a smaller budget vector b ∈ RNin+ to model the overall distance within each input subset J(l), i.e. ∑ d∈J(l) |x′d − xd| p. Combined with the quantization of certificate parameters from the previous section, our optimization problem becomes min ∑ i∈{1,...,Nout} ∑ j∈{1,...,Nbins} Ti,j ∣∣∣{n ∈ T ∩K(i) ∣∣∣ξ (i, η(n)) = Ei,j }∣∣∣ (14) s.t. ∀i, j : bTu(i) ≥ Ti,jη(i) + (1− Ti,j)Ei,j , sum{b} ≤ p, (15) b ∈ RNin+ , T ∈ {0, 1}Nout×Nbins . (16) with u ∈ RNin and ∀i ∈ {1, . . . , Nout},∀l ∈ {1, . . . , Nin},∀d ∈ J : ωil = wid. For Nout = Dout, Nin = Din, Nbins = 1 and En,1 = η(n), we recover our general certificate from Section 6. When certifying robustness for binary data, we impose different constraints on b. To model that the adversary can not flip more bits than are present within each subset, we use a budget vector b ∈ NNin0 with ∀l ∈ {1, . . . , Nin} : bl ≤ ∣∣J(l)∣∣, instead of a continuous budget vector b ∈ RNin+ . A.4 LINEAR RELAXATION Combining the previous steps allows us to reduce the number of problem variables and linear constraints from Din + Dout and Dout + 1 to Nin + Nout · Nbins and Nout · Nbins + 1, respectively. Still, finding an optimal solution to the mixed-integer linear program may be too expensive. One can obtain a lower bound on the optimal value and thus a valid, albeit more pessimistic, robustness certificate by relaxing all to be continuous. When using the general certificate from Section 6, the binary vector t ∈ {0, 1}Dout can be relaxed to t ∈ [0, 1]Dout . When using the certificate with quantized base certificate parameters from Section A.2 or Section A.3, the binary matrix T ∈ [0, 1]Nout×Nbins can be relaxed to T ∈ [0, 1]Nout×Nbins . Conceptually, this means that predictions can be partially certified, i.e. tn ∈ (0, 1) or Ti,j ∈ (0, 1). In particular, a prediction can be partially certified even if we know that is impossible to attack under the collective perturbation model Bx = { x′ ∈ XDin | ||x′ − x||p ≤ } . Just like Schuchardt et al. (2021), who encountered the same problem with their collective certificate, we circumvent this issue by first computing a set L ⊆ T of all targeted predictions in T that are guaranteed to always be robust: L = { n ∈ T ∣∣∣∣∣ ( max x∈Bx D∑ d=1 w (n) d · |x ′ d − xd| p ) < η(n) } (17) = { n ∈ T ∣∣∣max(max{w(n)} · p, 0) < η(n)} . (18) The equality follows from the fact that the most effective way of attacking a prediction is to allocate all adversarial budget to the least robust dimension, i.e. the dimension with the largest weight – unless all weights are negative. Because we know that all predictions with indices in L are robust, we do not have to include them in the collective optimization problem and can instead compute |L|+ min x′∈Bx ∑ n∈T\L I [ x′ ∈ H(n) ] . (19) The r.h.s. optimization can be solved using the general collective certificate from Section 6 or any of the more efficient, modified certificates from previous sections. 
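A minimal sketch of computing the always-robust set L from Eq. 18 for the continuous l_p case; removing these predictions before solving the (relaxed) LP avoids the partial-certification issue described above. The helper below and its toy inputs are illustrative only.

```python
import numpy as np

def always_robust_set(weights, etas, eps, p, targeted):
    """Eq. (18): targeted predictions that stay robust no matter how the
    adversary splits its l_p budget, because even spending everything on the
    largest-weight input dimension cannot reach eta^(n)."""
    weights = np.asarray(weights, dtype=float)
    etas = np.asarray(etas, dtype=float)
    worst_case = np.maximum(weights.max(axis=1), 0.0) * eps ** p
    return [n for n in targeted if worst_case[n] < etas[n]]

weights = np.array([[2.0, 0.5, 0.1], [0.2, 0.1, 0.05]])   # hypothetical w^(n)
etas = np.array([1.5, 0.9])
print(always_robust_set(weights, etas, eps=1.0, p=2, targeted=[0, 1]))   # -> [1]
```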
When using the general collective certificate from Section 6 with binary data, the budget variables b ∈ {0, 1}Din can be relaxed to b ∈ [0, 1]Din . When using the modified collective certificate from Section A.3, the budget variables with b ∈ NNin0 can be relaxed to b ∈ R Nin + . The additional constraint ∀l ∈ {1, . . . , Nin} : bl ≤ ∣∣J(l)∣∣ can be kept in order to model that the adversary cannot flip (or partially flip) more bits than are present within each input subset J(l). B BASE CERTIFICATES In the following, we show why the base certificates presented in Section 5 hold and present alternatives for other collective perturbation models. B.1 GAUSSIAN SMOOTHING FOR l2 PERTURBATIONS OF CONTINUOUS DATA Proposition 1. Given an output gn : RDin → Y, let fn(x) = argmaxy∈Y Prz∼N (x,Σ) [gn(z) = y] be the corresponding smoothed output with Σ = diag (σ)2 andσ ∈ RDin+ . Given an inputx ∈ RDin and smoothed prediction yn = fn(x), let q = Prz∼N (x,Σ) [gn(z) = yn]. Then, ∀x′ ∈ H(n) : fn(x ′) = yn with H(n) defined as in Eq. 2, wd = 1σd2 , η = ( Φ(−1)(q) )2 and κ = 2. Proof. Based on the definition of the base certificate interface, we need to show that, ∀x′ ∈ H : fn(x ′) = yn with H = { x′ ∈ RDin ∣∣∣∣∣ Din∑ d=1 1 σ2d · |xd − x′d|2 < ( Φ−1(q) )2} . (20) Eiras et al. (2021) have shown that under the same conditions as above, but with a general covariance matrix Σ ∈ RDin×Din+ , a prediction yn is certifiably robust to a perturbed input x′ if√ (x− x′)Σ−1(x− x′) < 1 2 ( Φ−1(q)− Φ−1(q′) ) , (21) where q′ = maxy′n 6=yn Prz∼N (x,Σ) [gn(z) = y ′ n] is the probability of the second most likely prediction under the smoothing distribution. Because the probabilities of all possible predictions have to sum up to 1, we have q′ ≤ 1 − q. Since Φ−1 is monotonically increasing, we can obtain a lower bound on the r.h.s. of Eq. 21 and thus a more pessimistic certificate by substituting 1 − q for q′ (deriving such a ”binary certificate” from a ”multiclass certificate” is common in randomized smoothing and was already discussed in (Cohen et al., 2019)):√ (x− x′)Σ−1(x− x′) < 1 2 ( Φ−1(q)− Φ−1(1− q) ) , (22) In our case, Σ is a diagonal matrix diag (σ)2 with σ ∈ RDin+ . Thus Eq. 22 is equivalent to√√√√Din∑ d=1 (xd − x′d) 1 σ2d (xd − x′d) < 1 2 ( Φ−1(q)− Φ−1(1− q) ) . (23) Finally, using the fact that Φ−1(q)−Φ−1(1− q) = 2Φ−1(q) and eliminating the square root shows that we are certifiably robust if Din∑ d=1 1 σ2d · |xd − x′d|2 < ( Φ−1(q) )2 . (24) B.1.1 UNIFORM SMOOTHING FOR l1 PERTURBATIONS OF CONTINUOUS DATA An alternative base certificate for l1 perturbations is again due to Eiras et al. (2021). Using uniform instead of Gaussian noise later allows us to collective certify robustness to l1-norm-bound perturbations. In the following U(x,λ) with x ∈ RD, λ ∈ RD+ refers to a vector-valued random distribution in which the d-th element is uniformly distributed in [xd − λd, xd + λd]. Proposition 2. Given an output gn : RDin → Y, let f(x) = argmaxy∈Y Prz∼U(x,λ) [g(z) = y] be the corresponding smoothed classifier with λ ∈ RDin+ . Given an input x ∈ RDin and smoothed prediction y = f(x), let p = Prz∼U(x,λ) [g(z) = y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = 1/λd, η = Φ−1(q) and κ = 1. Proof. Based on the definition of H(n), we need to prove that ∀x′ ∈ H : fn(x′) = yn with H = { x′ ∈ RDin | Din∑ d=1 1 λd · |xd − x′d| < Φ−1(q) } , (25) Eiras et al. 
(2021) have shown that under the same conditions as above, a prediction yn is certifiably robust to a perturbed input x′ if Din∑ d=1 | 1 λd · (xd − x′d) | < 1 2 ( Φ−1(q)− Φ−1(1− q) ) , (26) where q′ = maxy′n 6=yn Prz∼U(x,λ) [gn(z) = y ′ n] is the probability of the second most likely prediction under the smoothing distribution. As in our previous proof for Gaussian smoothing, we can obtain a more pessimistic certificate by substituting 1−q for q′. Since Φ−1(q)−Φ−1(1−q) = 2Φ−1(q) and all λd are non-negative, we know that our prediction is certifiably robust if Din∑ d=1 1 λd · |xd − x′d| < Φ−1(p). (27) B.2 VARIANCE SMOOTHING We propose variance smoothing as a base certificate for binary data. Variance smoothing certifies predictions based on the mean and variance of the softmax score associated with a predicted label. It is in principle applicable to arbitrary data types. We focus on discrete data, but all results can be generalized from discrete to continuous data by replacing any sum over probability mass functions with integrals over probability density functions. We first derive a general form of variance smoothing before discussing our certificates for binary data in Section B.2.1 and Section B.2.2. Variance smoothing assumes that we make predictions by randomly smoothing a base model’s softmax scores. That is, given base model g : X→ ∆|Y| mapping from an arbitrary discrete input space X to scores from the |Y|-dimensional probability simplex ∆|Y|, we define the smoothed classifier f(x) = argmaxy∈YEz∼Ψ(x) [g(z)y]. Here, Ψ(x) is an arbitrary distribution over X parameterized by x, e.g a Normal distribution with mean x. The smoothed classifier does not return the most likely prediction, but the prediction associated with the highest expected softmax score. Given an input x ∈ X, smoothed prediction y = f(x) and a perturbed input x′ ∈ X, we want to determine whether f(x′) = y. By definition of our smoothed classifier, we know that f(x′) = y if y is the label with the highest expected softmax score. In particular, we know that f(x′) = y if y’s softmax score is larger than all other softmax scores combined, i.e. Ez∼Ψ(x′) [g(z)y] > 0.5 =⇒ f(x′) = y. (28) Computing Ez∼Ψ(x′) [g(z)y] exactly is usually not tractable – especially if we later want to evaluate robustness to many x′ from a whole perturbation model B ⊆ X. Therefore, we compute a lower bound on Ez∼Ψ(x′) [g(z)y]. If even this lower bound is larger than 0.5, we know that prediction y is certainly robust. For this, we define a set of functions H with gy ∈ H and compute the minimum softmax score across all functions from H: min h∈H Ez∼Ψ(x′) [h(z)] > 0.5 =⇒ f(x′) = y. (29) For our variance smoothing approach, we define H to be the set of all functions that have a larger or equal expected value and a smaller or equal variance, compared to our base model g applied to unperturbed input x. Let µ = Ez∼Ψ(x) [g(z)y] be the expected softmax score of our base model g for label y. Let σ2 = Ez∼Ψ(x) [ (g(z)y − ν)2 ] be the expected squared distance of the softmax score from a scalar ν ∈ R. (Choosing ν = µ yields the variance of the softmax score. An arbitrary ν is only needed for technical reasons related to Monte Carlo estimation Section C.2). Then, we define H = { h : X→ R ∣∣∣ Ez∼Ψ(x) [h(z)] ≥ µ ∧ Ez∼Ψ(x) [(h(z)− ν)2] ≤ σ2} (30) Clearly, by the definition of µ and σ2, we have gy ∈ H. Note that we do not restrict functions from H to the domain [0, 1], but allow arbitrary real-valued outputs. By evaluating Eq. 28 with H defined as in Eq. 
29, we can determine if our prediciton is robust. To compute the optimal value , we need the following two Lemmata: Lemma 1. Given a discrete set X and the set Π of all probability mass functions over X, any two probability mass functions π1, π2 ∈ Π fulfill∑ z∈X π2(z) π1(z) π2(z) ≥ 1. (31) Proof. For a fixed probability mass function π1, Eq. 31 is lower-bounded by the minimal expected likelihood ratio that can be achieved by another π̃(z) ∈ Π:∑ z∈X π2(z) π1(z) π2(z) ≥ min π̃∈Π ∑ z∈X π̃(z) π1(z) π̃(z). (32) The r.h.s. term can be expressed as the constrained optimization problem min π̃ ∑ z∈X π̃(z) π1(z) π̃(z) s.t. ∑ z∈X π̃(z) = 1 (33) with the corresponding dual problem max λ∈R min π̃ ∑ z∈X π̃(z) π1(z) π̃(z) + λ ( −1 + ∑ z∈X π̃(z) ) . (34) The inner problem is convex in each π̃(z). Taking the gradient w.r.t. to π̃(z) for all z ∈ X shows that it has its minimum at ∀z ∈ X : π̃(z) = −λπ1(z)2 . Substituting into Eq. 34 results in max λ∈R ∑ z∈X λ2π1(z) 2 4π1(z) + λ ( −1− ∑ z∈X λπ1(z) 2 ) (35) = max λ∈R −λ2 ∑ z∈X π1(z) 4 − λ (36) = max λ∈R −λ 2 4 − λ (37) = 1. (38) Eq. 37 follows from the fact that π1(z) is a valid probability mass function. Due to duality, the optimal dual value 1 is a lower bound on the optimal value of our primal problem Eq. 31. Lemma 2. Given a probability distribution D over a R and a scalar ν ∈ R, let µ = Ez∼D and ξ = Ez∼D [ (z − ν)2 ] . Then ξ ≥ (µ− ν)2 Proof. Using the definitions of µ and ξ, as well as some simple algebra, we can show: ξ ≥ (µ− ν)2 (39) ⇐⇒ Ez∼D [ (z − ν)2 ] ≥ µ2 − 2µν + ν2 (40) ⇐⇒ Ez∼D [ z2 − 2zν + ν2 ] ≥ µ2 − 2µν + ν2 (41) ⇐⇒ Ez∼D [ z2 − 2zν + ν2 ] ≥ µ2 − 2µν + ν2 (42) ⇐⇒ Ez∼D [ z2 ] − 2µν + ν2 ≥ µ2 − 2µν + ν2 (43) ⇐⇒ Ez∼D [ z2 ] ≥ µ2 (44) It is well known for the variance that Ez∼D [ (z − µ)2 ] = Ez∼D [ z2 ] − µ2. Because the variance is always non-negative, the above inequality holds. Using the previously described approach and lemmata, we can show the soundness of the following robustness certificate: Theorem 3. Given a model g : X → ∆|Y| mapping from discrete set X to scores from the |Y|-dimensional probability simplex, let f(x) = argmaxy∈YEz∼Ψ(x) [g(z)y] be the corresponding smoothed classifier with smoothing distribution Ψ(x) and probability mass function πx(z) = Prz̃∼Ψ(x) [z̃ = z]. Given an input x ∈ X and smoothed prediction y = f(x), let µ = Ez∼Ψ(x) [g(z)y] and σ2 = Ez∼Ψ(x) [ (g(z)y − ν)2 ] with ν ∈ R. If ν ≤ µ, we know that f(x′) = y if ∑ z∈X πx′(z) 2 πx(z) < 1 + 1 σ2 − (µ− ν)2 ( µ− 1 2 ) . (45) Proof. Following our discussion above, we know that f(x′) = y if Ez∼Ψ(x′) [g(z)y] > 0.5 with H defined as in Section 5. We can compute a (tight) lower bound on minh∈H Ez∼Ψ(x′) by following the functional optimization approach for randomized smoothing proposed by Zhang et al. (2020). That is, we solve a dual problem in which we optimize the value h(z) for each z ∈ X. By the definition of the set H, our optimization problem is min h:X→R Ez∼Ψ(x′) [h(z)] s.t. Ez∼Ψ(x) [h(z)] ≥ µ, Ez∼Ψ(x) [ (h(z)− ν)2 ] ≤ σ2. The corresponding dual problem with dual variables α, β ≥ 0 is max α,β≥0 min h:X→R Ez∼Ψ(x′) [h(z)] +α ( µ− Ez∼Ψ(x) [h(z)] ) + β ( Ez∼Ψ(x) [ (h(z)− ν)2 ] − σ2 ) . 
(46) We first move move all terms that don’t involve h out of the inner optimization problem: = max α,β≥0 αµ−βσ2 + min h:X→R Ez∼Ψ(x′) [h(z)]−αEz∼Ψ(x) [h(z)]+βEz∼Ψ(x) [ (h(z)− ν)2 ] (47) Writing out the expectation terms and combining them into one sum (or – in the case of continuous X – one integral), our dual problem becomes = max α,β≥0 αµ− βσ2 + min h:X→R ∑ z∈X h(z)πx′(z)− αh(z)πx(z) + β (h(z)− ν)2 πx(z) (48) (recall that πx′ and πx′ refer to the probability mass functions of the smoothing distributions). The inner optimization problem can be solved by finding the optimal h(z) in each point z: = max α,β≥0 αµ− βσ2 + ∑ z∈X min h(z)∈R h(z)πx′(z)− αh(z)πx(z) + β (h(z)− ν)2 πx(z) (49) Because β ≥ 0, each inner optimization problem is convex in h(z). We can thus find the optimal h∗(z) by setting the derivative to zero: d dh(z) h(z)πx′(z)− αh(z)πx(z) + β (h(z)− ν)2 πx(z) ! = 0 (50) ⇐⇒ πx′(z)− απx(z) + 2β (h(z)− ν)πx(z) ! = 0 (51) =⇒ h∗(z) = − πx ′(z) 2βπx(z) + α 2β + ν. (52) Substituting into Eq. 48 and simplifying leaves us with the dual problem max α,β≥0 αµ− βσ2 − α 2 4β + α 2β − αν + ν − 1 4β ∑ z∈X πx′(z) 2 πx(z) (53) In the following, let us use ρ = ∑ z∈X πx′ (z) 2 πx(z) as a shorthand for the expected likelihood ratio. The problem is concave in α. We can thus find the optimum α∗ by setting the derivative to zero, which gives us α∗ = 2β(µ− ν) + 1. Because β ≥ 0 and ou theorem assumes that ν ≤ µ, α∗ is a feasible solution to the dual problem. Substituting into Eq. 53 and simplifying results in max β≥0 α∗µ− βσ2 − α ∗2 4β + α∗ 2β − α∗ν + ν − 1 4β ρ (54) = max β≥0 β ( (µ− ν)2 − σ2 ) + µ+ 1 4β (1− ρ) . (55) Lemma 1 shows that the expected likelihood ratio ρ is always greater than or equal to 1. Lemma 2 shows that (µ− ν)2 − σ2 ≤ 0. Therefore Eq. 55 is concave in β. The optimal value of β can again be found by setting the derivative to zero: β∗ = √ 1− ρ 4 ((µ− ν)2 − σ2) . (56) Recall that our theorem assumes σ2 ≥ (µ− ν)2 and thus β∗ is real valued. Substituting into Eq. 56 shows that the maximum of our dual problem is µ+ √ (1− p) ((µ− ν)2 − σ2). (57) By duality, this is a lower bound on our primal problem minh∈H Ez∼Ψ(x′) [h(z)]. We know that our prediction is certifiably robust, i.e. f(x) = y, if minh∈H Ez∼Ψ(x′) [h(z)] > 0.5. So, in particular, our prediction is robust if µ+ √ (1− ρ) ((µ− ν)2 − σ2) > 0.5 (58) ⇐⇒ ρ < 1 + 1 σ2 − (µ− ν)2 ( µ− 1 2 )2 (59) ⇐⇒ ∑ z∈X πx′(z) 2 πx(z) < 1 + 1 σ2 − (µ− ν)2 ( µ− 1 2 )2 (60) The last equivalence is the result of inserting the definition of the expected likelihood ratio ρ. With Theorem 3 in place, we can certify robustness for arbitrary smoothing distributions, assuming we can compute the expected likelihood ratio. When we are working with discrete data and the smoothing distributions factorize (but are not necessarily i.i.d.), this can be done efficiently, as the two following base certificates for binary data demonstrate. B.2.1 BERNOULLI VARIANCE SMOOTHING FOR PERTURBATIONS OF BINARY DATA We begin by proving the base certificate presented in Section 5. Recall that we we use a smoothing distribution F(x,θ) with θ ∈ [0, 1]Din that independently flips the d’th bit with probability θd, i.e. for x, z ∈ {0, 1}Din and z ∼ F(x,θ) we have Pr[zd 6= xd] = θd. Theorem 1. Given an output gn : {0, 1}Din → ∆|Y| mapping to scores from the |Y|-dimensional probability simplex, let fn(x) = argmaxy∈YEz∼F(x,θ) [gn(z)y] be the corresponding smoothed classifier with θ ∈ [0, 1]Din . 
Given an input x ∈ {0, 1}Din and smoothed prediction yn = fn(x), let µ = Ez∼F(x,θ) [gn(z)y] and σ2 = Varz∼F(x,θ) [gn(z)y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = ln ( (1−θd)2 θd + (θd) 2 1−θd ) , η = ln ( 1 + 1σ2 ( µ− 12 )2) and κ = 0. Proof. Based on our definition of the base certificate interface from Section 5, we must show that ∀x′ ∈ H : fn(x′) = yn with H = { x′ ∈ {0, 1}Din ∣∣∣∣∣ Din∑ d=1 ln ( (1− θd)2 θd + (θd) 2 1− θd ) · |x′d − xd|0 < ln ( 1 + 1 σ2 ( µ− 1 2 )2)} , (61) Because all bits are flipped independently, our probability mass function πx(z) = Prz̃∼Ψ(x) [z̃ = z] factorizes: πx(z) = Din∏ d=1 πxd(zd) (62) with πxd(zd) = { θd if zd 6= xd 1− θd else . (63) Thus, our expected likelihood ratio can be written as ∑ z∈{0,1}Din πx′(z) 2 πx(z) = ∑ z∈{0,1}Din Din∏ d=1 πx′d(zd) 2 πxd(zd) = Din∏ d=1 ∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) . (64) For each dimension d, we can distinguish two cases: If both the perturbed and unperturbed input are the same in dimension d, i.e. x′d = xd, then πx′ d (z) πxd (z) = 1 and thus ∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) = ∑ zd∈{0,1} πx′d(zd) = θd + (1− θd) = 1. (65) If the perturbed and unperturbed input differ in dimension d, then∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) = (1− θd)2 θd + (θd) 2 1− θd . (66) Therefore, the expected likelihood ratio is Din∏ d=1 ∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) = Din∏ d=1 ( (1− θd)2 θd + (θd) 2 1− θd )|x′d−xd| . (67) Due to Theorem 3 (and using ν = µ when computing the variance), we know that our prediction is robust, i.e. fn(x′) = yn, if ∑ z∈{0,1}Din πx′(z) 2 πx(z) < 1 + 1 σ2 ( µ− 1 2 )2 (68) ⇐⇒ Din∏ d=1 ( (1− θd)2 θd + (θd) 2 1− θd )|x′d−xd| < 1 + 1 σ2 ( µ− 1 2 )2 (69) ⇐⇒ Din∑ d=1 ln ( (1− θd)2 θd + (θd) 2 1− θd ) |x′d − xd| < ln ( 1 + 1 σ2 ( µ− 1 2 )2) . (70) Because xd and x′d are binary, the last inequality is equivalent to Din∑ d=1 ln ( (1− θd)2 θd + (θd) 2 1− θd ) |x′d − xd|0 < ln ( 1 + 1 σ2 ( µ− 1 2 )2) . (71) B.2.2 SPARSITY-AWARE VARIANCE SMOOTHING FOR PERTURBATIONS OF BINARY DATA Sparsity-aware randomized smoothing (Bojchevski et al., 2020) is an alternative smoothing approach for binary data. It uses
1. What is the focus and contribution of the paper regarding randomized smoothing for multi-output classifiers? 2. What are the strengths of the proposed approach, particularly in terms of its intuitive use of anisotropic smoothing? 3. What are the weaknesses of the paper, especially regarding its theoretical analysis and experimental evaluation? 4. Do you have any concerns about the novelty and contributions of the paper compared to prior works? 5. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper leverages the recent anisotropic certificates for randomized smoothing for certifying multi-output classifiers. In particular, it leverages anisotropic Gaussian and Bernoulli smoothing for better collective robustness. Experimental evaluation was conducted on semantic segmentation and node classification to demonstrate the effectiveness of the proposed method.
Review This paper has many merits: The motivation behind this work is clearly stated. The use of anisotropic smoothing is intuitive. The tasks on which the experiments are conducted are in line with the motivation. However, there are several concerns in this work that need to be addressed: While the major part of this work is dedicated to the theoretical analysis, it is not clear to me which parts belong to the contributions of this work and which parts are restatements from other work. For example: In terms of the theoretical results in this work in Sections 4, 5, and 6: what exactly is new (considered as a contribution)? For example, Propositions 1 and 4 are just a special case of the results of Eiras et al. for when the covariance matrix is diagonal. In fact, with the formulation setup presented in Section 2, a generalization of Proposition 1 can be found in Appendix A of [1]. In terms of the analysis in Sections 4 and 6, what exactly is new? For example, the bound derived in Equation (1) and the analysis in (3-6) are very similar to the results in Schuchardt et al. In terms of experiments: it is mentioned that "showcasing state-of-the-art accuracy on datasets is not part of our objective". Why? If the proposed collective anisotropic certificate is preferable, then it is necessary to show that it improves the best baselines. What is the importance of the certified ratio metric? Why not report either certified accuracy or certified AUC in the case of segmentation? While two models could have similar certified ratios, their accuracy could differ significantly. Since the analysis of this work follows the work of Schuchardt et al., a direct comparison on the strictly local setup should be presented. The experiments conducted include a single benchmark per task. The experiments should include multiple datasets to check whether the assumptions related to the use of anisotropic smoothing hold or not (e.g. homophily). The writing of the paper could be significantly improved. There are several parts where the text refers to tables/figures in the appendix without mentioning the appendix. Also, please consider moving more experiments from the appendix to the main paper. Moreover, in the caption of Figure 1, σmin is mentioned twice; is that a typo? It is also repeated on page 8 in the second paragraph. [1]: Certified Defense to Image Transformations via Randomized Smoothing, NeurIPS 2020.
ICLR
Title Localized Randomized Smoothing for Collective Robustness Certification Abstract Models for image segmentation, node classification and many other tasks map a single input to multiple labels. By perturbing this single shared input (e.g. the image) an adversary can manipulate several predictions (e.g. misclassify several pixels). A recent collective robustness certificate provides strong guarantees on the number of predictions that are simultaneously robust. This method is however limited to strictly local models, where each prediction is associated with a small receptive field. We propose a more general collective certificate for the larger class of softly local models, where each output is dependent on the entire input but assigns different levels of importance to different input regions (e.g. based on their proximity in the image). The certificate is based on our novel localized randomized smoothing approach, where the random perturbation strength for different input regions is proportional to their importance for the outputs. The resulting locally smoothed model yields strong collective guarantees while maintaining high prediction quality on both image segmentation and node classification tasks. 1 INTRODUCTION There is a wide range of tasks that require models making multiple predictions based on a single input. For example, semantic segmentation requires assigning a label to each pixel in an image. When deploying such multi-output classifiers in practice, their robustness should be a key concern. After all – just like simple classifiers (Szegedy et al., 2014) – they can fall victim to adversarial attacks (Xie et al., 2017; Zügner & Günnemann, 2019; Belinkov & Bisk, 2018). Even without an adversary, random noise or measuring errors could cause one or multiple predictions to unexpectedly change. In the following, we derive a method that provides provable guarantees on how many predictions can be changed by an adversary. Since all outputs operate on the same input, they also have to be attacked simultaneously by choosing a single perturbed input. While attacks on a single prediction may be easy, attacks on different predictions may be mutually exclusive. We have to explicitly account for this fact to obtain a proper collective robustness certificate that provides tight bounds. There already exists a dedicated collective robustness certificate for multi-output classifiers (Schuchardt et al., 2021), but it is only benefical for models we call strictly local, where each output depends only on a small, well-defined subset of the input. One example are graph neural networks that classify each node in a graph based only on its neighborhood. Multi-output classifiers used in practice, however, are often only softly local. While – unlike strictly local models – all of their predictions are in principle dependent on the entire input, each output may assign different importance to different components. For example, deep convolutional networks used for image segmentation can have very small effective receptive fields (Luo et al., 2016; Liu et al., 2018b), i.e. primarily use a small region of the input in labeling each pixel. Many models used in node classification are based on the homophily assumption that connected nodes are mostly of the same class. Thus, they primarily use features from neighboring nodes to classify each node. Even if an architecture is not inherently softly local, a model may learn a softly local mapping through training. 
For example, a transformer (Vaswani et al., 2017) can in principle attend to any part of an input sequence. However, in practice the learned attention maps may be ”sparse”, with the prediction for each token being determined primarily by a few (not necessarily nearby) tokens (Shi et al., 2021). While an adversarial attack on a single prediction of a softly local model is conceptually no different from that on a single-output classifier, attacking multiple predictions simultaneously can be much more challenging. By definition, adversarial attacks have to be unnoticeable, meaning the adversary only has a limited budget for perturbing the input. When each output is focused on a different part of the input, the adversary has to decide on where to allocate their adversarial budget and may be unable to attack all outputs at once. Our collective robustness certificate explicitly accounts for this budget allocation problem faced by the adversary and can thus provide stronger robustness guarantees. Our certificate is based on randomized smoothing (Liu et al., 2018a; Lécuyer et al., 2019; Cohen et al., 2019). Randomized smoothing is a versatile black-box certification method that has originally been proposed for single-output classifiers. Instead of directly analysing a model, it constructs a smoothed classifier that returns the most likely prediction of the model under random perturbations of its input. One can then use statistical methods to certify the robustness of this smoothed classifier. We discuss more details in Section 2. Randomized smoothing is typically used with i.i.d. noise: Each part of the input (e.g. each pixel) independently undergoes random perturbations sampled from the same noise distribution. One can however also use non-i.i.d. noise (Eiras et al., 2021). This results in a smoothed classifier that is certifiably more robust to parts of the input that are smoothed with higher noise levels (e.g. larger standard deviation). We apply randomized smoothing to softly-local multi-output classifiers in a scheme we call localized randomized smoothing: Instead of using the same smoothing distribution for all outputs, we randomly smooth each output (or set of outputs) using a different non-i.i.d. distribution that matches its inherent soft locality. Using a low noise level for the most relevant parts of the input allows us to retain a high prediction quality (e.g. accuracy). Less relevant parts of the input can be smoothed with a higher noise level. The resulting certificates (one per output) explicitly quantify how robust each prediction is to perturbations of which section of the input – they are certificates of soft locality. After certifying each prediction independently using localized randomized smoothing, we construct a (mixed-integer) linear program that combines these per-prediction base certificates into a collective certificate that provably bounds the number of simultaneously attackable predictions. This linear program explicitly accounts for soft locality and the budget allocation problem it causes for the adversary. This allows us to prove much stronger guarantees of collective robustness than simply certifying each prediction independently. Our core contributions are: • Localized randomized smoothing, a novel smoothing scheme for multi-output classifiers. • A variance smoothing method for efficiently certifying smoothed models on discrete data. • A collective certificate that leverages our identified common interface for base certificates. 
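To make the localized smoothing idea above more tangible, the following is a small, self-contained toy sketch in Python. The 1-D layout, the linear-in-distance interpolation, and the specific values of sigma_min and sigma_max are illustrative assumptions, not the paper's exact parameterization (the grid-based scheme actually used for segmentation is described in the experiments section).

```python
import numpy as np

# Toy illustration of localized smoothing (an assumed scheme, not the paper's exact one):
# for a 1-D input with D components and D outputs, output n is smoothed with anisotropic
# Gaussian noise whose standard deviation grows linearly with the normalized distance
# |d - n|, from sigma_min at the output's own position to sigma_max at the far end.
def localized_sigmas(D, sigma_min=0.25, sigma_max=1.5):
    positions = np.arange(D)
    sigmas = np.empty((D, D))
    for n in range(D):
        dist = np.abs(positions - n) / max(D - 1, 1)      # normalized distance in [0, 1]
        sigmas[n] = sigma_min + dist * (sigma_max - sigma_min)
    return sigmas                                         # sigmas[n, d]: noise std for input d, output n

sig = localized_sigmas(8)
print(np.round(sig[0], 2))    # output 0: low noise for nearby components, high noise far away
```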
2 BACKGROUND AND RELATED WORK Randomized smoothing. Randomized smoothing is a flexible certification technique that can be used for various data types, perturbation models and tasks. For simplicity, we focus on a classification certificate for l2 perturbations (Cohen et al., 2019). Assume we have a continuous D-dimensional input space RD, a label set Y and a classifier g : RD → Y. We can use isotropic Gaussian noise with standard deviation σ ∈ R+ to construct the smoothed classifier f = argmaxy∈Y Prz∼N (x,σ) [g(z) = y] that returns the most likely prediction of base classifier g under the input distribution 1. Given an input x ∈ RD and the smoothed prediction y = f(x), we want to determine whether the prediction is robust to all l2 perturbations of magnitude , i.e. whether ∀x′ : ||x′−x||2 ≤ : f(x′) = y. Let q = Prz∼N (x,σ) [g(x) = y] be the probability of g predicting label y. The prediction of our smoothed classifier is robust if < σΦ−1(q) (Cohen et al., 2019). This result showcases a trade-off we alluded to in the previous section: The certificate can become stronger if the noise-level (here σ) is increased. But doing so could also lower the accuracy of the smoothed classifier or reduce q and thus weaken the certificate. White-box certificates for multi-output classifiers. There are multiple recent methods for certifying the robustness of specific multi-output models (see, for example, (Tran et al., 2021; Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2020; Ko et al., 2019; Ryou et al., 2021; Shi et al., 2020; Bonaert et al., 2021)) by analyzing their specific architecture and weights. They are however not designed to certify collective robustness. They can only determine independently for each prediction whether or not it can be adversarially attacked. Collective robustness certificates. Most directly related to our work is the certificate of Schuchardt et al. (2021). Like ours, it combines many per-prediction certificates into a collective certificate. But, unlike our novel localized smoothing approach, their certification procedure is only beneficial for strictly local models, i.e. models whose outputs operate on small subsets of the input. Furthermore, their certificate assumes binary data, while our certificate defines a common interface for various data types and perturbation models. A more detailed comparison can be found in Section D. Recently, Fischer et al. (2021) proposed a certificate for semantic segmentation. They consider a different notion of collective robustness: They are interested in determining whether all predictions are robust. In Section C.4 we discuss their method in detail and show that, when used for certifying our notion of collective robustness (i.e. the number of robust predictions), their method is no better than certifying each output independently using the certificate of Cohen et al. (2019). Furthermore, our certificate can be used to provide equally strong guarantees for their notion of collective robustness by checking whether the number of certified predictions equals the overall number of predictions. Another method that can be used for certifying collective robustness is center smoothing (Kumar & Goldstein, 2021). Center smoothing bounds how much a vector-valued prediction changes w.r.t to a distance function under adversarial perturbations. With the l0 pseudo-norm as the distance function, center smoothing bounds how many predictions of a classifier can be simultaneously changed. Randomized smoothing with non-i.i.d. 
noise. While not designed for certifying collective robustness, two recent certificates for non-i.i.d. Gaussian (Fischer et al., 2020) and uniform smoothing (Eiras et al., 2021) can be used as a component of our collective certification approach: They can serve as per-prediction base certificates, which can then be combined into our stronger collective certificate (more details in Section 4) . Note that we do not use the procedure for optimizing the smoothing distribution proposed by Eiras et al. (2021), as this would enable adversarial attacks on the smoothing distribution itself and invalidate the certificate (see discussion by Wang et al. (2021)). 3 COLLECTIVE THREAT MODEL Before certifying robustness, we have to define a threat model, which specifies the type of model that is attacked, the objective of the adversary and which perturbations they are allowed to use. We assume that we have a multi-output classifier f : XDin → YDout , that maps from a Din-dimensional vector space to Dout labels from label set Y. We further assume that this classifier f is the result of randomly smoothing a base classifier g, as discussed in Section 2. To simplify our notation, we write fn to refer to the function x 7→ f(x)n that outputs the n-th label. Given this multi-output classifier f , an input x ∈ XDin and the resulting vector of predictions y = f(x), the objective of the adversary is to cause as many predictions from a set of targeted indices T ⊆ {1, . . . , Dout} to change. That is, their objective is minx′∈Bx ∑ n∈T I [fn(x ′) = yn], where Bx ⊆ XDin is the perturbation model. Importantly, note that the minimization operator is outside the sum, meaning the predictions have to 1In practice, all probabilities have to be estimated using Monte Carlo sampling (see discussion in Section C). be attacked using a single input. As is common in robustness certification, we assume a norm-bound perturbation model. That is, given an input x ∈ XDin , the adversary is only allowed to use perturbed inputs from the set Bx = { x′ ∈ XDin | ||x′ − x||p ≤ } with p, ≥ 0. 4 A RECIPE FOR COLLECTIVE CERTIFICATES Before discussing technical details, we provide a high-level overview of our method. In localized randomized smoothing, we assign each output gn of a base classifier g its own smoothing distribution Ψ(n) that matches our assumptions or knowledge about the base classifier’s soft locality, i.e. for each n ∈ {1, . . . , Dout} choose a Ψ(n) that induces more noise in input components that are less relevant for gn. For example, in Fig. 1, we assume that far-away regions of the image are less relevant and thus perturb pixels in the bottom left with more noise when classifying pixels in the top-right corner. The chosen smoothing distributions can then be used to construct the smoothed classifier f . Given an input x ∈ XDin and the corresponding smoothed prediction y = f(x), randomized smoothing makes it possible to compute per-prediction base certificates. That is, for each yn, one can compute a set H(n) ⊆ XDin of perturbed inputs that the prediction is robust to, i.e. ∀x′ ∈ Hn : fn(x ′) = yn. Our motivation for using non-i.i.d. distributions is that the H(n) will guarantee more robustness for input dimensions smoothed with more noise, i.e. quantify model locality. The objective of our adversary is minx′∈Bx ∑ n∈T I [fn(x ′) = yn] with collective perturbation model Bx ⊆ XDin . That is, they want to change as many predictions from the targeted set T as possible. 
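Both the i.i.d. Gaussian certificate recapped in Section 2 and the per-prediction base certificates used here depend on probabilities such as q, which in practice are estimated via Monte Carlo sampling (see the footnote above). As a rough sketch, assuming a Clopper-Pearson confidence bound and illustrative sample counts, the per-prediction i.i.d. baseline certificate could be computed as follows; the function names and the confidence level alpha are not from the paper.

```python
import numpy as np
from scipy.stats import beta, norm

def lower_bound_q(successes, samples, alpha=0.001):
    """One-sided Clopper-Pearson lower confidence bound on q = Pr[g(z) = y],
    estimated from `successes` agreeing predictions among `samples` noise draws."""
    if successes == 0:
        return 0.0
    return float(beta.ppf(alpha, successes, samples - successes + 1))

def iid_gaussian_radius(q, sigma):
    """Cohen et al. (2019): the smoothed prediction is robust to all l2 perturbations
    of norm below sigma * Phi^{-1}(q) (meaningful when q > 1/2)."""
    return sigma * float(norm.ppf(q)) if q > 0.5 else 0.0

q_low = lower_bound_q(successes=985, samples=1000)
print(round(iid_gaussian_radius(q_low, sigma=0.25), 3))
```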
A trivial lower bound can be obtained by counting how many predictions are – according to the base certificates – provably robust to the collective threat model. This can be expressed as∑ n∈T minx′∈Bx I [ x′ ∈ H(n) ] . In the following, we refer to this as the naı̈ve collective certificate. Thanks to our proposed localized smoothing scheme, we can use the following, tighter bound: min x′∈Bx ∑ n∈T I [fn(x ′) = yn] ≥ min x′∈Bx ∑ n∈T I [ x′ ∈ H(n) ] , (1) which preserves the fact that the adversary has to choose a single perturbed input. Because we use different non-i.i.d. smoothing distributions for different outputs, we provably know that each fn has varying levels of robustness for different parts of the input and that these robustness levels differ among outputs. Thus, in the r.h.s. problem the adversary has to allocate their limited budget across various input dimensions and may be unable to attack all predictions at once, just like when attacking the classifier in the l.h.s. objective (recall Section 1). This makes our collective certificate stronger than the naı̈ve collective certificate, which allows each prediction to be attacked independently. As stated in Section 1, the idea of combining base certificates into stronger collective certificates has already been explored by Schuchardt et al. (2021). But instead of using localized smoothing to capture the (soft) locality of a model, their approach leverages the fact that perturbations outside an output’s receptive field can be ignored. For softly local models, which have receptive fields covering the entire input, their certificate is no better than the naı̈ve certificate. Another novel insight underlying our approach is that various non-i.i.d. randomized smoothing certificates share a common interface, which makes our method applicable to diverse data types and perturbation models. In the next section, we formalize this common interface. We then discuss how it allows us to compute the collective certificate from Eq. 1 using (mixed-integer) linear programming. 5 COMMON INTERFACE FOR BASE CERTIFICATES A base certificate for a prediction yn = fn(x) is a set Hn ⊆ XDin of perturbed inputs that yn is provably robust to, i.e ∀x′ ∈ Hn : fn(x′) = yn. Note that base certificates do not have to be exact, but have to be sound, i.e. they do not have to specify all inputs to which the fn are robust but they must not contain any adversarial examples. As a common interface for base certificates, we propose that the sets Hn are parameterized by a weight vector w(n) ∈ RDin and a scalar η(n) that define a linear constraint on the element-wise distance between perturbed inputs and the clean input: H(n) = { x′ ∈ XDin ∣∣∣∣∣ Din∑ d=1 w (n) d · |x ′ d − xd|κ < η(n) } . (2) The weight vector encodes how robust yn is to perturbations of different components of the input. The scalar κ is important for collective robustness certification, because it encodes which collective perturbation model the base certificate is compatible with. For example, κ = 2 means that the base certificate can be used for certifying collective robustness to l2 perturbations. In the following, we present two base certificates implementing our interface: One for l2 perturbations of continuous data and one for perturbations of binary data. In Section B, we further present a certificate for binary data that can distinguish between adding and deleting bits and a certificate for l1 perturbations of continuous data. 
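As a minimal illustration of the interface in Eq. 2, the following sketch checks whether a perturbed input lies in the certified set H^(n), given the parameters (w, eta, kappa) supplied by a base certificate. The function name and toy values are illustrative.

```python
import numpy as np

def in_certified_region(x, x_prime, w, eta, kappa):
    """Eq. 2: the prediction is certifiably robust to x' if
    sum_d w_d * |x'_d - x_d|^kappa < eta. For kappa = 0 we use the convention
    0^0 = 0, so only entries that actually change contribute."""
    delta = np.abs(np.asarray(x_prime, dtype=float) - np.asarray(x, dtype=float))
    dist = (delta > 0).astype(float) if kappa == 0 else delta ** kappa
    return float(np.dot(np.asarray(w, dtype=float), dist)) < eta

# Toy usage with an l2-type certificate (kappa = 2):
x = np.zeros(4)
x_prime = np.array([0.3, 0.0, -0.2, 0.0])
print(in_certified_region(x, x_prime, w=np.ones(4), eta=0.5, kappa=2))   # 0.13 < 0.5 -> True
```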
All base certificates guarantee more robustness for parts of the input smoothed with a higher noise level. The certificates for continuous data are based on known results (Fischer et al., 2020; Eiras et al., 2021) and merely reformulated to match our proposed interface, so that they can be used as part of our collective certification procedure. The certificates for discrete data however are original and based on the novel concept of variance smoothing. Gaussian smoothing for l2 perturbations of continuous data The first base certificate is a generalization of Gaussian smoothing to anisotropic noise, a corollary of Theorem A.1 from (Fischer et al., 2020). In the following, diag(z) refers to a diagonal matrix with diagonal entries z and Φ−1 : [0, 1]→ R refers to the the standard normal inverse cumulative distribution function. Proposition 1. Given an output gn : RDin → Y, let fn(x) = argmaxy∈Y Prz∼N (x,Σ) [gn(z) = y] be the corresponding smoothed output with Σ = diag (σ)2 andσ ∈ RDin+ . Given an inputx ∈ RDin and smoothed prediction yn = fn(x), let q = Prz∼N (x,Σ) [gn(z) = yn]. Then, ∀x′ ∈ H(n) : fn(x ′) = yn with H(n) defined as in Eq. 2, wd = 1σd2 , η = ( Φ(−1)(q) )2 and κ = 2. Bernoulli variance smoothing for perturbations of binary data For binary data, we use a smoothing distribution F(x,θ) with θ ∈ [0, 1]Din that independently flips the d’th bit with probability θd, i.e. for x, z ∈ {0, 1}Din and z ∼ F(x,θ) we have Pr[zd 6= xd] = θd. A corresponding certificate could be derived by generalizing (Lee et al., 2019), which considers a single shared θ ∈ [0, 1] with ∀d : θd = θ. However, the cost for computing this certificate would be exponential in the number of unique values in θ. We therefore propose a more efficient alternative. Instead of constructing a smoothed classifier that returns the most likely labels of the base classifier (as discussed in Section 2), we construct a smoothed classifier that returns the labels with the highest expected softmax scores (similar to CDF-smoothing (Kumar et al., 2020)). For this smoothed model, we can compute a robustness certificate in constant time. The certificate requires determining both the expected value and variance of softmax scores. We therefore call this method variance smoothing. While we use it for binary data, it is a general-purpose technique that can be applied to arbitrary domains and smoothing distributions (see discussion in Section B.2). In the following, we assume the label set Y to consist of numerical labels {1, . . . , |Y|}, which simplifies our notation. Theorem 1. Given an output gn : {0, 1}Din → ∆|Y| mapping to scores from the |Y|-dimensional probability simplex, let fn(x) = argmaxy∈YEz∼F(x,θ) [gn(z)y] be the corresponding smoothed classifier with θ ∈ [0, 1]Din . Given an input x ∈ {0, 1}Din and smoothed prediction yn = fn(x), let µ = Ez∼F(x,θ) [gn(z)y] and σ2 = Varz∼F(x,θ) [gn(z)y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = ln ( (1−θd)2 θd + (θd) 2 1−θd ) , η = ln ( 1 + 1σ2 ( µ− 12 )2) and κ = 0. 6 COMPUTING THE COLLECTIVE ROBUSTNESS CERTIFICATE With our common interface for base certificates in place, we can discuss how to compute the collective robustness certificate minx′∈Bx ∑ n∈T I [ x′ ∈ H(n) ] from Eq. 1. The result bounds the number of predictions yn with n ∈ {1, . . . , Dout} that can be simultaneously attacked by the adversary. 
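The two base certificates above reduce to closed-form parameters. The sketch below computes (w, eta, kappa) for Proposition 1 and Theorem 1 from the estimated quantities q, mu and sigma^2; in practice these quantities come from Monte Carlo sampling with appropriate confidence bounds (Section C), which is omitted here for brevity.

```python
import numpy as np
from scipy.stats import norm

def gaussian_base_certificate(sigma, q):
    """Proposition 1: w_d = 1 / sigma_d^2, eta = (Phi^{-1}(q))^2, kappa = 2
    (meaningful when q > 1/2)."""
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    eta = float(norm.ppf(q)) ** 2
    return w, eta, 2

def bernoulli_variance_certificate(theta, mu, var):
    """Theorem 1: w_d = ln((1 - theta_d)^2 / theta_d + theta_d^2 / (1 - theta_d)),
    eta = ln(1 + (mu - 1/2)^2 / var), kappa = 0 (meaningful when mu > 1/2)."""
    theta = np.asarray(theta, dtype=float)
    w = np.log((1 - theta) ** 2 / theta + theta ** 2 / (1 - theta))
    eta = float(np.log(1 + (mu - 0.5) ** 2 / var))
    return w, eta, 0

w2, eta2, _ = gaussian_base_certificate(sigma=[0.25, 0.25, 1.5], q=0.99)
w0, eta0, _ = bernoulli_variance_certificate(theta=[0.1, 0.3], mu=0.9, var=0.01)
print(np.round(w2, 2), round(eta2, 3), np.round(w0, 2), round(eta0, 3))
```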
In the following, we assume that the base certificates were obtained by using a smoothing distribution that is compatible with our lp collective perturbation model (i.e. κ = p), for example by using Gaussian noise for p = 2 or Bernoulli noise for p = 0. Inserting the definition of our base certificate interface from Eq. 2 and rewriting our perturbation model Bx = { x′ ∈ XDin | ||x′ − x||p ≤ } as{ x′ ∈ XDin | ∑Din d=1 |x′d − xd|p ≤ p } , our objective from Eq. 1 can be expressed as min x′∈XDin ∑ n∈T I [ Din∑ d=1 w (n) d · |x ′ d − xd|p < η(n) ] s.t. Din∑ d=1 |x′d − xd|p ≤ p. (3) We can see that the perturbed inputx′ only affects the element-wise distances |x′d−xd|p. Rather than optimizing x′, we can instead directly optimize these distances, i.e. determine how much adversarial budget is allocated to each input dimension. For this, we define a vector of variables b ∈ RDin+ (or b ∈ {0, 1}Din for binary data). Replacing sums with inner products, we can restate Eq. 3 as min b∈RDin+ ∑ n∈T I [ bTw(n) < η(n) ] s.t. sum{b} ≤ p. (4) In a final step, we replace the indicator functions in Eq. 4 with a vector of boolean variables t ∈ {0, 1}Dout . Define the constants η(n) = p ·min ( 0,mind w (n) d ) . Then, min b∈RDin+ ,t∈{0,1}Dout ∑ n∈T tn s.t. ∀n : bTw(n) ≥ tnη(n) + (1− tn)η(n), sum{b} ≤ p. (5) is equivalent to Eq. 4. The first constraint guarantees that tn can only be set to 0 if the l.h.s. is greater or equal η(n), i.e. only when the base certificate can no longer guarantee robustness. The term involving η(n) ensures that for tn = 1 the problem is always feasible2. Eq. 5 can be solved using any mixed-integer linear programming solver. While the resulting MILP bears some semblance to that of Schuchardt et al. (2021), it is conceptually different. When evaluating their base certificates, they mask out parts of the budget vector b based on a model’s strict locality, while we weigh the budget vector based on the soft locality guaranteed by the base certificates. In addition, thanks to the interface specified in Section 5, our problem only involves a single linear constraint per prediction, making it much smaller and more efficient to solve. Interestingly, when using randomized smoothing base certificates for binary data, our certificate subsumes theirs, i.e. can provide the same robustness guarantees (see Section D.2). Improving efficiency. Still, the efficiency of our certificate in Eq. 5. certificate can be further improved. In Section A, we show that partitioning the outputs into Nout subsets sharing the same smoothing distribution and the the inputs into Nin subsets sharing the same noise level (for example like in Fig. 1), as well as quantizing the base certificate parameters η(n) into Nbin bins reduces the number of variables and constraints from Din + Dout and Dout + 1 to Nin + Nout · Nbins and Nout · Nbins + 1, respectively.We can thus control the problem size independent of the data’s dimensionality. We further derive a linear relaxation of the mixed-integer problem, which can be more efficiently solved while preserving the soundness of the certificate. 7 LIMITATIONS The main limitation of our approach is that it assumes softly local models. While it can be applied to arbitrary multi-output classifiers, it may not necessarily result in better certificates than randomized smoothing with i.i.d. distributions. Furthermore, choosing the smoothing distributions requires some a-priori knowledge or assumptions about which parts of the input are how relevant to making a prediction. 
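To make Eq. 5 concrete before turning to limitations and experiments, here is a minimal sketch of its linear relaxation (the variant from Section A.4 that is also used in the experiments), solved with SciPy's linprog on toy inputs. Relaxing t to [0, 1] yields a valid, if more pessimistic, lower bound on the number of simultaneously robust predictions; all problem sizes and values below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_collective_certificate(W, eta, eps_p):
    """Linear relaxation of Eq. 5. W[n] is the base-certificate weight vector w^(n),
    eta[n] the parameter eta^(n), and eps_p the collective budget epsilon^p. Returns a
    (possibly fractional) lower bound on the number of simultaneously robust predictions."""
    W, eta = np.asarray(W, dtype=float), np.asarray(eta, dtype=float)
    n_out, d_in = W.shape
    eta_low = eps_p * np.minimum(0.0, W.min(axis=1))            # feasibility constants underline(eta)^(n)
    # Variables z = [b (d_in entries), t (n_out entries)]; objective: minimize sum_n t_n.
    c = np.concatenate([np.zeros(d_in), np.ones(n_out)])
    A_ub = np.zeros((n_out + 1, d_in + n_out))
    b_ub = np.zeros(n_out + 1)
    for n in range(n_out):
        # b^T w^(n) >= t_n * eta_low[n] + (1 - t_n) * eta[n], rewritten as A_ub z <= b_ub.
        A_ub[n, :d_in] = -W[n]
        A_ub[n, d_in + n] = eta_low[n] - eta[n]
        b_ub[n] = -eta[n]
    A_ub[-1, :d_in] = 1.0                                       # budget constraint: sum(b) <= eps^p
    b_ub[-1] = eps_p
    bounds = [(0, None)] * d_in + [(0, 1)] * n_out              # b >= 0, t relaxed to [0, 1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# Toy example: 3 predictions, 4 input dimensions, l2 budget eps = 1 (so eps^p = 1).
W = np.array([[4.0, 4.0, 1.0, 1.0],
              [1.0, 1.0, 4.0, 4.0],
              [2.0, 2.0, 2.0, 2.0]])
eta = np.array([3.0, 3.0, 3.0])
print(relaxed_collective_certificate(W, eta, eps_p=1.0))        # approximately 1.0 robust predictions
```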
Our experiments show that natural assumptions like homophily can be sufficient for choosing effective smoothing distributions. But doing so in other tasks may be more challenging. A limitation of (most) randomized smoothing certificates is that they use sampling to approximate the smoothed classifier. Because we use different smoothing distributions for different outputs, we can only use a fraction of the samples for each output. As discussed in Section A.1, we can alleviate this problem by sharing smoothing distributions among multiple outputs. Our experiments show that despite this issue, our method outperforms certificates that use a single smoothing distribution. Still, future work should try to improve the sample efficiency of randomized smoothing (for example by developing more methods for de-randomized smoothing (Levine & Feizi, 2020)).Any such advance could then be incorporated into our localized smoothing framework. 8 EXPERIMENTAL EVALUATION Our experimental evaluation has three objectives 1.) Verifying our main claim that localized randomized smoothing offers a better trade-off between accuracy and certifiable robustness than smoothing 2Because η(n) is the smallest value bTw(n) can take on, i.e. min b∈RDin+ bTw (n) d s.t. sum{b} ≤ p. with i.i.d. distributions. 2.) Determining to what extend the linear program underlying the proposed collective certificate strengthens our robustness guarantees. 3.) Assessing the efficacy of our novel variance smoothing certificate for binary data. Any of the used datasets and classifiers only serve as a means of comparing certificates. We thus use well-known and well-established architectures instead of overly focusing on maximizing prediction accuracy by using the latest SOTA models. We use two metrics to quantify certificate strength: Certified accuracy (i.e. the percentage of correct and certifiably robust predictions) and certified ratio (i.e. the percentage of certifiably robust predictions, regardless of correctness)3. As single-number metrics, we report the AUC of the certified accuracy/ratio functions w.r.t. adversarial budget (not to be confused with certifying some AUC metric). For localized smoothing, we evaluate both the naı̈ve collective certificate, i.e. certifying predictions independently (see Section 4), and the proposed LP-based certificate (using the linearly relaxed version from Appendix A.4). We compare our method to two baselines using i.i.d. randomized smoothing: The naı̈ve collective certificate and center smoothing (Kumar & Goldstein, 2021). For softly local models, the certificate of Schuchardt et al. (2021) is equivalent to the naı̈ve baseline. When used to certify the number of robust predictions, the segmentation certificate of Fischer et al. (2021) is at most as strong as the naı̈ve baseline (see Section C.4). Thus, our method is compared to all existing collective certificates listed in Section 2. In all experiments, we use Monte Carlo randomized smoothing. More details on the experimental setup can be found in Section E. 8.1 SEMANTIC SEGMENTATION Dataset and model. We evaluate our certificate for continuous data and l2 perturbations on the Pascal-VOC 2012 segmentation validation set. Training is performed on 10582 pairs of training samples extracted from SBD4 (Hariharan et al., 2011), To increase batch sizes and thus allow a more thorough investigation of different smoothing parameters, all images are downscaled to 50% of their original size. Our base model is a U-Net segmentation model with a ResNet-18 backbone. 
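As a small illustration of the two evaluation metrics, the sketch below turns per-prediction certified radii into certified accuracy and certified ratio curves over a budget grid and reports their AUC. The per-image averaging and the exact budget grids used in the paper (Section E) are only approximated here, and all values are toy examples.

```python
import numpy as np

def certificate_curves(radii, correct, budgets):
    """radii[n]: largest budget for which prediction n is certifiably robust
    (0 if not certifiable); correct[n]: whether prediction n is correct."""
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    cert_ratio = np.array([(radii >= eps).mean() for eps in budgets])
    cert_acc = np.array([((radii >= eps) & correct).mean() for eps in budgets])
    return cert_acc, cert_ratio

def curve_auc(values, budgets):
    """Area under a certificate curve w.r.t. the adversarial budget (trapezoidal rule)."""
    values, budgets = np.asarray(values, dtype=float), np.asarray(budgets, dtype=float)
    return float(np.sum(0.5 * (values[1:] + values[:-1]) * np.diff(budgets)))

budgets = np.linspace(0.0, 2.0, 41)
radii = [0.0, 0.4, 1.2, 1.8]
correct = [1, 1, 0, 1]
acc, ratio = certificate_curves(radii, correct, budgets)
print(round(curve_auc(acc, budgets), 3), round(curve_auc(ratio, budgets), 3))
```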
To obtain accurate and robust smoothed classifiers, base models should be trained on the smoothing distribution. We thus train 51 different instances of our base model, augmenting the training data with a different σtrain ∈ {0, 0.01, . . . , 0.5}. At test time, when evaluating a baseline i.i.d. certificate with smoothing distribution N (0, σ), we load the model trained with σtrain = σ. To perform localized randomized smoothing, we choose parameters σmin, σmax ∈ R+ and partition all images into regular grids of size 4 × 6 (similar to example Fig. 1). To classify pixels in grid cell (i, j), we sample noise for grid cell (k, l) using N (0, σ′), with σ′ ∈ [σmin, σmax] chosen proportional to the distance of (i, j) and (k, l) (more details in Section E.2.1). As the base model, we load the one trained with σtrain = σmin. Using the same distribution at train and test time for the i.i.d. baselines but not for localized smoothing is meant to skew the results in the baseline’s favor. But, in Section E.2.3, we also repeat our experiments using the same base model for i.i.d. and localized smoothing. Evaluation. The main goal of our experiments on segmentation is to verify that localized smoothing can offer a better trade-off between accuracy and certifiable robustness. That is, for all or most σ, there are σmin, σmax such that the locally smoothed model has higher accuracy and certifiable collective robustness than i.i.d. smoothing baselines using N (0, σ). Because σ, σmin, σmax ∈ R+, we can not evaluate all possible combinations. We therefore use the following scheme: We focus on the case σ ∈ [0, 0.5], which covers all distributions used in (Kumar & Goldstein, 2021) and 3In the case of image segmentation, we compute these metrics per image and then average over the dataset. 4Also known as ”Pascal trainaug” (Fischer et al., 2021). First, we evaluate our two baselines for five σ ∈ {0.1, 0.2, 0.3, 0.4, 0.5}. This results in baseline models with diverse levels of accuracy and robustness (e.g. the accuracy of the naı̈ve baseline shrinks from 87.7% to 64.9% and the AUC of its certified accuracy grows from 0.17 to 0.644). We then test whether, for each of the σ, we can find σmin, σmax such that the locally smoothed models attains higher accuracy and is certifiably more robust. Finally, to verify that {0.1, 0.2, 0.3, 0.4, 0.5} were not just a particularly poor choice of baseline parameters, we fix the chosen σmin, σmax. We then perform a fine-grained search over σ ∈ [0, 0.5] with resolution 0.01 to find a baseline model that has at least the same accuracy and certifiable robustness (as measured by certificate AUC) as any of the fixed locally smoothed models. If this is not possible, this provides strong evidence that the proposed smoothing scheme and certificate indeed offer a better trade-off. Fig. 2 shows one example. For σ = 0.4, the naı̈ve i.i.d. baseline has an accuracy of 72.5%. With σmin = 0.25, σmax = 1.5, the proposed localized smoothing certificate yields both a higher accuracy of 76.4% and a higher certified accuracy for all . It can certify robustness for up to 1.825, compared to 1.45 of the baseline and the AUC of its certified accuracy curve is 43.1% larger. Fig. 2 also highlights the usefulness of the linear program we derived in Section 5: Evaluating the localized smoothing base certificates independently, i.e. computing the naı̈ve collective certificate (dotted orange line), is not sufficient for outperforming the baseline. 
But combining them via the proposed linear program drastically increases the certified accuracy The results for all other combinations of smoothing distribution parameters, both baselines and both metrics of certificate strength can be found in Section E.2.3. Tables 1 and 2 summarize the first part of our evaluation procedure, in which we optimize the localized smoothing parameters. Safe for one exception (with σ = 0.2, center smoothing has a lower accuracy, but slightly larger certified ratio), the locally smoothed models have the same or higher accuracy, but provide stronger robustness guarantees. The difference is particularly large for σ ∈ {0.3, 0.4, 0.5}, where the accuracy of models smoothed with i.i.d. noise drops off, while our localized smoothing distribution preserves the most relevant parts of the image to allow for high accuracy. Table 5 summarizes the second part of our evaluation scheme, in which we perform a fine-grained search over [0, 0.5]. We find that there is no σ such that either of the i.i.d. baselines can outperform any of the chosen locally smoothed models w.r.t. AUC of their certified accuracy or certified ratio curves. This is ample evidence for our claim that localized smoothing offers a better trade-off than i.i.d. smoothing. Also, the collective LPs caused little computational overhead (avg. 0.68 s per LP, more details in Section E.2.3). 8.2 NODE CLASSIFICATION Dataset and model. We evaluate our certificate for binary data on the Cora-ML node classification dataset. We use two different base-models: Approximate Personalized Propagation of Neural Predictions (APPNP) (Klicpera et al., 2019) and a 6-layer Graph Convolutional network (GCN) (Kipf & Welling, 2017). Both models have a receptive field that covers most or all of the graph, meaning they are softly local. For details on model and training parameters, see Section E.3.1. As center smoothing has only been derived for Gaussian smoothing, we only compare to the naı̈ve baseline. For both, the baseline and our localized smoothing certificate, we use sparsity-aware randomized smoothing (Bojchevski et al., 2020) , i.e. flip 1-bits and 0-bits with different probabilities (θ− and θ+, respectively), which allows us to certify different levels of robustness to deletions and additions of bits. With localized randomized smoothing, we use the variance smoothing base certificate derived in Section B.2.2. We choose the distribution parameters for localized smoothing based on an assumption of homophily, i.e. nearby nodes are most relevant for classifying a node. We partition the graph into 5 clusters and define parameters θ±min and θ ± max. When classifying a node in cluster i, we randomly smooth attributes in cluster j with θ+ij , θ − ij that are based on linearly interpolating in [θ−min, θ − max] and [θ − min, θ − max] based on the affinity of the clusters (details in Section E.3.1). Evaluation. We first evaluate the new variance-based certificate and compare it to the certificate derived by Bojchevski et al. (2020). For this, we use only one cluster, meaning we use the same smoothing distribution for both. Fig. 11 in Section E.3 shows that the variance certificate is weaker than the baseline for additions, but better for deletions. It appears sufficiently effective to be used as a base certificate and integrated into a stronger, collective certificate. The parameter space of our smoothing distributions is large. 
For the localized approach we have four continuous parameters, as we have to specify both the minimal and maximal noise values. Therefore, it is difficult to show that our approach achieves a better accuracy-robustness trade-off over the whole noise space. However, we can investigate the accuracy-robustness trade-off within some areas of this space. For the localized approach we choose a few fixed combinations of the noise parameters θ±min and θ±max. To show our claim, we then optimise the baselines with parameters in an interval around our θ+min and θ − min. This is a smaller space, as the baselines only have two parameters. We select the baseline whose certified accuracy curve has the largest AUC. We perform the search for the best baseline for the addition and deletion scenario independently, i.e., the best baseline model for addition and deletion does not have to be the same. In Fig. 3, we see the certified accuracy of an APPNP model for a varying number of attribute additions and deletions (left and right respectively). To find the best distribution parameters for the baselines, we evaluated combinations of θ+ ∈ {0.04, 0.055, 0.07} and θ− ∈ [0.1, . . . , 0.827], using 11 equally spaced values for the interval. For adversarial additions, the best baseline yields a certified accuracy curve with an AUC of 4.51 compared to our 5.65. The best baseline for deletions has an AUC of 7.76 compared to our 16.26. Our method outperforms these optimized baselines for most adversarial budgets, while maintaining the same clean accuracy (i.e. certified accuracy at = 0). Experiments with different noise parameters and classifiers can be found in Section E.3. In general, we find that we significantly outperform the baseline when certifying robustness to deletions, but often have weaker certificates for additions (which may be inherent to the variance smoothing base certificates). Due to the large continuous parameter space, we cannot claim that localized smoothing outperforms the naı̈ve baseline everywhere. However, our results show that, for the tested parameter regions, localized smoothing can provide a significantly better accuracy-robustness trade-off. We found that using the collective LP instead of naı̈vely combining the base certificates can result in much stronger certificates: The AUC of the certified accuracy curve (averaged over all experiments) increased by 38.8% and 33.6% for addition and deletion, respectively. The collective LPs caused little computational overhead (avg. 10.9 s per LP, more details in Section E.3.3). 9 CONCLUSION In this work, we have proposed the first collective robustness certificate for softly local multi-output classifiers. It is based on localized randomized smoothing, i.e. randomly smoothing different outputs using different non-i.i.d. smoothing distributions matching the model’s locality. We have shown how per-output certificates based on localized smoothing can be computed and that they share a common interface. This interface allows them to be combined into a strong collective robustness certificate. Experiments on image segmentation and node classification tasks demonstrate that localized smoothing can offer a better robustness-accuracy trade-off than existing randomized smoothing techniques. Our results show that locality is linked to robustness, which suggests the research direction of building more effective local models to robustly solve multi-output tasks. 
10 REPRODUCIBILITY STATEMENT We prove all theoretic results that were not already derived in the main text in Appendices A to C. To ensure reproducibility of the experimental results we provide detailed descriptions of the evaluation process with the respective parameters in Section E.2 and Section E.3. Code will be made available to reviewers via an anonymous link posted on OpenReview, as suggested by the guidelines. 11 ETHICS STATEMENT In this paper, we propose a method to increase the robustness of machine learning models against adversarial perturbations and to certify their robustness. We see this as an important step towards general usage of models in practice, as many existing methods are brittle to crafted attacks. Through the proposed method, we hope to contribute to the safe usage of machine learning. However, robust models also have to be seen with caution. As they are harder to fool, harmful purposes like mass surveillance are harder to avoid. We believe that it is still necessary to further research robustness of machine learning models as the positive effects can outweigh the negatives, but it is necessary to discuss the ethical implications of the usage in any specific application area. A.1 SHARING SMOOTHING DISTRIBUTIONS AMONG OUTPUTS In principle, our proposed certificate allows a different smoothing distribution Ψ(n) to be used per output gn of our base model. In practice, where we have to estimate properties of the smoothed classifier using Monte Carlo methods, this is problematic: Samples cannot be re-used, each of the many outputs requires its own round of sampling. We can increase the efficiency of our localized smoothing approach by partitioning our Dout outputs into Nout subsets that share the same smoothing distribution. When making smoothed predictions or computing base certificates, we can then reuse the same samples for all outputs within each subsets. More formally, we partition our Dout output dimensions into sets K(1), . . . ,K(Nout) with⋃̇Nout i=1 K(i) = {1, . . . , Dout}. (6) We then associate each set K(i) with a smoothing distribution Ψ(i). For each base model output gn with n ∈ K(i), we then use smoothing distribution Ψ(i) to construct the smoothed output fn, e.g. fn(x) = argmaxy∈Y Prz∼Ψ(i) [f(x+ z) = y] (note that we use a different smoothing paradigm for binary data, see Section 5). A.2 QUANTIZING CERTIFICATE PARAMETERS Recall that our base certificates from Section 5 are defined by a linear inequality: A prediction yn = fn(x) is robust to a perturbed input x′ ∈ XDin if ∑D d=1 w (n) d · |x′d − xd| p < η(n), for some p ≥ 0. The weight vectors w(n) ∈ RDin only depend on the smoothing distributions. A side of effect of sharing the same smoothing Ψ(i) among all outputs from a set K(i), as discussed in the previous section, is that the outputs also share the same weight vector w(i) ∈ RDin with ∀n ∈ K(i) : w(i) = w(n). Thus, for all smoothed outputs fn with n ∈ K(i), the smoothed prediction yn is robust if ∑D d=1 w (i) d · |x′d − xd| p < η(n). Evidently, the base certificates for outputs from a set K(i) only differ in their parameter η(n). Recall that in our collective linear program we use a vector of variables t ∈ {0, 1}Dout to indicate which predictions are robust according to their base certificates (see Section 6). If there are two outputs fn and fm with η(n) = η(m), then fn and fm have the same base certificate and their robustness can be modelled by the same indicator variable. 
Conversely, for each set of outputs K(i), we only need one indicator variable per unique η(n). By quantizing the η(n) within each subset K(i) (for example by defining equally sized bins between minn∈K(i) η(n) and maxn∈K(i) η(n) ), we can ensure that there is always a fixed number Nbins of indicator variables per subset. This way, we can reduce the number of indicator variables from Dout to Nout ·Nbins. To implement this idea, we define matrix of thresholds E ∈ RNout×Nbins with ∀i : min {Ei,:} ≤ minn∈K(i) ({ η(n) | n ∈ K(i) }) . We then define a function ξ : {1, . . . , Nout} × R→ R with ξ(i, η) = max ({Ei,j | j ∈ {1, . . . , Nbins ∧ Ei,j < η}) (7) that quantizes base certificate parameter η from output subset K(i) by mapping it to the next smallest threshold in Ei,:. For feasibility, like in Section 6 we need to compute the constant η(i) = min b∈RDin+ bTw (i) d s.t. sum{b} ≤ p to ensure feasibility of the problem. Note that, be- cause all outputs from a subset K(i) share the same weight vector w(i), we only have to compute this constant once per subset. We can bound the collective robustness of the targeted dimensions T of our vector of predictions y = f(x) as follows: min ∑ i∈{1,...,Nout} ∑ j∈{1,...,Nbins} Ti,j ∣∣∣{n ∈ T ∩K(i) ∣∣∣ξ (i, η(n)) = Ei,j }∣∣∣ (8) s.t. ∀i, j : bTw(i) ≥ Ti,jη(i) + (1− Ti,j)Ei,j , sum{b} ≤ p (9) b ∈ RDin+ , T ∈ {0, 1}Nout×Nbins . (10) Constraint Eq. 9 ensures that Ti,j is only set to 0 if bTw(i) ≥ Ei,j , i.e. all predictions from subset K(i) whose base certificate parameter η(n) is quantized to Ei,j are no longer robust. When this is the case, the objective function decreases by the number of these predictions. For Nout = Dout, Nbins = 1 and En,1 = η(n), we recover our general certificate from Section 6. Note that, if the quantization maps any parameter η(n) to a smaller number, the set H(n) becomes more restrictive, i.e. yn is considered robust to a smaller set of perturbed inputs. Thus, Eq. 8 is a lower bound on our general certificate from Section 6. A.3 SHARING NOISE LEVELS AMONG INPUTS Similar to how partitioning the output dimensions allows us to control the number of output variables t, partitioning the input dimensions and using the same noise level within each partition allows us to control the number of variables b that model the allocation of adversarial budget. Assume that we have partitioned our output dimensions into Nout subsets K(1), . . . ,K(Nout , with outputs in each subset sharing the same smoothing distribution Ψ(i), as explained in Section A.1. Let us now define Nin input subsets J(1), . . . , J(Nin) with⋃̇Nout i=1 J(i) = {1, . . . , Dout}. (11) Recall that a prediction yn = fn(x) with n ∈ K(i) is robust to a perturbed input x′ ∈ XDin if ∑D d=1 w (i) d · |x′d − xd| p < η(n) and that the weight vectors w(i) only depend on the smoothing distributions. Assume that we choose each smoothing distribution Ψ(i) such that ∀l ∈ {1, . . . , Nin},∀d, d′ ∈ J(l) : w(i)d = w (i) d′ , i.e. all input dimensions within each set J(l) have the same weight. This can be achieved by choosing Ψ(i) so that all dimensions in each input subset Jl are smoothed with the noise level (note that we can still use different Ψ(i), i.e. different noise levels for smoothing different sets of outputs). For example, one could use a Gaussian distribution with covariance matrix Σ = diag (σ)2 with ∀l ∈ {1, . . . , Nin},∀d, d′ ∈ J(l) : σd = σd′ . In this case, the evaluation of our base certificates can be simplified. 
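Before the derivation of Section A.3 is completed below, a quick sketch of the quantization step ξ(i, η) from Eq. 7: each base-certificate parameter η is mapped to the largest shared threshold strictly below it, with equally spaced thresholds as suggested in the text. The helper name and bin placement are illustrative assumptions.

```python
import numpy as np

def quantize_etas(etas, n_bins):
    """Eq. 7: map each eta^(n) within an output subset to the largest shared
    threshold strictly below it, using equally spaced thresholds."""
    etas = np.asarray(etas, dtype=float)
    lo = etas.min() - 1e-9                        # smallest threshold slightly below min(eta)
    thresholds = np.linspace(lo, etas.max(), n_bins)
    idx = np.searchsorted(thresholds, etas, side="left") - 1   # largest threshold < eta
    return thresholds[idx], thresholds

quantized, E = quantize_etas([0.3, 0.32, 1.1, 2.5], n_bins=4)
print(np.round(quantized, 3))                     # each eta rounded down to one of 4 shared thresholds
```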
Prediction yn = fn(x) is robust to a perturbed input x′ ∈ XDin if D∑ d=1 w (i) d · |x ′ d − xd| p < η(n) (12) = Nin∑ l=1 u(i) · ∑ d∈J(l) |x′d − xd| p < η(n), (13) with u ∈ RNin and ∀i ∈ {1, . . . , Nout},∀l ∈ {1, . . . , Nin},∀d ∈ J : uil = wid. That is, we can replace each weight vector w(i) that has one weight w(i)d per input dimension d with a smaller weight vector u(i) with one weight u(i)l per input subset J(l). For our linear program, this means that we no longer need a budget vector b ∈ RDin+ to model the element-wise distance |x′d − xd| p in each dimension d. Instead, we can use a smaller budget vector b ∈ RNin+ to model the overall distance within each input subset J(l), i.e. ∑ d∈J(l) |x′d − xd| p. Combined with the quantization of certificate parameters from the previous section, our optimization problem becomes min ∑ i∈{1,...,Nout} ∑ j∈{1,...,Nbins} Ti,j ∣∣∣{n ∈ T ∩K(i) ∣∣∣ξ (i, η(n)) = Ei,j }∣∣∣ (14) s.t. ∀i, j : bTu(i) ≥ Ti,jη(i) + (1− Ti,j)Ei,j , sum{b} ≤ p, (15) b ∈ RNin+ , T ∈ {0, 1}Nout×Nbins . (16) with u ∈ RNin and ∀i ∈ {1, . . . , Nout},∀l ∈ {1, . . . , Nin},∀d ∈ J : ωil = wid. For Nout = Dout, Nin = Din, Nbins = 1 and En,1 = η(n), we recover our general certificate from Section 6. When certifying robustness for binary data, we impose different constraints on b. To model that the adversary can not flip more bits than are present within each subset, we use a budget vector b ∈ NNin0 with ∀l ∈ {1, . . . , Nin} : bl ≤ ∣∣J(l)∣∣, instead of a continuous budget vector b ∈ RNin+ . A.4 LINEAR RELAXATION Combining the previous steps allows us to reduce the number of problem variables and linear constraints from Din + Dout and Dout + 1 to Nin + Nout · Nbins and Nout · Nbins + 1, respectively. Still, finding an optimal solution to the mixed-integer linear program may be too expensive. One can obtain a lower bound on the optimal value and thus a valid, albeit more pessimistic, robustness certificate by relaxing all to be continuous. When using the general certificate from Section 6, the binary vector t ∈ {0, 1}Dout can be relaxed to t ∈ [0, 1]Dout . When using the certificate with quantized base certificate parameters from Section A.2 or Section A.3, the binary matrix T ∈ [0, 1]Nout×Nbins can be relaxed to T ∈ [0, 1]Nout×Nbins . Conceptually, this means that predictions can be partially certified, i.e. tn ∈ (0, 1) or Ti,j ∈ (0, 1). In particular, a prediction can be partially certified even if we know that is impossible to attack under the collective perturbation model Bx = { x′ ∈ XDin | ||x′ − x||p ≤ } . Just like Schuchardt et al. (2021), who encountered the same problem with their collective certificate, we circumvent this issue by first computing a set L ⊆ T of all targeted predictions in T that are guaranteed to always be robust: L = { n ∈ T ∣∣∣∣∣ ( max x∈Bx D∑ d=1 w (n) d · |x ′ d − xd| p ) < η(n) } (17) = { n ∈ T ∣∣∣max(max{w(n)} · p, 0) < η(n)} . (18) The equality follows from the fact that the most effective way of attacking a prediction is to allocate all adversarial budget to the least robust dimension, i.e. the dimension with the largest weight – unless all weights are negative. Because we know that all predictions with indices in L are robust, we do not have to include them in the collective optimization problem and can instead compute |L|+ min x′∈Bx ∑ n∈T\L I [ x′ ∈ H(n) ] . (19) The r.h.s. optimization can be solved using the general collective certificate from Section 6 or any of the more efficient, modified certificates from previous sections. 
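To illustrate how the relaxed collective certificate could be evaluated in practice, here is a small self-contained sketch (our own illustrative code, not the authors' implementation, and assuming SciPy's HiGHS-backed linprog is available). It first removes the always-robust predictions as in Eq. 17–19 and then solves the linear relaxation of the general problem from Section 6 with continuous budget variables; `weights`, `etas`, `eps` and `p` are hypothetical inputs holding the base-certificate parameters and the collective perturbation model.

import numpy as np
from scipy.optimize import linprog

def relaxed_collective_certificate(weights, etas, eps, p=2):
    """Lower-bound the number of simultaneously robust predictions.

    weights: (N, D_in) array; row n holds the base-certificate weights w^(n).
    etas:    (N,) array of base-certificate parameters eta^(n).
    eps, p:  radius and exponent of the collective l_p perturbation model.
    """
    weights = np.asarray(weights, dtype=float)
    etas = np.asarray(etas, dtype=float)
    n_out, d_in = weights.shape
    budget = eps ** p

    # Eq. 17/18: predictions that no feasible budget allocation can attack.
    worst_case = np.maximum(weights.max(axis=1) * budget, 0.0)
    always_robust = worst_case < etas
    n_always = int(always_robust.sum())
    w, e = weights[~always_robust], etas[~always_robust]
    if len(e) == 0:
        return n_always

    # eta_lower^(n): smallest value b^T w^(n) can take under sum(b) <= eps^p.
    eta_lower = budget * np.minimum(w.min(axis=1), 0.0)

    n = len(e)
    c = np.concatenate([np.zeros(d_in), np.ones(n)])        # minimize sum(t)
    # constraint per output: -b^T w^(n) + t_n (eta_lower - eta) <= -eta
    a_cert = np.hstack([-w, np.diag(eta_lower - e)])
    # total budget constraint: sum(b) <= eps^p
    a_budget = np.concatenate([np.ones(d_in), np.zeros(n)])[None, :]
    a_ub = np.vstack([a_cert, a_budget])
    b_ub = np.concatenate([-e, [budget]])
    bounds = [(0, None)] * d_in + [(0, 1)] * n               # b >= 0, t in [0, 1]

    res = linprog(c, A_ub=a_ub, b_ub=b_ub, bounds=bounds, method="highs")
    # res.fun lower-bounds the robust predictions among the remaining outputs;
    # adding the pre-filtered ones gives the collective certificate.
    return n_always + res.fun

Because the integral optimum can only be larger than the relaxed one, the returned value may additionally be rounded up to the next integer (modulo numerical tolerance) without losing soundness.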
When using the general collective certificate from Section 6 with binary data, the budget variables b ∈ {0, 1}Din can be relaxed to b ∈ [0, 1]Din . When using the modified collective certificate from Section A.3, the budget variables with b ∈ NNin0 can be relaxed to b ∈ R Nin + . The additional constraint ∀l ∈ {1, . . . , Nin} : bl ≤ ∣∣J(l)∣∣ can be kept in order to model that the adversary cannot flip (or partially flip) more bits than are present within each input subset J(l). B BASE CERTIFICATES In the following, we show why the base certificates presented in Section 5 hold and present alternatives for other collective perturbation models. B.1 GAUSSIAN SMOOTHING FOR l2 PERTURBATIONS OF CONTINUOUS DATA Proposition 1. Given an output gn : RDin → Y, let fn(x) = argmaxy∈Y Prz∼N (x,Σ) [gn(z) = y] be the corresponding smoothed output with Σ = diag (σ)2 andσ ∈ RDin+ . Given an inputx ∈ RDin and smoothed prediction yn = fn(x), let q = Prz∼N (x,Σ) [gn(z) = yn]. Then, ∀x′ ∈ H(n) : fn(x ′) = yn with H(n) defined as in Eq. 2, wd = 1σd2 , η = ( Φ(−1)(q) )2 and κ = 2. Proof. Based on the definition of the base certificate interface, we need to show that, ∀x′ ∈ H : fn(x ′) = yn with H = { x′ ∈ RDin ∣∣∣∣∣ Din∑ d=1 1 σ2d · |xd − x′d|2 < ( Φ−1(q) )2} . (20) Eiras et al. (2021) have shown that under the same conditions as above, but with a general covariance matrix Σ ∈ RDin×Din+ , a prediction yn is certifiably robust to a perturbed input x′ if√ (x− x′)Σ−1(x− x′) < 1 2 ( Φ−1(q)− Φ−1(q′) ) , (21) where q′ = maxy′n 6=yn Prz∼N (x,Σ) [gn(z) = y ′ n] is the probability of the second most likely prediction under the smoothing distribution. Because the probabilities of all possible predictions have to sum up to 1, we have q′ ≤ 1 − q. Since Φ−1 is monotonically increasing, we can obtain a lower bound on the r.h.s. of Eq. 21 and thus a more pessimistic certificate by substituting 1 − q for q′ (deriving such a ”binary certificate” from a ”multiclass certificate” is common in randomized smoothing and was already discussed in (Cohen et al., 2019)):√ (x− x′)Σ−1(x− x′) < 1 2 ( Φ−1(q)− Φ−1(1− q) ) , (22) In our case, Σ is a diagonal matrix diag (σ)2 with σ ∈ RDin+ . Thus Eq. 22 is equivalent to√√√√Din∑ d=1 (xd − x′d) 1 σ2d (xd − x′d) < 1 2 ( Φ−1(q)− Φ−1(1− q) ) . (23) Finally, using the fact that Φ−1(q)−Φ−1(1− q) = 2Φ−1(q) and eliminating the square root shows that we are certifiably robust if Din∑ d=1 1 σ2d · |xd − x′d|2 < ( Φ−1(q) )2 . (24) B.1.1 UNIFORM SMOOTHING FOR l1 PERTURBATIONS OF CONTINUOUS DATA An alternative base certificate for l1 perturbations is again due to Eiras et al. (2021). Using uniform instead of Gaussian noise later allows us to collective certify robustness to l1-norm-bound perturbations. In the following U(x,λ) with x ∈ RD, λ ∈ RD+ refers to a vector-valued random distribution in which the d-th element is uniformly distributed in [xd − λd, xd + λd]. Proposition 2. Given an output gn : RDin → Y, let f(x) = argmaxy∈Y Prz∼U(x,λ) [g(z) = y] be the corresponding smoothed classifier with λ ∈ RDin+ . Given an input x ∈ RDin and smoothed prediction y = f(x), let p = Prz∼U(x,λ) [g(z) = y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = 1/λd, η = Φ−1(q) and κ = 1. Proof. Based on the definition of H(n), we need to prove that ∀x′ ∈ H : fn(x′) = yn with H = { x′ ∈ RDin | Din∑ d=1 1 λd · |xd − x′d| < Φ−1(q) } , (25) Eiras et al. 
(2021) have shown that under the same conditions as above, a prediction yn is certifiably robust to a perturbed input x′ if Din∑ d=1 | 1 λd · (xd − x′d) | < 1 2 ( Φ−1(q)− Φ−1(1− q) ) , (26) where q′ = maxy′n 6=yn Prz∼U(x,λ) [gn(z) = y ′ n] is the probability of the second most likely prediction under the smoothing distribution. As in our previous proof for Gaussian smoothing, we can obtain a more pessimistic certificate by substituting 1−q for q′. Since Φ−1(q)−Φ−1(1−q) = 2Φ−1(q) and all λd are non-negative, we know that our prediction is certifiably robust if Din∑ d=1 1 λd · |xd − x′d| < Φ−1(p). (27) B.2 VARIANCE SMOOTHING We propose variance smoothing as a base certificate for binary data. Variance smoothing certifies predictions based on the mean and variance of the softmax score associated with a predicted label. It is in principle applicable to arbitrary data types. We focus on discrete data, but all results can be generalized from discrete to continuous data by replacing any sum over probability mass functions with integrals over probability density functions. We first derive a general form of variance smoothing before discussing our certificates for binary data in Section B.2.1 and Section B.2.2. Variance smoothing assumes that we make predictions by randomly smoothing a base model’s softmax scores. That is, given base model g : X→ ∆|Y| mapping from an arbitrary discrete input space X to scores from the |Y|-dimensional probability simplex ∆|Y|, we define the smoothed classifier f(x) = argmaxy∈YEz∼Ψ(x) [g(z)y]. Here, Ψ(x) is an arbitrary distribution over X parameterized by x, e.g a Normal distribution with mean x. The smoothed classifier does not return the most likely prediction, but the prediction associated with the highest expected softmax score. Given an input x ∈ X, smoothed prediction y = f(x) and a perturbed input x′ ∈ X, we want to determine whether f(x′) = y. By definition of our smoothed classifier, we know that f(x′) = y if y is the label with the highest expected softmax score. In particular, we know that f(x′) = y if y’s softmax score is larger than all other softmax scores combined, i.e. Ez∼Ψ(x′) [g(z)y] > 0.5 =⇒ f(x′) = y. (28) Computing Ez∼Ψ(x′) [g(z)y] exactly is usually not tractable – especially if we later want to evaluate robustness to many x′ from a whole perturbation model B ⊆ X. Therefore, we compute a lower bound on Ez∼Ψ(x′) [g(z)y]. If even this lower bound is larger than 0.5, we know that prediction y is certainly robust. For this, we define a set of functions H with gy ∈ H and compute the minimum softmax score across all functions from H: min h∈H Ez∼Ψ(x′) [h(z)] > 0.5 =⇒ f(x′) = y. (29) For our variance smoothing approach, we define H to be the set of all functions that have a larger or equal expected value and a smaller or equal variance, compared to our base model g applied to unperturbed input x. Let µ = Ez∼Ψ(x) [g(z)y] be the expected softmax score of our base model g for label y. Let σ2 = Ez∼Ψ(x) [ (g(z)y − ν)2 ] be the expected squared distance of the softmax score from a scalar ν ∈ R. (Choosing ν = µ yields the variance of the softmax score. An arbitrary ν is only needed for technical reasons related to Monte Carlo estimation Section C.2). Then, we define H = { h : X→ R ∣∣∣ Ez∼Ψ(x) [h(z)] ≥ µ ∧ Ez∼Ψ(x) [(h(z)− ν)2] ≤ σ2} (30) Clearly, by the definition of µ and σ2, we have gy ∈ H. Note that we do not restrict functions from H to the domain [0, 1], but allow arbitrary real-valued outputs. By evaluating Eq. 28 with H defined as in Eq. 
29, we can determine if our prediciton is robust. To compute the optimal value , we need the following two Lemmata: Lemma 1. Given a discrete set X and the set Π of all probability mass functions over X, any two probability mass functions π1, π2 ∈ Π fulfill∑ z∈X π2(z) π1(z) π2(z) ≥ 1. (31) Proof. For a fixed probability mass function π1, Eq. 31 is lower-bounded by the minimal expected likelihood ratio that can be achieved by another π̃(z) ∈ Π:∑ z∈X π2(z) π1(z) π2(z) ≥ min π̃∈Π ∑ z∈X π̃(z) π1(z) π̃(z). (32) The r.h.s. term can be expressed as the constrained optimization problem min π̃ ∑ z∈X π̃(z) π1(z) π̃(z) s.t. ∑ z∈X π̃(z) = 1 (33) with the corresponding dual problem max λ∈R min π̃ ∑ z∈X π̃(z) π1(z) π̃(z) + λ ( −1 + ∑ z∈X π̃(z) ) . (34) The inner problem is convex in each π̃(z). Taking the gradient w.r.t. to π̃(z) for all z ∈ X shows that it has its minimum at ∀z ∈ X : π̃(z) = −λπ1(z)2 . Substituting into Eq. 34 results in max λ∈R ∑ z∈X λ2π1(z) 2 4π1(z) + λ ( −1− ∑ z∈X λπ1(z) 2 ) (35) = max λ∈R −λ2 ∑ z∈X π1(z) 4 − λ (36) = max λ∈R −λ 2 4 − λ (37) = 1. (38) Eq. 37 follows from the fact that π1(z) is a valid probability mass function. Due to duality, the optimal dual value 1 is a lower bound on the optimal value of our primal problem Eq. 31. Lemma 2. Given a probability distribution D over a R and a scalar ν ∈ R, let µ = Ez∼D and ξ = Ez∼D [ (z − ν)2 ] . Then ξ ≥ (µ− ν)2 Proof. Using the definitions of µ and ξ, as well as some simple algebra, we can show: ξ ≥ (µ− ν)2 (39) ⇐⇒ Ez∼D [ (z − ν)2 ] ≥ µ2 − 2µν + ν2 (40) ⇐⇒ Ez∼D [ z2 − 2zν + ν2 ] ≥ µ2 − 2µν + ν2 (41) ⇐⇒ Ez∼D [ z2 − 2zν + ν2 ] ≥ µ2 − 2µν + ν2 (42) ⇐⇒ Ez∼D [ z2 ] − 2µν + ν2 ≥ µ2 − 2µν + ν2 (43) ⇐⇒ Ez∼D [ z2 ] ≥ µ2 (44) It is well known for the variance that Ez∼D [ (z − µ)2 ] = Ez∼D [ z2 ] − µ2. Because the variance is always non-negative, the above inequality holds. Using the previously described approach and lemmata, we can show the soundness of the following robustness certificate: Theorem 3. Given a model g : X → ∆|Y| mapping from discrete set X to scores from the |Y|-dimensional probability simplex, let f(x) = argmaxy∈YEz∼Ψ(x) [g(z)y] be the corresponding smoothed classifier with smoothing distribution Ψ(x) and probability mass function πx(z) = Prz̃∼Ψ(x) [z̃ = z]. Given an input x ∈ X and smoothed prediction y = f(x), let µ = Ez∼Ψ(x) [g(z)y] and σ2 = Ez∼Ψ(x) [ (g(z)y − ν)2 ] with ν ∈ R. If ν ≤ µ, we know that f(x′) = y if ∑ z∈X πx′(z) 2 πx(z) < 1 + 1 σ2 − (µ− ν)2 ( µ− 1 2 ) . (45) Proof. Following our discussion above, we know that f(x′) = y if Ez∼Ψ(x′) [g(z)y] > 0.5 with H defined as in Section 5. We can compute a (tight) lower bound on minh∈H Ez∼Ψ(x′) by following the functional optimization approach for randomized smoothing proposed by Zhang et al. (2020). That is, we solve a dual problem in which we optimize the value h(z) for each z ∈ X. By the definition of the set H, our optimization problem is min h:X→R Ez∼Ψ(x′) [h(z)] s.t. Ez∼Ψ(x) [h(z)] ≥ µ, Ez∼Ψ(x) [ (h(z)− ν)2 ] ≤ σ2. The corresponding dual problem with dual variables α, β ≥ 0 is max α,β≥0 min h:X→R Ez∼Ψ(x′) [h(z)] +α ( µ− Ez∼Ψ(x) [h(z)] ) + β ( Ez∼Ψ(x) [ (h(z)− ν)2 ] − σ2 ) . 
(46) We first move move all terms that don’t involve h out of the inner optimization problem: = max α,β≥0 αµ−βσ2 + min h:X→R Ez∼Ψ(x′) [h(z)]−αEz∼Ψ(x) [h(z)]+βEz∼Ψ(x) [ (h(z)− ν)2 ] (47) Writing out the expectation terms and combining them into one sum (or – in the case of continuous X – one integral), our dual problem becomes = max α,β≥0 αµ− βσ2 + min h:X→R ∑ z∈X h(z)πx′(z)− αh(z)πx(z) + β (h(z)− ν)2 πx(z) (48) (recall that πx′ and πx′ refer to the probability mass functions of the smoothing distributions). The inner optimization problem can be solved by finding the optimal h(z) in each point z: = max α,β≥0 αµ− βσ2 + ∑ z∈X min h(z)∈R h(z)πx′(z)− αh(z)πx(z) + β (h(z)− ν)2 πx(z) (49) Because β ≥ 0, each inner optimization problem is convex in h(z). We can thus find the optimal h∗(z) by setting the derivative to zero: d dh(z) h(z)πx′(z)− αh(z)πx(z) + β (h(z)− ν)2 πx(z) ! = 0 (50) ⇐⇒ πx′(z)− απx(z) + 2β (h(z)− ν)πx(z) ! = 0 (51) =⇒ h∗(z) = − πx ′(z) 2βπx(z) + α 2β + ν. (52) Substituting into Eq. 48 and simplifying leaves us with the dual problem max α,β≥0 αµ− βσ2 − α 2 4β + α 2β − αν + ν − 1 4β ∑ z∈X πx′(z) 2 πx(z) (53) In the following, let us use ρ = ∑ z∈X πx′ (z) 2 πx(z) as a shorthand for the expected likelihood ratio. The problem is concave in α. We can thus find the optimum α∗ by setting the derivative to zero, which gives us α∗ = 2β(µ− ν) + 1. Because β ≥ 0 and ou theorem assumes that ν ≤ µ, α∗ is a feasible solution to the dual problem. Substituting into Eq. 53 and simplifying results in max β≥0 α∗µ− βσ2 − α ∗2 4β + α∗ 2β − α∗ν + ν − 1 4β ρ (54) = max β≥0 β ( (µ− ν)2 − σ2 ) + µ+ 1 4β (1− ρ) . (55) Lemma 1 shows that the expected likelihood ratio ρ is always greater than or equal to 1. Lemma 2 shows that (µ− ν)2 − σ2 ≤ 0. Therefore Eq. 55 is concave in β. The optimal value of β can again be found by setting the derivative to zero: β∗ = √ 1− ρ 4 ((µ− ν)2 − σ2) . (56) Recall that our theorem assumes σ2 ≥ (µ− ν)2 and thus β∗ is real valued. Substituting into Eq. 56 shows that the maximum of our dual problem is µ+ √ (1− p) ((µ− ν)2 − σ2). (57) By duality, this is a lower bound on our primal problem minh∈H Ez∼Ψ(x′) [h(z)]. We know that our prediction is certifiably robust, i.e. f(x) = y, if minh∈H Ez∼Ψ(x′) [h(z)] > 0.5. So, in particular, our prediction is robust if µ+ √ (1− ρ) ((µ− ν)2 − σ2) > 0.5 (58) ⇐⇒ ρ < 1 + 1 σ2 − (µ− ν)2 ( µ− 1 2 )2 (59) ⇐⇒ ∑ z∈X πx′(z) 2 πx(z) < 1 + 1 σ2 − (µ− ν)2 ( µ− 1 2 )2 (60) The last equivalence is the result of inserting the definition of the expected likelihood ratio ρ. With Theorem 3 in place, we can certify robustness for arbitrary smoothing distributions, assuming we can compute the expected likelihood ratio. When we are working with discrete data and the smoothing distributions factorize (but are not necessarily i.i.d.), this can be done efficiently, as the two following base certificates for binary data demonstrate. B.2.1 BERNOULLI VARIANCE SMOOTHING FOR PERTURBATIONS OF BINARY DATA We begin by proving the base certificate presented in Section 5. Recall that we we use a smoothing distribution F(x,θ) with θ ∈ [0, 1]Din that independently flips the d’th bit with probability θd, i.e. for x, z ∈ {0, 1}Din and z ∼ F(x,θ) we have Pr[zd 6= xd] = θd. Theorem 1. Given an output gn : {0, 1}Din → ∆|Y| mapping to scores from the |Y|-dimensional probability simplex, let fn(x) = argmaxy∈YEz∼F(x,θ) [gn(z)y] be the corresponding smoothed classifier with θ ∈ [0, 1]Din . 
Given an input x ∈ {0, 1}Din and smoothed prediction yn = fn(x), let µ = Ez∼F(x,θ) [gn(z)y] and σ2 = Varz∼F(x,θ) [gn(z)y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = ln ( (1−θd)2 θd + (θd) 2 1−θd ) , η = ln ( 1 + 1σ2 ( µ− 12 )2) and κ = 0. Proof. Based on our definition of the base certificate interface from Section 5, we must show that ∀x′ ∈ H : fn(x′) = yn with H = { x′ ∈ {0, 1}Din ∣∣∣∣∣ Din∑ d=1 ln ( (1− θd)2 θd + (θd) 2 1− θd ) · |x′d − xd|0 < ln ( 1 + 1 σ2 ( µ− 1 2 )2)} , (61) Because all bits are flipped independently, our probability mass function πx(z) = Prz̃∼Ψ(x) [z̃ = z] factorizes: πx(z) = Din∏ d=1 πxd(zd) (62) with πxd(zd) = { θd if zd 6= xd 1− θd else . (63) Thus, our expected likelihood ratio can be written as ∑ z∈{0,1}Din πx′(z) 2 πx(z) = ∑ z∈{0,1}Din Din∏ d=1 πx′d(zd) 2 πxd(zd) = Din∏ d=1 ∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) . (64) For each dimension d, we can distinguish two cases: If both the perturbed and unperturbed input are the same in dimension d, i.e. x′d = xd, then πx′ d (z) πxd (z) = 1 and thus ∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) = ∑ zd∈{0,1} πx′d(zd) = θd + (1− θd) = 1. (65) If the perturbed and unperturbed input differ in dimension d, then∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) = (1− θd)2 θd + (θd) 2 1− θd . (66) Therefore, the expected likelihood ratio is Din∏ d=1 ∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) = Din∏ d=1 ( (1− θd)2 θd + (θd) 2 1− θd )|x′d−xd| . (67) Due to Theorem 3 (and using ν = µ when computing the variance), we know that our prediction is robust, i.e. fn(x′) = yn, if ∑ z∈{0,1}Din πx′(z) 2 πx(z) < 1 + 1 σ2 ( µ− 1 2 )2 (68) ⇐⇒ Din∏ d=1 ( (1− θd)2 θd + (θd) 2 1− θd )|x′d−xd| < 1 + 1 σ2 ( µ− 1 2 )2 (69) ⇐⇒ Din∑ d=1 ln ( (1− θd)2 θd + (θd) 2 1− θd ) |x′d − xd| < ln ( 1 + 1 σ2 ( µ− 1 2 )2) . (70) Because xd and x′d are binary, the last inequality is equivalent to Din∑ d=1 ln ( (1− θd)2 θd + (θd) 2 1− θd ) |x′d − xd|0 < ln ( 1 + 1 σ2 ( µ− 1 2 )2) . (71) B.2.2 SPARSITY-AWARE VARIANCE SMOOTHING FOR PERTURBATIONS OF BINARY DATA Sparsity-aware randomized smoothing (Bojchevski et al., 2020) is an alternative smoothing approach for binary data. It uses
1. What are the main contributions of the paper regarding multi-output classifiers?
2. What are the strengths of the proposed approach, particularly in its adaptability and novel techniques?
3. What are the weaknesses of the paper, especially regarding the training process and the lack of alignment between training and testing strategies?
4. How does the reviewer assess the effectiveness of the proposed method based on the provided empirical results?
Summary Of The Paper Review
Summary Of The Paper
The paper makes three contributions in particular:
1. A local version of randomized smoothing for multi-output classifiers. The authors suggest using a customized smoothing distribution for certifying each output of the multi-output classifier. The custom distributions allow them to produce tighter guarantees for each output.
2. A new analysis method, variance smoothing, for discrete data, which uses the average softmax value instead of the majority vote as the prediction rule. The authors use first- and second-order statistics (mean and variance) to provide robustness guarantees in this method.
3. A collective certification strategy for multi-output classifiers using a common interface (ℓp-norm ellipsoids) for the base certificates of every output. The authors describe a common way of stating the base certified regions for every output. The multi-output certification problem can then be expressed as a mixed-integer linear program that finds a point inside the perturbation model lying outside the base certified regions for the maximum number of outputs.
Review
Strengths
- The collective certification strategy in the paper can be used with most of the existing literature on randomized smoothing, which allows it to be quite versatile and to adapt to new developments in the field. The MILP relaxation and the distribution-sharing ideas in the paper also highlight some critical limitations / future research directions for randomized smoothing.
- The variance smoothing idea in the paper is quite exciting and a natural next step for getting better certificates.
- The empirical results in Table 5, which compare the best achievable baseline performance with the performance of the proposed methods, provide compelling evidence for the efficacy of the suggested approach.
Weaknesses
- No training counterpart is suggested for the proposed local smoothing strategy. The current results use a model trained with σmin as the base model. It seems a bit counterintuitive to have essentially different prediction strategies during training and testing. Some of the local smoothing ideas should be reflected in training to make the objectives better aligned.
ICLR
Title Localized Randomized Smoothing for Collective Robustness Certification Abstract Models for image segmentation, node classification and many other tasks map a single input to multiple labels. By perturbing this single shared input (e.g. the image) an adversary can manipulate several predictions (e.g. misclassify several pixels). A recent collective robustness certificate provides strong guarantees on the number of predictions that are simultaneously robust. This method is however limited to strictly local models, where each prediction is associated with a small receptive field. We propose a more general collective certificate for the larger class of softly local models, where each output is dependent on the entire input but assigns different levels of importance to different input regions (e.g. based on their proximity in the image). The certificate is based on our novel localized randomized smoothing approach, where the random perturbation strength for different input regions is proportional to their importance for the outputs. The resulting locally smoothed model yields strong collective guarantees while maintaining high prediction quality on both image segmentation and node classification tasks. 1 INTRODUCTION There is a wide range of tasks that require models making multiple predictions based on a single input. For example, semantic segmentation requires assigning a label to each pixel in an image. When deploying such multi-output classifiers in practice, their robustness should be a key concern. After all – just like simple classifiers (Szegedy et al., 2014) – they can fall victim to adversarial attacks (Xie et al., 2017; Zügner & Günnemann, 2019; Belinkov & Bisk, 2018). Even without an adversary, random noise or measuring errors could cause one or multiple predictions to unexpectedly change. In the following, we derive a method that provides provable guarantees on how many predictions can be changed by an adversary. Since all outputs operate on the same input, they also have to be attacked simultaneously by choosing a single perturbed input. While attacks on a single prediction may be easy, attacks on different predictions may be mutually exclusive. We have to explicitly account for this fact to obtain a proper collective robustness certificate that provides tight bounds. There already exists a dedicated collective robustness certificate for multi-output classifiers (Schuchardt et al., 2021), but it is only benefical for models we call strictly local, where each output depends only on a small, well-defined subset of the input. One example are graph neural networks that classify each node in a graph based only on its neighborhood. Multi-output classifiers used in practice, however, are often only softly local. While – unlike strictly local models – all of their predictions are in principle dependent on the entire input, each output may assign different importance to different components. For example, deep convolutional networks used for image segmentation can have very small effective receptive fields (Luo et al., 2016; Liu et al., 2018b), i.e. primarily use a small region of the input in labeling each pixel. Many models used in node classification are based on the homophily assumption that connected nodes are mostly of the same class. Thus, they primarily use features from neighboring nodes to classify each node. Even if an architecture is not inherently softly local, a model may learn a softly local mapping through training. 
For example, a transformer (Vaswani et al., 2017) can in principle attend to any part of an input sequence. However, in practice the learned attention maps may be ”sparse”, with the prediction for each token being determined primarily by a few (not necessarily nearby) tokens (Shi et al., 2021). While an adversarial attack on a single prediction of a softly local model is conceptually no different from that on a single-output classifier, attacking multiple predictions simultaneously can be much more challenging. By definition, adversarial attacks have to be unnoticeable, meaning the adversary only has a limited budget for perturbing the input. When each output is focused on a different part of the input, the adversary has to decide on where to allocate their adversarial budget and may be unable to attack all outputs at once. Our collective robustness certificate explicitly accounts for this budget allocation problem faced by the adversary and can thus provide stronger robustness guarantees. Our certificate is based on randomized smoothing (Liu et al., 2018a; Lécuyer et al., 2019; Cohen et al., 2019). Randomized smoothing is a versatile black-box certification method that has originally been proposed for single-output classifiers. Instead of directly analysing a model, it constructs a smoothed classifier that returns the most likely prediction of the model under random perturbations of its input. One can then use statistical methods to certify the robustness of this smoothed classifier. We discuss more details in Section 2. Randomized smoothing is typically used with i.i.d. noise: Each part of the input (e.g. each pixel) independently undergoes random perturbations sampled from the same noise distribution. One can however also use non-i.i.d. noise (Eiras et al., 2021). This results in a smoothed classifier that is certifiably more robust to parts of the input that are smoothed with higher noise levels (e.g. larger standard deviation). We apply randomized smoothing to softly-local multi-output classifiers in a scheme we call localized randomized smoothing: Instead of using the same smoothing distribution for all outputs, we randomly smooth each output (or set of outputs) using a different non-i.i.d. distribution that matches its inherent soft locality. Using a low noise level for the most relevant parts of the input allows us to retain a high prediction quality (e.g. accuracy). Less relevant parts of the input can be smoothed with a higher noise level. The resulting certificates (one per output) explicitly quantify how robust each prediction is to perturbations of which section of the input – they are certificates of soft locality. After certifying each prediction independently using localized randomized smoothing, we construct a (mixed-integer) linear program that combines these per-prediction base certificates into a collective certificate that provably bounds the number of simultaneously attackable predictions. This linear program explicitly accounts for soft locality and the budget allocation problem it causes for the adversary. This allows us to prove much stronger guarantees of collective robustness than simply certifying each prediction independently. Our core contributions are: • Localized randomized smoothing, a novel smoothing scheme for multi-output classifiers. • A variance smoothing method for efficiently certifying smoothed models on discrete data. • A collective certificate that leverages our identified common interface for base certificates. 
2 BACKGROUND AND RELATED WORK Randomized smoothing. Randomized smoothing is a flexible certification technique that can be used for various data types, perturbation models and tasks. For simplicity, we focus on a classification certificate for l2 perturbations (Cohen et al., 2019). Assume we have a continuous D-dimensional input space RD, a label set Y and a classifier g : RD → Y. We can use isotropic Gaussian noise with standard deviation σ ∈ R+ to construct the smoothed classifier f = argmaxy∈Y Prz∼N (x,σ) [g(z) = y] that returns the most likely prediction of base classifier g under the input distribution 1. Given an input x ∈ RD and the smoothed prediction y = f(x), we want to determine whether the prediction is robust to all l2 perturbations of magnitude , i.e. whether ∀x′ : ||x′−x||2 ≤ : f(x′) = y. Let q = Prz∼N (x,σ) [g(x) = y] be the probability of g predicting label y. The prediction of our smoothed classifier is robust if < σΦ−1(q) (Cohen et al., 2019). This result showcases a trade-off we alluded to in the previous section: The certificate can become stronger if the noise-level (here σ) is increased. But doing so could also lower the accuracy of the smoothed classifier or reduce q and thus weaken the certificate. White-box certificates for multi-output classifiers. There are multiple recent methods for certifying the robustness of specific multi-output models (see, for example, (Tran et al., 2021; Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2020; Ko et al., 2019; Ryou et al., 2021; Shi et al., 2020; Bonaert et al., 2021)) by analyzing their specific architecture and weights. They are however not designed to certify collective robustness. They can only determine independently for each prediction whether or not it can be adversarially attacked. Collective robustness certificates. Most directly related to our work is the certificate of Schuchardt et al. (2021). Like ours, it combines many per-prediction certificates into a collective certificate. But, unlike our novel localized smoothing approach, their certification procedure is only beneficial for strictly local models, i.e. models whose outputs operate on small subsets of the input. Furthermore, their certificate assumes binary data, while our certificate defines a common interface for various data types and perturbation models. A more detailed comparison can be found in Section D. Recently, Fischer et al. (2021) proposed a certificate for semantic segmentation. They consider a different notion of collective robustness: They are interested in determining whether all predictions are robust. In Section C.4 we discuss their method in detail and show that, when used for certifying our notion of collective robustness (i.e. the number of robust predictions), their method is no better than certifying each output independently using the certificate of Cohen et al. (2019). Furthermore, our certificate can be used to provide equally strong guarantees for their notion of collective robustness by checking whether the number of certified predictions equals the overall number of predictions. Another method that can be used for certifying collective robustness is center smoothing (Kumar & Goldstein, 2021). Center smoothing bounds how much a vector-valued prediction changes w.r.t to a distance function under adversarial perturbations. With the l0 pseudo-norm as the distance function, center smoothing bounds how many predictions of a classifier can be simultaneously changed. Randomized smoothing with non-i.i.d. 
noise. While not designed for certifying collective robustness, two recent certificates for non-i.i.d. Gaussian (Fischer et al., 2020) and uniform smoothing (Eiras et al., 2021) can be used as a component of our collective certification approach: They can serve as per-prediction base certificates, which can then be combined into our stronger collective certificate (more details in Section 4) . Note that we do not use the procedure for optimizing the smoothing distribution proposed by Eiras et al. (2021), as this would enable adversarial attacks on the smoothing distribution itself and invalidate the certificate (see discussion by Wang et al. (2021)). 3 COLLECTIVE THREAT MODEL Before certifying robustness, we have to define a threat model, which specifies the type of model that is attacked, the objective of the adversary and which perturbations they are allowed to use. We assume that we have a multi-output classifier f : XDin → YDout , that maps from a Din-dimensional vector space to Dout labels from label set Y. We further assume that this classifier f is the result of randomly smoothing a base classifier g, as discussed in Section 2. To simplify our notation, we write fn to refer to the function x 7→ f(x)n that outputs the n-th label. Given this multi-output classifier f , an input x ∈ XDin and the resulting vector of predictions y = f(x), the objective of the adversary is to cause as many predictions from a set of targeted indices T ⊆ {1, . . . , Dout} to change. That is, their objective is minx′∈Bx ∑ n∈T I [fn(x ′) = yn], where Bx ⊆ XDin is the perturbation model. Importantly, note that the minimization operator is outside the sum, meaning the predictions have to 1In practice, all probabilities have to be estimated using Monte Carlo sampling (see discussion in Section C). be attacked using a single input. As is common in robustness certification, we assume a norm-bound perturbation model. That is, given an input x ∈ XDin , the adversary is only allowed to use perturbed inputs from the set Bx = { x′ ∈ XDin | ||x′ − x||p ≤ } with p, ≥ 0. 4 A RECIPE FOR COLLECTIVE CERTIFICATES Before discussing technical details, we provide a high-level overview of our method. In localized randomized smoothing, we assign each output gn of a base classifier g its own smoothing distribution Ψ(n) that matches our assumptions or knowledge about the base classifier’s soft locality, i.e. for each n ∈ {1, . . . , Dout} choose a Ψ(n) that induces more noise in input components that are less relevant for gn. For example, in Fig. 1, we assume that far-away regions of the image are less relevant and thus perturb pixels in the bottom left with more noise when classifying pixels in the top-right corner. The chosen smoothing distributions can then be used to construct the smoothed classifier f . Given an input x ∈ XDin and the corresponding smoothed prediction y = f(x), randomized smoothing makes it possible to compute per-prediction base certificates. That is, for each yn, one can compute a set H(n) ⊆ XDin of perturbed inputs that the prediction is robust to, i.e. ∀x′ ∈ Hn : fn(x ′) = yn. Our motivation for using non-i.i.d. distributions is that the H(n) will guarantee more robustness for input dimensions smoothed with more noise, i.e. quantify model locality. The objective of our adversary is minx′∈Bx ∑ n∈T I [fn(x ′) = yn] with collective perturbation model Bx ⊆ XDin . That is, they want to change as many predictions from the targeted set T as possible. 
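To make the recipe concrete, the following is a small sketch (our own illustration, not the authors' code) of how grid-based smoothing standard deviations in the spirit of Fig. 1 could be constructed for an image: pixels far from the output cell being classified receive noise closer to `sigma_max`, nearby pixels noise closer to `sigma_min`. The grid shape, the default values and the linear interpolation rule are assumptions for illustration; Section E.2.1 of the paper specifies the exact scheme used in the experiments.

import numpy as np

def grid_sigma_map(height, width, out_cell, grid_shape=(4, 6),
                   sigma_min=0.25, sigma_max=1.5):
    """Per-pixel Gaussian noise levels for classifying pixels in one output grid cell.

    Every input grid cell (k, l) is smoothed with a standard deviation that grows
    linearly with its distance to the output cell (i, j), interpolated between
    sigma_min and sigma_max.
    """
    rows, cols = grid_shape
    i, j = out_cell
    # distance from the output cell to every input cell, normalized to [0, 1]
    ks, ls = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    dist = np.sqrt((ks - i) ** 2 + (ls - j) ** 2)
    dist = dist / dist.max() if dist.max() > 0 else dist
    cell_sigma = sigma_min + (sigma_max - sigma_min) * dist

    # broadcast the per-cell sigmas to a full-resolution per-pixel map
    row_idx = np.minimum(np.arange(height) * rows // height, rows - 1)
    col_idx = np.minimum(np.arange(width) * cols // width, cols - 1)
    return cell_sigma[np.ix_(row_idx, col_idx)]

Sampling x + sigma_map * standard normal noise then yields one draw from the smoothing distribution Ψ(n) associated with the outputs of that cell.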
A trivial lower bound can be obtained by counting how many predictions are – according to the base certificates – provably robust to the collective threat model. This can be expressed as∑ n∈T minx′∈Bx I [ x′ ∈ H(n) ] . In the following, we refer to this as the naı̈ve collective certificate. Thanks to our proposed localized smoothing scheme, we can use the following, tighter bound: min x′∈Bx ∑ n∈T I [fn(x ′) = yn] ≥ min x′∈Bx ∑ n∈T I [ x′ ∈ H(n) ] , (1) which preserves the fact that the adversary has to choose a single perturbed input. Because we use different non-i.i.d. smoothing distributions for different outputs, we provably know that each fn has varying levels of robustness for different parts of the input and that these robustness levels differ among outputs. Thus, in the r.h.s. problem the adversary has to allocate their limited budget across various input dimensions and may be unable to attack all predictions at once, just like when attacking the classifier in the l.h.s. objective (recall Section 1). This makes our collective certificate stronger than the naı̈ve collective certificate, which allows each prediction to be attacked independently. As stated in Section 1, the idea of combining base certificates into stronger collective certificates has already been explored by Schuchardt et al. (2021). But instead of using localized smoothing to capture the (soft) locality of a model, their approach leverages the fact that perturbations outside an output’s receptive field can be ignored. For softly local models, which have receptive fields covering the entire input, their certificate is no better than the naı̈ve certificate. Another novel insight underlying our approach is that various non-i.i.d. randomized smoothing certificates share a common interface, which makes our method applicable to diverse data types and perturbation models. In the next section, we formalize this common interface. We then discuss how it allows us to compute the collective certificate from Eq. 1 using (mixed-integer) linear programming. 5 COMMON INTERFACE FOR BASE CERTIFICATES A base certificate for a prediction yn = fn(x) is a set Hn ⊆ XDin of perturbed inputs that yn is provably robust to, i.e ∀x′ ∈ Hn : fn(x′) = yn. Note that base certificates do not have to be exact, but have to be sound, i.e. they do not have to specify all inputs to which the fn are robust but they must not contain any adversarial examples. As a common interface for base certificates, we propose that the sets Hn are parameterized by a weight vector w(n) ∈ RDin and a scalar η(n) that define a linear constraint on the element-wise distance between perturbed inputs and the clean input: H(n) = { x′ ∈ XDin ∣∣∣∣∣ Din∑ d=1 w (n) d · |x ′ d − xd|κ < η(n) } . (2) The weight vector encodes how robust yn is to perturbations of different components of the input. The scalar κ is important for collective robustness certification, because it encodes which collective perturbation model the base certificate is compatible with. For example, κ = 2 means that the base certificate can be used for certifying collective robustness to l2 perturbations. In the following, we present two base certificates implementing our interface: One for l2 perturbations of continuous data and one for perturbations of binary data. In Section B, we further present a certificate for binary data that can distinguish between adding and deleting bits and a certificate for l1 perturbations of continuous data. 
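The interface in Eq. 2 is simple enough to state directly in code. The sketch below (illustrative only, not part of the paper) represents one base certificate by its weight vector, threshold and exponent, and checks whether a given perturbed input still lies inside the certified region H(n).

from dataclasses import dataclass
import numpy as np

@dataclass
class BaseCertificate:
    """Certified region H^(n) from Eq. 2: sum_d w_d * |x'_d - x_d|**kappa < eta."""
    w: np.ndarray      # per-dimension weights, shape (D_in,)
    eta: float         # certificate parameter
    kappa: float       # exponent; also fixes the compatible collective l_p norm

    def contains(self, x_clean: np.ndarray, x_pert: np.ndarray) -> bool:
        """True iff the prediction is certifiably unchanged at x_pert."""
        if self.kappa == 0:
            # binary / l_0 case: count changed entries (convention 0**0 = 0 here)
            dist = (x_pert != x_clean).astype(float)
        else:
            dist = np.abs(x_pert - x_clean) ** self.kappa
        return float(self.w @ dist) < self.eta

Instances of this interface for the two smoothing schemes used in the paper are given by Proposition 1 and Theorem 1 below.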
All base certificates guarantee more robustness for parts of the input smoothed with a higher noise level. The certificates for continuous data are based on known results (Fischer et al., 2020; Eiras et al., 2021) and merely reformulated to match our proposed interface, so that they can be used as part of our collective certification procedure. The certificates for discrete data however are original and based on the novel concept of variance smoothing. Gaussian smoothing for l2 perturbations of continuous data The first base certificate is a generalization of Gaussian smoothing to anisotropic noise, a corollary of Theorem A.1 from (Fischer et al., 2020). In the following, diag(z) refers to a diagonal matrix with diagonal entries z and Φ−1 : [0, 1]→ R refers to the the standard normal inverse cumulative distribution function. Proposition 1. Given an output gn : RDin → Y, let fn(x) = argmaxy∈Y Prz∼N (x,Σ) [gn(z) = y] be the corresponding smoothed output with Σ = diag (σ)2 andσ ∈ RDin+ . Given an inputx ∈ RDin and smoothed prediction yn = fn(x), let q = Prz∼N (x,Σ) [gn(z) = yn]. Then, ∀x′ ∈ H(n) : fn(x ′) = yn with H(n) defined as in Eq. 2, wd = 1σd2 , η = ( Φ(−1)(q) )2 and κ = 2. Bernoulli variance smoothing for perturbations of binary data For binary data, we use a smoothing distribution F(x,θ) with θ ∈ [0, 1]Din that independently flips the d’th bit with probability θd, i.e. for x, z ∈ {0, 1}Din and z ∼ F(x,θ) we have Pr[zd 6= xd] = θd. A corresponding certificate could be derived by generalizing (Lee et al., 2019), which considers a single shared θ ∈ [0, 1] with ∀d : θd = θ. However, the cost for computing this certificate would be exponential in the number of unique values in θ. We therefore propose a more efficient alternative. Instead of constructing a smoothed classifier that returns the most likely labels of the base classifier (as discussed in Section 2), we construct a smoothed classifier that returns the labels with the highest expected softmax scores (similar to CDF-smoothing (Kumar et al., 2020)). For this smoothed model, we can compute a robustness certificate in constant time. The certificate requires determining both the expected value and variance of softmax scores. We therefore call this method variance smoothing. While we use it for binary data, it is a general-purpose technique that can be applied to arbitrary domains and smoothing distributions (see discussion in Section B.2). In the following, we assume the label set Y to consist of numerical labels {1, . . . , |Y|}, which simplifies our notation. Theorem 1. Given an output gn : {0, 1}Din → ∆|Y| mapping to scores from the |Y|-dimensional probability simplex, let fn(x) = argmaxy∈YEz∼F(x,θ) [gn(z)y] be the corresponding smoothed classifier with θ ∈ [0, 1]Din . Given an input x ∈ {0, 1}Din and smoothed prediction yn = fn(x), let µ = Ez∼F(x,θ) [gn(z)y] and σ2 = Varz∼F(x,θ) [gn(z)y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = ln ( (1−θd)2 θd + (θd) 2 1−θd ) , η = ln ( 1 + 1σ2 ( µ− 12 )2) and κ = 0. 6 COMPUTING THE COLLECTIVE ROBUSTNESS CERTIFICATE With our common interface for base certificates in place, we can discuss how to compute the collective robustness certificate minx′∈Bx ∑ n∈T I [ x′ ∈ H(n) ] from Eq. 1. The result bounds the number of predictions yn with n ∈ {1, . . . , Dout} that can be simultaneously attacked by the adversary. 
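Before assembling the collective certificate, the two base certificates above can be summarized as small helper functions (a sketch following the stated formulas, not the reference implementation). Here `sigma`, `q`, `theta`, `mu` and `var` denote the quantities defined in Proposition 1 and Theorem 1; in practice they are Monte Carlo estimates with appropriate confidence correction (see the paper's Section C).

import numpy as np
from scipy.stats import norm

def gaussian_base_certificate(sigma, q):
    """Proposition 1: anisotropic Gaussian smoothing, l_2 perturbations (kappa = 2).

    Meaningful only for q > 0.5, i.e. the smoothed prediction's probability
    exceeds one half.
    """
    sigma = np.asarray(sigma, dtype=float)
    w = 1.0 / sigma ** 2                  # w_d = 1 / sigma_d^2
    eta = norm.ppf(q) ** 2                # eta = Phi^{-1}(q)^2
    return w, eta, 2

def bernoulli_variance_certificate(theta, mu, var):
    """Theorem 1: Bernoulli flipping noise with variance smoothing, binary data (kappa = 0).

    Assumes flip probabilities theta_d strictly inside (0, 1) and mu > 0.5.
    """
    theta = np.asarray(theta, dtype=float)
    w = np.log((1 - theta) ** 2 / theta + theta ** 2 / (1 - theta))
    eta = np.log(1 + (mu - 0.5) ** 2 / var)
    return w, eta, 0

These (w, η, κ) triples are exactly what the linear program of the following section consumes.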
In the following, we assume that the base certificates were obtained by using a smoothing distribution that is compatible with our lp collective perturbation model (i.e. κ = p), for example by using Gaussian noise for p = 2 or Bernoulli noise for p = 0. Inserting the definition of our base certificate interface from Eq. 2 and rewriting our perturbation model Bx = { x′ ∈ XDin | ||x′ − x||p ≤ } as{ x′ ∈ XDin | ∑Din d=1 |x′d − xd|p ≤ p } , our objective from Eq. 1 can be expressed as min x′∈XDin ∑ n∈T I [ Din∑ d=1 w (n) d · |x ′ d − xd|p < η(n) ] s.t. Din∑ d=1 |x′d − xd|p ≤ p. (3) We can see that the perturbed inputx′ only affects the element-wise distances |x′d−xd|p. Rather than optimizing x′, we can instead directly optimize these distances, i.e. determine how much adversarial budget is allocated to each input dimension. For this, we define a vector of variables b ∈ RDin+ (or b ∈ {0, 1}Din for binary data). Replacing sums with inner products, we can restate Eq. 3 as min b∈RDin+ ∑ n∈T I [ bTw(n) < η(n) ] s.t. sum{b} ≤ p. (4) In a final step, we replace the indicator functions in Eq. 4 with a vector of boolean variables t ∈ {0, 1}Dout . Define the constants η(n) = p ·min ( 0,mind w (n) d ) . Then, min b∈RDin+ ,t∈{0,1}Dout ∑ n∈T tn s.t. ∀n : bTw(n) ≥ tnη(n) + (1− tn)η(n), sum{b} ≤ p. (5) is equivalent to Eq. 4. The first constraint guarantees that tn can only be set to 0 if the l.h.s. is greater or equal η(n), i.e. only when the base certificate can no longer guarantee robustness. The term involving η(n) ensures that for tn = 1 the problem is always feasible2. Eq. 5 can be solved using any mixed-integer linear programming solver. While the resulting MILP bears some semblance to that of Schuchardt et al. (2021), it is conceptually different. When evaluating their base certificates, they mask out parts of the budget vector b based on a model’s strict locality, while we weigh the budget vector based on the soft locality guaranteed by the base certificates. In addition, thanks to the interface specified in Section 5, our problem only involves a single linear constraint per prediction, making it much smaller and more efficient to solve. Interestingly, when using randomized smoothing base certificates for binary data, our certificate subsumes theirs, i.e. can provide the same robustness guarantees (see Section D.2). Improving efficiency. Still, the efficiency of our certificate in Eq. 5. certificate can be further improved. In Section A, we show that partitioning the outputs into Nout subsets sharing the same smoothing distribution and the the inputs into Nin subsets sharing the same noise level (for example like in Fig. 1), as well as quantizing the base certificate parameters η(n) into Nbin bins reduces the number of variables and constraints from Din + Dout and Dout + 1 to Nin + Nout · Nbins and Nout · Nbins + 1, respectively.We can thus control the problem size independent of the data’s dimensionality. We further derive a linear relaxation of the mixed-integer problem, which can be more efficiently solved while preserving the soundness of the certificate. 7 LIMITATIONS The main limitation of our approach is that it assumes softly local models. While it can be applied to arbitrary multi-output classifiers, it may not necessarily result in better certificates than randomized smoothing with i.i.d. distributions. Furthermore, choosing the smoothing distributions requires some a-priori knowledge or assumptions about which parts of the input are how relevant to making a prediction. 
Our experiments show that natural assumptions like homophily can be sufficient for choosing effective smoothing distributions. But doing so in other tasks may be more challenging. A limitation of (most) randomized smoothing certificates is that they use sampling to approximate the smoothed classifier. Because we use different smoothing distributions for different outputs, we can only use a fraction of the samples for each output. As discussed in Section A.1, we can alleviate this problem by sharing smoothing distributions among multiple outputs. Our experiments show that, despite this issue, our method outperforms certificates that use a single smoothing distribution. Still, future work should try to improve the sample efficiency of randomized smoothing (for example by developing more methods for de-randomized smoothing (Levine & Feizi, 2020)). Any such advance could then be incorporated into our localized smoothing framework.

8 EXPERIMENTAL EVALUATION Our experimental evaluation has three objectives: 1.) Verifying our main claim that localized randomized smoothing offers a better trade-off between accuracy and certifiable robustness than smoothing with i.i.d. distributions. 2.) Determining to what extent the linear program underlying the proposed collective certificate strengthens our robustness guarantees. 3.) Assessing the efficacy of our novel variance smoothing certificate for binary data. Any of the used datasets and classifiers only serve as a means of comparing certificates. We thus use well-known and well-established architectures instead of overly focusing on maximizing prediction accuracy by using the latest SOTA models. We use two metrics to quantify certificate strength: certified accuracy (i.e. the percentage of correct and certifiably robust predictions) and certified ratio (i.e. the percentage of certifiably robust predictions, regardless of correctness)³. As single-number metrics, we report the AUC of the certified accuracy/ratio functions w.r.t. the adversarial budget (not to be confused with certifying some AUC metric). For localized smoothing, we evaluate both the naïve collective certificate, i.e. certifying predictions independently (see Section 4), and the proposed LP-based certificate (using the linearly relaxed version from Appendix A.4). We compare our method to two baselines using i.i.d. randomized smoothing: the naïve collective certificate and center smoothing (Kumar & Goldstein, 2021). For softly local models, the certificate of Schuchardt et al. (2021) is equivalent to the naïve baseline. When used to certify the number of robust predictions, the segmentation certificate of Fischer et al. (2021) is at most as strong as the naïve baseline (see Section C.4). Thus, our method is compared to all existing collective certificates listed in Section 2. In all experiments, we use Monte Carlo randomized smoothing. More details on the experimental setup can be found in Section E.

8.1 SEMANTIC SEGMENTATION Dataset and model. We evaluate our certificate for continuous data and l2 perturbations on the Pascal-VOC 2012 segmentation validation set. Training is performed on 10582 pairs of training samples extracted from SBD⁴ (Hariharan et al., 2011). To increase batch sizes and thus allow a more thorough investigation of different smoothing parameters, all images are downscaled to 50% of their original size. Our base model is a U-Net segmentation model with a ResNet-18 backbone.

²Because η(n) is the smallest value bᵀw(n) can take on, i.e. min over b ∈ R^{Din}_+ of bᵀw(n) s.t. sum{b} ≤ ε^p.
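The two evaluation metrics and their AUC summaries described above can be computed directly from per-prediction certificates. Below is a minimal sketch (our own, with assumed inputs): `certified_radius[n]` is the largest budget for which prediction n is certifiably robust and `correct[n]` indicates whether it matches the ground truth. For the LP-based collective certificate there is no per-prediction radius; there, the robust/not-robust decision at each budget comes from solving the collective problem for that budget instead.

import numpy as np

def certified_curves(certified_radius, correct, eps_grid):
    """Certified accuracy and certified ratio as functions of the adversarial budget.

    A prediction counts as robust at budget eps if its certified radius is at
    least eps (the equality convention is a choice made for this sketch).
    """
    certified_radius = np.asarray(certified_radius, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    eps_grid = np.asarray(eps_grid, dtype=float)
    robust = certified_radius[None, :] >= eps_grid[:, None]   # (n_eps, n_pred)
    certified_ratio = robust.mean(axis=1)
    certified_accuracy = (robust & correct[None, :]).mean(axis=1)
    return certified_accuracy, certified_ratio

def curve_auc(eps_grid, values):
    """Area under a certified accuracy / certified ratio curve (trapezoid rule)."""
    eps_grid = np.asarray(eps_grid, dtype=float)
    values = np.asarray(values, dtype=float)
    return float(np.sum(0.5 * (values[1:] + values[:-1]) * np.diff(eps_grid)))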
To obtain accurate and robust smoothed classifiers, base models should be trained on the smoothing distribution. We thus train 51 different instances of our base model, augmenting the training data with a different σtrain ∈ {0, 0.01, . . . , 0.5}. At test time, when evaluating a baseline i.i.d. certificate with smoothing distribution N(0, σ), we load the model trained with σtrain = σ. To perform localized randomized smoothing, we choose parameters σmin, σmax ∈ R+ and partition all images into regular grids of size 4 × 6 (similar to the example in Fig. 1). To classify pixels in grid cell (i, j), we sample noise for grid cell (k, l) using N(0, σ′), with σ′ ∈ [σmin, σmax] chosen proportional to the distance of (i, j) and (k, l) (more details in Section E.2.1). As the base model, we load the one trained with σtrain = σmin. Using the same distribution at train and test time for the i.i.d. baselines but not for localized smoothing is meant to skew the results in the baseline's favor. But, in Section E.2.3, we also repeat our experiments using the same base model for i.i.d. and localized smoothing.

Evaluation. The main goal of our experiments on segmentation is to verify that localized smoothing can offer a better trade-off between accuracy and certifiable robustness. That is, for all or most σ, there are σmin, σmax such that the locally smoothed model has higher accuracy and certifiable collective robustness than i.i.d. smoothing baselines using N(0, σ). Because σ, σmin, σmax ∈ R+, we cannot evaluate all possible combinations. We therefore use the following scheme: We focus on the case σ ∈ [0, 0.5], which covers all distributions used in (Kumar & Goldstein, 2021) and (Fischer et al., 2021). First, we evaluate our two baselines for five σ ∈ {0.1, 0.2, 0.3, 0.4, 0.5}. This results in baseline models with diverse levels of accuracy and robustness (e.g. the accuracy of the naïve baseline shrinks from 87.7% to 64.9% and the AUC of its certified accuracy grows from 0.17 to 0.644). We then test whether, for each of the σ, we can find σmin, σmax such that the locally smoothed model attains higher accuracy and is certifiably more robust. Finally, to verify that {0.1, 0.2, 0.3, 0.4, 0.5} were not just a particularly poor choice of baseline parameters, we fix the chosen σmin, σmax. We then perform a fine-grained search over σ ∈ [0, 0.5] with resolution 0.01 to find a baseline model that has at least the same accuracy and certifiable robustness (as measured by certificate AUC) as any of the fixed locally smoothed models. If this is not possible, this provides strong evidence that the proposed smoothing scheme and certificate indeed offer a better trade-off.

Fig. 2 shows one example. For σ = 0.4, the naïve i.i.d. baseline has an accuracy of 72.5%. With σmin = 0.25, σmax = 1.5, the proposed localized smoothing certificate yields both a higher accuracy of 76.4% and a higher certified accuracy for all ε. It can certify robustness for ε up to 1.825, compared to 1.45 for the baseline, and the AUC of its certified accuracy curve is 43.1% larger. Fig. 2 also highlights the usefulness of the linear program we derived in Section 6: Evaluating the localized smoothing base certificates independently, i.e. computing the naïve collective certificate (dotted orange line), is not sufficient for outperforming the baseline.

³In the case of image segmentation, we compute these metrics per image and then average over the dataset.
⁴Also known as "Pascal trainaug" (Fischer et al., 2021).
But combining them via the proposed linear program drastically increases the certified accuracy The results for all other combinations of smoothing distribution parameters, both baselines and both metrics of certificate strength can be found in Section E.2.3. Tables 1 and 2 summarize the first part of our evaluation procedure, in which we optimize the localized smoothing parameters. Safe for one exception (with σ = 0.2, center smoothing has a lower accuracy, but slightly larger certified ratio), the locally smoothed models have the same or higher accuracy, but provide stronger robustness guarantees. The difference is particularly large for σ ∈ {0.3, 0.4, 0.5}, where the accuracy of models smoothed with i.i.d. noise drops off, while our localized smoothing distribution preserves the most relevant parts of the image to allow for high accuracy. Table 5 summarizes the second part of our evaluation scheme, in which we perform a fine-grained search over [0, 0.5]. We find that there is no σ such that either of the i.i.d. baselines can outperform any of the chosen locally smoothed models w.r.t. AUC of their certified accuracy or certified ratio curves. This is ample evidence for our claim that localized smoothing offers a better trade-off than i.i.d. smoothing. Also, the collective LPs caused little computational overhead (avg. 0.68 s per LP, more details in Section E.2.3). 8.2 NODE CLASSIFICATION Dataset and model. We evaluate our certificate for binary data on the Cora-ML node classification dataset. We use two different base-models: Approximate Personalized Propagation of Neural Predictions (APPNP) (Klicpera et al., 2019) and a 6-layer Graph Convolutional network (GCN) (Kipf & Welling, 2017). Both models have a receptive field that covers most or all of the graph, meaning they are softly local. For details on model and training parameters, see Section E.3.1. As center smoothing has only been derived for Gaussian smoothing, we only compare to the naı̈ve baseline. For both, the baseline and our localized smoothing certificate, we use sparsity-aware randomized smoothing (Bojchevski et al., 2020) , i.e. flip 1-bits and 0-bits with different probabilities (θ− and θ+, respectively), which allows us to certify different levels of robustness to deletions and additions of bits. With localized randomized smoothing, we use the variance smoothing base certificate derived in Section B.2.2. We choose the distribution parameters for localized smoothing based on an assumption of homophily, i.e. nearby nodes are most relevant for classifying a node. We partition the graph into 5 clusters and define parameters θ±min and θ ± max. When classifying a node in cluster i, we randomly smooth attributes in cluster j with θ+ij , θ − ij that are based on linearly interpolating in [θ−min, θ − max] and [θ − min, θ − max] based on the affinity of the clusters (details in Section E.3.1). Evaluation. We first evaluate the new variance-based certificate and compare it to the certificate derived by Bojchevski et al. (2020). For this, we use only one cluster, meaning we use the same smoothing distribution for both. Fig. 11 in Section E.3 shows that the variance certificate is weaker than the baseline for additions, but better for deletions. It appears sufficiently effective to be used as a base certificate and integrated into a stronger, collective certificate. The parameter space of our smoothing distributions is large. 
For the localized approach we have four continuous parameters, as we have to specify both the minimal and maximal noise values. Therefore, it is difficult to show that our approach achieves a better accuracy-robustness trade-off over the whole noise space. However, we can investigate the accuracy-robustness trade-off within some areas of this space. For the localized approach we choose a few fixed combinations of the noise parameters θ±min and θ±max. To show our claim, we then optimise the baselines with parameters in an interval around our θ+min and θ − min. This is a smaller space, as the baselines only have two parameters. We select the baseline whose certified accuracy curve has the largest AUC. We perform the search for the best baseline for the addition and deletion scenario independently, i.e., the best baseline model for addition and deletion does not have to be the same. In Fig. 3, we see the certified accuracy of an APPNP model for a varying number of attribute additions and deletions (left and right respectively). To find the best distribution parameters for the baselines, we evaluated combinations of θ+ ∈ {0.04, 0.055, 0.07} and θ− ∈ [0.1, . . . , 0.827], using 11 equally spaced values for the interval. For adversarial additions, the best baseline yields a certified accuracy curve with an AUC of 4.51 compared to our 5.65. The best baseline for deletions has an AUC of 7.76 compared to our 16.26. Our method outperforms these optimized baselines for most adversarial budgets, while maintaining the same clean accuracy (i.e. certified accuracy at = 0). Experiments with different noise parameters and classifiers can be found in Section E.3. In general, we find that we significantly outperform the baseline when certifying robustness to deletions, but often have weaker certificates for additions (which may be inherent to the variance smoothing base certificates). Due to the large continuous parameter space, we cannot claim that localized smoothing outperforms the naı̈ve baseline everywhere. However, our results show that, for the tested parameter regions, localized smoothing can provide a significantly better accuracy-robustness trade-off. We found that using the collective LP instead of naı̈vely combining the base certificates can result in much stronger certificates: The AUC of the certified accuracy curve (averaged over all experiments) increased by 38.8% and 33.6% for addition and deletion, respectively. The collective LPs caused little computational overhead (avg. 10.9 s per LP, more details in Section E.3.3). 9 CONCLUSION In this work, we have proposed the first collective robustness certificate for softly local multi-output classifiers. It is based on localized randomized smoothing, i.e. randomly smoothing different outputs using different non-i.i.d. smoothing distributions matching the model’s locality. We have shown how per-output certificates based on localized smoothing can be computed and that they share a common interface. This interface allows them to be combined into a strong collective robustness certificate. Experiments on image segmentation and node classification tasks demonstrate that localized smoothing can offer a better robustness-accuracy trade-off than existing randomized smoothing techniques. Our results show that locality is linked to robustness, which suggests the research direction of building more effective local models to robustly solve multi-output tasks. 
10 REPRODUCIBILITY STATEMENT We prove all theoretical results that were not already derived in the main text in Appendices A to C. To ensure reproducibility of the experimental results we provide detailed descriptions of the evaluation process with the respective parameters in Section E.2 and Section E.3. Code will be made available to reviewers via an anonymous link posted on OpenReview, as suggested by the guidelines. 11 ETHICS STATEMENT In this paper, we propose a method to increase the robustness of machine learning models against adversarial perturbations and to certify their robustness. We see this as an important step towards general usage of models in practice, as many existing methods are brittle to crafted attacks. Through the proposed method, we hope to contribute to the safe usage of machine learning. However, robust models also have to be seen with caution. As they are harder to fool, harmful purposes like mass surveillance are harder to avoid. We believe that it is still necessary to further research robustness of machine learning models as the positive effects can outweigh the negatives, but it is necessary to discuss the ethical implications of the usage in any specific application area. A.1 SHARING SMOOTHING DISTRIBUTIONS AMONG OUTPUTS In principle, our proposed certificate allows a different smoothing distribution Ψ(n) to be used per output gn of our base model. In practice, where we have to estimate properties of the smoothed classifier using Monte Carlo methods, this is problematic: Samples cannot be re-used; each of the many outputs requires its own round of sampling. We can increase the efficiency of our localized smoothing approach by partitioning our Dout outputs into Nout subsets that share the same smoothing distribution. When making smoothed predictions or computing base certificates, we can then reuse the same samples for all outputs within each subset. More formally, we partition our Dout output dimensions into sets K(1), . . . , K(Nout) with ⋃̇_{i=1}^{Nout} K(i) = {1, . . . , Dout}. (6) We then associate each set K(i) with a smoothing distribution Ψ(i). For each base model output gn with n ∈ K(i), we then use smoothing distribution Ψ(i) to construct the smoothed output fn, e.g. fn(x) = argmax_{y∈Y} Pr_{z∼Ψ(i)} [gn(x + z) = y] (note that we use a different smoothing paradigm for binary data, see Section 5). A.2 QUANTIZING CERTIFICATE PARAMETERS Recall that our base certificates from Section 5 are defined by a linear inequality: A prediction yn = fn(x) is robust to a perturbed input x′ ∈ X^Din if ∑_{d=1}^{Din} w(n)_d · |x′_d − x_d|^p < η(n), for some p ≥ 0. The weight vectors w(n) ∈ R^Din only depend on the smoothing distributions. A side effect of sharing the same smoothing distribution Ψ(i) among all outputs from a set K(i), as discussed in the previous section, is that the outputs also share the same weight vector w(i) ∈ R^Din with ∀n ∈ K(i) : w(i) = w(n). Thus, for all smoothed outputs fn with n ∈ K(i), the smoothed prediction yn is robust if ∑_{d=1}^{Din} w(i)_d · |x′_d − x_d|^p < η(n). Evidently, the base certificates for outputs from a set K(i) only differ in their parameter η(n). Recall that in our collective linear program we use a vector of variables t ∈ {0, 1}^Dout to indicate which predictions are robust according to their base certificates (see Section 6). If there are two outputs fn and fm with η(n) = η(m), then fn and fm have the same base certificate and their robustness can be modelled by the same indicator variable.
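As a small illustration of this observation, outputs of one subset can be grouped by their η value before the linear program is built; all members of a group then share one indicator variable. The rounding tolerance below is my own assumption, used only to merge numerically indistinguishable values.

```python
import numpy as np
from collections import defaultdict

def group_outputs_by_eta(subset, etas, decimals=6):
    """Group the outputs of one subset K_i by their base certificate parameter
    eta^(n); each group can share a single indicator variable."""
    groups = defaultdict(list)
    for n in subset:
        groups[round(float(etas[n]), decimals)].append(n)
    return dict(groups)

# Hypothetical example: six outputs in one subset, three distinct eta values.
etas = np.array([0.8, 0.8, 1.2, 0.8, 1.2, 2.5])
print(group_outputs_by_eta(range(6), etas))  # {0.8: [0, 1, 3], 1.2: [2, 4], 2.5: [5]}
```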
Conversely, for each set of outputs K(i), we only need one indicator variable per unique η(n). By quantizing the η(n) within each subset K(i) (for example by defining equally sized bins between minn∈K(i) η(n) and maxn∈K(i) η(n) ), we can ensure that there is always a fixed number Nbins of indicator variables per subset. This way, we can reduce the number of indicator variables from Dout to Nout ·Nbins. To implement this idea, we define matrix of thresholds E ∈ RNout×Nbins with ∀i : min {Ei,:} ≤ minn∈K(i) ({ η(n) | n ∈ K(i) }) . We then define a function ξ : {1, . . . , Nout} × R→ R with ξ(i, η) = max ({Ei,j | j ∈ {1, . . . , Nbins ∧ Ei,j < η}) (7) that quantizes base certificate parameter η from output subset K(i) by mapping it to the next smallest threshold in Ei,:. For feasibility, like in Section 6 we need to compute the constant η(i) = min b∈RDin+ bTw (i) d s.t. sum{b} ≤ p to ensure feasibility of the problem. Note that, be- cause all outputs from a subset K(i) share the same weight vector w(i), we only have to compute this constant once per subset. We can bound the collective robustness of the targeted dimensions T of our vector of predictions y = f(x) as follows: min ∑ i∈{1,...,Nout} ∑ j∈{1,...,Nbins} Ti,j ∣∣∣{n ∈ T ∩K(i) ∣∣∣ξ (i, η(n)) = Ei,j }∣∣∣ (8) s.t. ∀i, j : bTw(i) ≥ Ti,jη(i) + (1− Ti,j)Ei,j , sum{b} ≤ p (9) b ∈ RDin+ , T ∈ {0, 1}Nout×Nbins . (10) Constraint Eq. 9 ensures that Ti,j is only set to 0 if bTw(i) ≥ Ei,j , i.e. all predictions from subset K(i) whose base certificate parameter η(n) is quantized to Ei,j are no longer robust. When this is the case, the objective function decreases by the number of these predictions. For Nout = Dout, Nbins = 1 and En,1 = η(n), we recover our general certificate from Section 6. Note that, if the quantization maps any parameter η(n) to a smaller number, the set H(n) becomes more restrictive, i.e. yn is considered robust to a smaller set of perturbed inputs. Thus, Eq. 8 is a lower bound on our general certificate from Section 6. A.3 SHARING NOISE LEVELS AMONG INPUTS Similar to how partitioning the output dimensions allows us to control the number of output variables t, partitioning the input dimensions and using the same noise level within each partition allows us to control the number of variables b that model the allocation of adversarial budget. Assume that we have partitioned our output dimensions into Nout subsets K(1), . . . ,K(Nout , with outputs in each subset sharing the same smoothing distribution Ψ(i), as explained in Section A.1. Let us now define Nin input subsets J(1), . . . , J(Nin) with⋃̇Nout i=1 J(i) = {1, . . . , Dout}. (11) Recall that a prediction yn = fn(x) with n ∈ K(i) is robust to a perturbed input x′ ∈ XDin if ∑D d=1 w (i) d · |x′d − xd| p < η(n) and that the weight vectors w(i) only depend on the smoothing distributions. Assume that we choose each smoothing distribution Ψ(i) such that ∀l ∈ {1, . . . , Nin},∀d, d′ ∈ J(l) : w(i)d = w (i) d′ , i.e. all input dimensions within each set J(l) have the same weight. This can be achieved by choosing Ψ(i) so that all dimensions in each input subset Jl are smoothed with the noise level (note that we can still use different Ψ(i), i.e. different noise levels for smoothing different sets of outputs). For example, one could use a Gaussian distribution with covariance matrix Σ = diag (σ)2 with ∀l ∈ {1, . . . , Nin},∀d, d′ ∈ J(l) : σd = σd′ . In this case, the evaluation of our base certificates can be simplified. 
Prediction yn = fn(x) is robust to a perturbed input x′ ∈ XDin if D∑ d=1 w (i) d · |x ′ d − xd| p < η(n) (12) = Nin∑ l=1 u(i) · ∑ d∈J(l) |x′d − xd| p < η(n), (13) with u ∈ RNin and ∀i ∈ {1, . . . , Nout},∀l ∈ {1, . . . , Nin},∀d ∈ J : uil = wid. That is, we can replace each weight vector w(i) that has one weight w(i)d per input dimension d with a smaller weight vector u(i) with one weight u(i)l per input subset J(l). For our linear program, this means that we no longer need a budget vector b ∈ RDin+ to model the element-wise distance |x′d − xd| p in each dimension d. Instead, we can use a smaller budget vector b ∈ RNin+ to model the overall distance within each input subset J(l), i.e. ∑ d∈J(l) |x′d − xd| p. Combined with the quantization of certificate parameters from the previous section, our optimization problem becomes min ∑ i∈{1,...,Nout} ∑ j∈{1,...,Nbins} Ti,j ∣∣∣{n ∈ T ∩K(i) ∣∣∣ξ (i, η(n)) = Ei,j }∣∣∣ (14) s.t. ∀i, j : bTu(i) ≥ Ti,jη(i) + (1− Ti,j)Ei,j , sum{b} ≤ p, (15) b ∈ RNin+ , T ∈ {0, 1}Nout×Nbins . (16) with u ∈ RNin and ∀i ∈ {1, . . . , Nout},∀l ∈ {1, . . . , Nin},∀d ∈ J : ωil = wid. For Nout = Dout, Nin = Din, Nbins = 1 and En,1 = η(n), we recover our general certificate from Section 6. When certifying robustness for binary data, we impose different constraints on b. To model that the adversary can not flip more bits than are present within each subset, we use a budget vector b ∈ NNin0 with ∀l ∈ {1, . . . , Nin} : bl ≤ ∣∣J(l)∣∣, instead of a continuous budget vector b ∈ RNin+ . A.4 LINEAR RELAXATION Combining the previous steps allows us to reduce the number of problem variables and linear constraints from Din + Dout and Dout + 1 to Nin + Nout · Nbins and Nout · Nbins + 1, respectively. Still, finding an optimal solution to the mixed-integer linear program may be too expensive. One can obtain a lower bound on the optimal value and thus a valid, albeit more pessimistic, robustness certificate by relaxing all to be continuous. When using the general certificate from Section 6, the binary vector t ∈ {0, 1}Dout can be relaxed to t ∈ [0, 1]Dout . When using the certificate with quantized base certificate parameters from Section A.2 or Section A.3, the binary matrix T ∈ [0, 1]Nout×Nbins can be relaxed to T ∈ [0, 1]Nout×Nbins . Conceptually, this means that predictions can be partially certified, i.e. tn ∈ (0, 1) or Ti,j ∈ (0, 1). In particular, a prediction can be partially certified even if we know that is impossible to attack under the collective perturbation model Bx = { x′ ∈ XDin | ||x′ − x||p ≤ } . Just like Schuchardt et al. (2021), who encountered the same problem with their collective certificate, we circumvent this issue by first computing a set L ⊆ T of all targeted predictions in T that are guaranteed to always be robust: L = { n ∈ T ∣∣∣∣∣ ( max x∈Bx D∑ d=1 w (n) d · |x ′ d − xd| p ) < η(n) } (17) = { n ∈ T ∣∣∣max(max{w(n)} · p, 0) < η(n)} . (18) The equality follows from the fact that the most effective way of attacking a prediction is to allocate all adversarial budget to the least robust dimension, i.e. the dimension with the largest weight – unless all weights are negative. Because we know that all predictions with indices in L are robust, we do not have to include them in the collective optimization problem and can instead compute |L|+ min x′∈Bx ∑ n∈T\L I [ x′ ∈ H(n) ] . (19) The r.h.s. optimization can be solved using the general collective certificate from Section 6 or any of the more efficient, modified certificates from previous sections. 
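A minimal sketch of how the pre-certified set L from Eq. 17–18 might be computed is given below; the array layout and function name are my own.

```python
import numpy as np

def always_robust_indices(weights, etas, epsilon, p, targeted):
    """Predictions in `targeted` that are robust to every x' with
    ||x' - x||_p <= epsilon according to their base certificate (Eq. 18):
    the strongest single-prediction attack puts the whole budget epsilon**p
    on the largest (positive) weight.

    weights: (n_outputs, D_in) base certificate weights w^(n).
    etas:    (n_outputs,) base certificate parameters eta^(n).
    """
    worst_case = np.maximum(weights.max(axis=1) * epsilon ** p, 0.0)
    return [n for n in targeted if worst_case[n] < etas[n]]
```

Predictions returned by this function can be excluded from the optimization problem and simply added to its optimal value, as in Eq. 19.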
When using the general collective certificate from Section 6 with binary data, the budget variables b ∈ {0, 1}Din can be relaxed to b ∈ [0, 1]Din . When using the modified collective certificate from Section A.3, the budget variables with b ∈ NNin0 can be relaxed to b ∈ R Nin + . The additional constraint ∀l ∈ {1, . . . , Nin} : bl ≤ ∣∣J(l)∣∣ can be kept in order to model that the adversary cannot flip (or partially flip) more bits than are present within each input subset J(l). B BASE CERTIFICATES In the following, we show why the base certificates presented in Section 5 hold and present alternatives for other collective perturbation models. B.1 GAUSSIAN SMOOTHING FOR l2 PERTURBATIONS OF CONTINUOUS DATA Proposition 1. Given an output gn : RDin → Y, let fn(x) = argmaxy∈Y Prz∼N (x,Σ) [gn(z) = y] be the corresponding smoothed output with Σ = diag (σ)2 andσ ∈ RDin+ . Given an inputx ∈ RDin and smoothed prediction yn = fn(x), let q = Prz∼N (x,Σ) [gn(z) = yn]. Then, ∀x′ ∈ H(n) : fn(x ′) = yn with H(n) defined as in Eq. 2, wd = 1σd2 , η = ( Φ(−1)(q) )2 and κ = 2. Proof. Based on the definition of the base certificate interface, we need to show that, ∀x′ ∈ H : fn(x ′) = yn with H = { x′ ∈ RDin ∣∣∣∣∣ Din∑ d=1 1 σ2d · |xd − x′d|2 < ( Φ−1(q) )2} . (20) Eiras et al. (2021) have shown that under the same conditions as above, but with a general covariance matrix Σ ∈ RDin×Din+ , a prediction yn is certifiably robust to a perturbed input x′ if√ (x− x′)Σ−1(x− x′) < 1 2 ( Φ−1(q)− Φ−1(q′) ) , (21) where q′ = maxy′n 6=yn Prz∼N (x,Σ) [gn(z) = y ′ n] is the probability of the second most likely prediction under the smoothing distribution. Because the probabilities of all possible predictions have to sum up to 1, we have q′ ≤ 1 − q. Since Φ−1 is monotonically increasing, we can obtain a lower bound on the r.h.s. of Eq. 21 and thus a more pessimistic certificate by substituting 1 − q for q′ (deriving such a ”binary certificate” from a ”multiclass certificate” is common in randomized smoothing and was already discussed in (Cohen et al., 2019)):√ (x− x′)Σ−1(x− x′) < 1 2 ( Φ−1(q)− Φ−1(1− q) ) , (22) In our case, Σ is a diagonal matrix diag (σ)2 with σ ∈ RDin+ . Thus Eq. 22 is equivalent to√√√√Din∑ d=1 (xd − x′d) 1 σ2d (xd − x′d) < 1 2 ( Φ−1(q)− Φ−1(1− q) ) . (23) Finally, using the fact that Φ−1(q)−Φ−1(1− q) = 2Φ−1(q) and eliminating the square root shows that we are certifiably robust if Din∑ d=1 1 σ2d · |xd − x′d|2 < ( Φ−1(q) )2 . (24) B.1.1 UNIFORM SMOOTHING FOR l1 PERTURBATIONS OF CONTINUOUS DATA An alternative base certificate for l1 perturbations is again due to Eiras et al. (2021). Using uniform instead of Gaussian noise later allows us to collective certify robustness to l1-norm-bound perturbations. In the following U(x,λ) with x ∈ RD, λ ∈ RD+ refers to a vector-valued random distribution in which the d-th element is uniformly distributed in [xd − λd, xd + λd]. Proposition 2. Given an output gn : RDin → Y, let f(x) = argmaxy∈Y Prz∼U(x,λ) [g(z) = y] be the corresponding smoothed classifier with λ ∈ RDin+ . Given an input x ∈ RDin and smoothed prediction y = f(x), let p = Prz∼U(x,λ) [g(z) = y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = 1/λd, η = Φ−1(q) and κ = 1. Proof. Based on the definition of H(n), we need to prove that ∀x′ ∈ H : fn(x′) = yn with H = { x′ ∈ RDin | Din∑ d=1 1 λd · |xd − x′d| < Φ−1(q) } , (25) Eiras et al. 
(2021) have shown that under the same conditions as above, a prediction yn is certifiably robust to a perturbed input x′ if Din∑ d=1 | 1 λd · (xd − x′d) | < 1 2 ( Φ−1(q)− Φ−1(1− q) ) , (26) where q′ = maxy′n 6=yn Prz∼U(x,λ) [gn(z) = y ′ n] is the probability of the second most likely prediction under the smoothing distribution. As in our previous proof for Gaussian smoothing, we can obtain a more pessimistic certificate by substituting 1−q for q′. Since Φ−1(q)−Φ−1(1−q) = 2Φ−1(q) and all λd are non-negative, we know that our prediction is certifiably robust if Din∑ d=1 1 λd · |xd − x′d| < Φ−1(p). (27) B.2 VARIANCE SMOOTHING We propose variance smoothing as a base certificate for binary data. Variance smoothing certifies predictions based on the mean and variance of the softmax score associated with a predicted label. It is in principle applicable to arbitrary data types. We focus on discrete data, but all results can be generalized from discrete to continuous data by replacing any sum over probability mass functions with integrals over probability density functions. We first derive a general form of variance smoothing before discussing our certificates for binary data in Section B.2.1 and Section B.2.2. Variance smoothing assumes that we make predictions by randomly smoothing a base model’s softmax scores. That is, given base model g : X→ ∆|Y| mapping from an arbitrary discrete input space X to scores from the |Y|-dimensional probability simplex ∆|Y|, we define the smoothed classifier f(x) = argmaxy∈YEz∼Ψ(x) [g(z)y]. Here, Ψ(x) is an arbitrary distribution over X parameterized by x, e.g a Normal distribution with mean x. The smoothed classifier does not return the most likely prediction, but the prediction associated with the highest expected softmax score. Given an input x ∈ X, smoothed prediction y = f(x) and a perturbed input x′ ∈ X, we want to determine whether f(x′) = y. By definition of our smoothed classifier, we know that f(x′) = y if y is the label with the highest expected softmax score. In particular, we know that f(x′) = y if y’s softmax score is larger than all other softmax scores combined, i.e. Ez∼Ψ(x′) [g(z)y] > 0.5 =⇒ f(x′) = y. (28) Computing Ez∼Ψ(x′) [g(z)y] exactly is usually not tractable – especially if we later want to evaluate robustness to many x′ from a whole perturbation model B ⊆ X. Therefore, we compute a lower bound on Ez∼Ψ(x′) [g(z)y]. If even this lower bound is larger than 0.5, we know that prediction y is certainly robust. For this, we define a set of functions H with gy ∈ H and compute the minimum softmax score across all functions from H: min h∈H Ez∼Ψ(x′) [h(z)] > 0.5 =⇒ f(x′) = y. (29) For our variance smoothing approach, we define H to be the set of all functions that have a larger or equal expected value and a smaller or equal variance, compared to our base model g applied to unperturbed input x. Let µ = Ez∼Ψ(x) [g(z)y] be the expected softmax score of our base model g for label y. Let σ2 = Ez∼Ψ(x) [ (g(z)y − ν)2 ] be the expected squared distance of the softmax score from a scalar ν ∈ R. (Choosing ν = µ yields the variance of the softmax score. An arbitrary ν is only needed for technical reasons related to Monte Carlo estimation Section C.2). Then, we define H = { h : X→ R ∣∣∣ Ez∼Ψ(x) [h(z)] ≥ µ ∧ Ez∼Ψ(x) [(h(z)− ν)2] ≤ σ2} (30) Clearly, by the definition of µ and σ2, we have gy ∈ H. Note that we do not restrict functions from H to the domain [0, 1], but allow arbitrary real-valued outputs. By evaluating Eq. 28 with H defined as in Eq. 
29, we can determine if our prediciton is robust. To compute the optimal value , we need the following two Lemmata: Lemma 1. Given a discrete set X and the set Π of all probability mass functions over X, any two probability mass functions π1, π2 ∈ Π fulfill∑ z∈X π2(z) π1(z) π2(z) ≥ 1. (31) Proof. For a fixed probability mass function π1, Eq. 31 is lower-bounded by the minimal expected likelihood ratio that can be achieved by another π̃(z) ∈ Π:∑ z∈X π2(z) π1(z) π2(z) ≥ min π̃∈Π ∑ z∈X π̃(z) π1(z) π̃(z). (32) The r.h.s. term can be expressed as the constrained optimization problem min π̃ ∑ z∈X π̃(z) π1(z) π̃(z) s.t. ∑ z∈X π̃(z) = 1 (33) with the corresponding dual problem max λ∈R min π̃ ∑ z∈X π̃(z) π1(z) π̃(z) + λ ( −1 + ∑ z∈X π̃(z) ) . (34) The inner problem is convex in each π̃(z). Taking the gradient w.r.t. to π̃(z) for all z ∈ X shows that it has its minimum at ∀z ∈ X : π̃(z) = −λπ1(z)2 . Substituting into Eq. 34 results in max λ∈R ∑ z∈X λ2π1(z) 2 4π1(z) + λ ( −1− ∑ z∈X λπ1(z) 2 ) (35) = max λ∈R −λ2 ∑ z∈X π1(z) 4 − λ (36) = max λ∈R −λ 2 4 − λ (37) = 1. (38) Eq. 37 follows from the fact that π1(z) is a valid probability mass function. Due to duality, the optimal dual value 1 is a lower bound on the optimal value of our primal problem Eq. 31. Lemma 2. Given a probability distribution D over a R and a scalar ν ∈ R, let µ = Ez∼D and ξ = Ez∼D [ (z − ν)2 ] . Then ξ ≥ (µ− ν)2 Proof. Using the definitions of µ and ξ, as well as some simple algebra, we can show: ξ ≥ (µ− ν)2 (39) ⇐⇒ Ez∼D [ (z − ν)2 ] ≥ µ2 − 2µν + ν2 (40) ⇐⇒ Ez∼D [ z2 − 2zν + ν2 ] ≥ µ2 − 2µν + ν2 (41) ⇐⇒ Ez∼D [ z2 − 2zν + ν2 ] ≥ µ2 − 2µν + ν2 (42) ⇐⇒ Ez∼D [ z2 ] − 2µν + ν2 ≥ µ2 − 2µν + ν2 (43) ⇐⇒ Ez∼D [ z2 ] ≥ µ2 (44) It is well known for the variance that Ez∼D [ (z − µ)2 ] = Ez∼D [ z2 ] − µ2. Because the variance is always non-negative, the above inequality holds. Using the previously described approach and lemmata, we can show the soundness of the following robustness certificate: Theorem 3. Given a model g : X → ∆|Y| mapping from discrete set X to scores from the |Y|-dimensional probability simplex, let f(x) = argmaxy∈YEz∼Ψ(x) [g(z)y] be the corresponding smoothed classifier with smoothing distribution Ψ(x) and probability mass function πx(z) = Prz̃∼Ψ(x) [z̃ = z]. Given an input x ∈ X and smoothed prediction y = f(x), let µ = Ez∼Ψ(x) [g(z)y] and σ2 = Ez∼Ψ(x) [ (g(z)y − ν)2 ] with ν ∈ R. If ν ≤ µ, we know that f(x′) = y if ∑ z∈X πx′(z) 2 πx(z) < 1 + 1 σ2 − (µ− ν)2 ( µ− 1 2 ) . (45) Proof. Following our discussion above, we know that f(x′) = y if Ez∼Ψ(x′) [g(z)y] > 0.5 with H defined as in Section 5. We can compute a (tight) lower bound on minh∈H Ez∼Ψ(x′) by following the functional optimization approach for randomized smoothing proposed by Zhang et al. (2020). That is, we solve a dual problem in which we optimize the value h(z) for each z ∈ X. By the definition of the set H, our optimization problem is min h:X→R Ez∼Ψ(x′) [h(z)] s.t. Ez∼Ψ(x) [h(z)] ≥ µ, Ez∼Ψ(x) [ (h(z)− ν)2 ] ≤ σ2. The corresponding dual problem with dual variables α, β ≥ 0 is max α,β≥0 min h:X→R Ez∼Ψ(x′) [h(z)] +α ( µ− Ez∼Ψ(x) [h(z)] ) + β ( Ez∼Ψ(x) [ (h(z)− ν)2 ] − σ2 ) . 
(46) We first move move all terms that don’t involve h out of the inner optimization problem: = max α,β≥0 αµ−βσ2 + min h:X→R Ez∼Ψ(x′) [h(z)]−αEz∼Ψ(x) [h(z)]+βEz∼Ψ(x) [ (h(z)− ν)2 ] (47) Writing out the expectation terms and combining them into one sum (or – in the case of continuous X – one integral), our dual problem becomes = max α,β≥0 αµ− βσ2 + min h:X→R ∑ z∈X h(z)πx′(z)− αh(z)πx(z) + β (h(z)− ν)2 πx(z) (48) (recall that πx′ and πx′ refer to the probability mass functions of the smoothing distributions). The inner optimization problem can be solved by finding the optimal h(z) in each point z: = max α,β≥0 αµ− βσ2 + ∑ z∈X min h(z)∈R h(z)πx′(z)− αh(z)πx(z) + β (h(z)− ν)2 πx(z) (49) Because β ≥ 0, each inner optimization problem is convex in h(z). We can thus find the optimal h∗(z) by setting the derivative to zero: d dh(z) h(z)πx′(z)− αh(z)πx(z) + β (h(z)− ν)2 πx(z) ! = 0 (50) ⇐⇒ πx′(z)− απx(z) + 2β (h(z)− ν)πx(z) ! = 0 (51) =⇒ h∗(z) = − πx ′(z) 2βπx(z) + α 2β + ν. (52) Substituting into Eq. 48 and simplifying leaves us with the dual problem max α,β≥0 αµ− βσ2 − α 2 4β + α 2β − αν + ν − 1 4β ∑ z∈X πx′(z) 2 πx(z) (53) In the following, let us use ρ = ∑ z∈X πx′ (z) 2 πx(z) as a shorthand for the expected likelihood ratio. The problem is concave in α. We can thus find the optimum α∗ by setting the derivative to zero, which gives us α∗ = 2β(µ− ν) + 1. Because β ≥ 0 and ou theorem assumes that ν ≤ µ, α∗ is a feasible solution to the dual problem. Substituting into Eq. 53 and simplifying results in max β≥0 α∗µ− βσ2 − α ∗2 4β + α∗ 2β − α∗ν + ν − 1 4β ρ (54) = max β≥0 β ( (µ− ν)2 − σ2 ) + µ+ 1 4β (1− ρ) . (55) Lemma 1 shows that the expected likelihood ratio ρ is always greater than or equal to 1. Lemma 2 shows that (µ− ν)2 − σ2 ≤ 0. Therefore Eq. 55 is concave in β. The optimal value of β can again be found by setting the derivative to zero: β∗ = √ 1− ρ 4 ((µ− ν)2 − σ2) . (56) Recall that our theorem assumes σ2 ≥ (µ− ν)2 and thus β∗ is real valued. Substituting into Eq. 56 shows that the maximum of our dual problem is µ+ √ (1− p) ((µ− ν)2 − σ2). (57) By duality, this is a lower bound on our primal problem minh∈H Ez∼Ψ(x′) [h(z)]. We know that our prediction is certifiably robust, i.e. f(x) = y, if minh∈H Ez∼Ψ(x′) [h(z)] > 0.5. So, in particular, our prediction is robust if µ+ √ (1− ρ) ((µ− ν)2 − σ2) > 0.5 (58) ⇐⇒ ρ < 1 + 1 σ2 − (µ− ν)2 ( µ− 1 2 )2 (59) ⇐⇒ ∑ z∈X πx′(z) 2 πx(z) < 1 + 1 σ2 − (µ− ν)2 ( µ− 1 2 )2 (60) The last equivalence is the result of inserting the definition of the expected likelihood ratio ρ. With Theorem 3 in place, we can certify robustness for arbitrary smoothing distributions, assuming we can compute the expected likelihood ratio. When we are working with discrete data and the smoothing distributions factorize (but are not necessarily i.i.d.), this can be done efficiently, as the two following base certificates for binary data demonstrate. B.2.1 BERNOULLI VARIANCE SMOOTHING FOR PERTURBATIONS OF BINARY DATA We begin by proving the base certificate presented in Section 5. Recall that we we use a smoothing distribution F(x,θ) with θ ∈ [0, 1]Din that independently flips the d’th bit with probability θd, i.e. for x, z ∈ {0, 1}Din and z ∼ F(x,θ) we have Pr[zd 6= xd] = θd. Theorem 1. Given an output gn : {0, 1}Din → ∆|Y| mapping to scores from the |Y|-dimensional probability simplex, let fn(x) = argmaxy∈YEz∼F(x,θ) [gn(z)y] be the corresponding smoothed classifier with θ ∈ [0, 1]Din . 
Given an input x ∈ {0, 1}Din and smoothed prediction yn = fn(x), let µ = Ez∼F(x,θ) [gn(z)y] and σ2 = Varz∼F(x,θ) [gn(z)y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = ln ( (1−θd)2 θd + (θd) 2 1−θd ) , η = ln ( 1 + 1σ2 ( µ− 12 )2) and κ = 0. Proof. Based on our definition of the base certificate interface from Section 5, we must show that ∀x′ ∈ H : fn(x′) = yn with H = { x′ ∈ {0, 1}Din ∣∣∣∣∣ Din∑ d=1 ln ( (1− θd)2 θd + (θd) 2 1− θd ) · |x′d − xd|0 < ln ( 1 + 1 σ2 ( µ− 1 2 )2)} , (61) Because all bits are flipped independently, our probability mass function πx(z) = Prz̃∼Ψ(x) [z̃ = z] factorizes: πx(z) = Din∏ d=1 πxd(zd) (62) with πxd(zd) = { θd if zd 6= xd 1− θd else . (63) Thus, our expected likelihood ratio can be written as ∑ z∈{0,1}Din πx′(z) 2 πx(z) = ∑ z∈{0,1}Din Din∏ d=1 πx′d(zd) 2 πxd(zd) = Din∏ d=1 ∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) . (64) For each dimension d, we can distinguish two cases: If both the perturbed and unperturbed input are the same in dimension d, i.e. x′d = xd, then πx′ d (z) πxd (z) = 1 and thus ∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) = ∑ zd∈{0,1} πx′d(zd) = θd + (1− θd) = 1. (65) If the perturbed and unperturbed input differ in dimension d, then∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) = (1− θd)2 θd + (θd) 2 1− θd . (66) Therefore, the expected likelihood ratio is Din∏ d=1 ∑ zd∈{0,1} πx′d(zd) 2 πxd(zd) = Din∏ d=1 ( (1− θd)2 θd + (θd) 2 1− θd )|x′d−xd| . (67) Due to Theorem 3 (and using ν = µ when computing the variance), we know that our prediction is robust, i.e. fn(x′) = yn, if ∑ z∈{0,1}Din πx′(z) 2 πx(z) < 1 + 1 σ2 ( µ− 1 2 )2 (68) ⇐⇒ Din∏ d=1 ( (1− θd)2 θd + (θd) 2 1− θd )|x′d−xd| < 1 + 1 σ2 ( µ− 1 2 )2 (69) ⇐⇒ Din∑ d=1 ln ( (1− θd)2 θd + (θd) 2 1− θd ) |x′d − xd| < ln ( 1 + 1 σ2 ( µ− 1 2 )2) . (70) Because xd and x′d are binary, the last inequality is equivalent to Din∑ d=1 ln ( (1− θd)2 θd + (θd) 2 1− θd ) |x′d − xd|0 < ln ( 1 + 1 σ2 ( µ− 1 2 )2) . (71) B.2.2 SPARSITY-AWARE VARIANCE SMOOTHING FOR PERTURBATIONS OF BINARY DATA Sparsity-aware randomized smoothing (Bojchevski et al., 2020) is an alternative smoothing approach for binary data. It uses
1. What is the focus of the paper regarding tasks that involve multiple outputs?
2. What is the novelty of the proposed method compared to existing works?
3. How does the reviewer assess the strengths and weaknesses of the paper's content?
4. Do you have any questions about the key differences between the proposed method and previous research?
5. What are some concerns regarding the evaluation and comparison with other methods?
6. Are there any issues with the figures and their explanations?
7. Why was the certified ratio chosen, and what are some concerns regarding its usage?
8. Are there any suggestions for improving the image segmentation evaluation?
9. What is the computational overhead of finding the optimal σ_min and σ_max?
10. Is there a concern about losing semantic relationships between pixels when partitioning an image into grid cells?
Summary Of The Paper Review
Summary Of The Paper The authors consider tasks mapping a single input to multiple outputs and study robustness certificates against input perturbations. To achieve this goal, the authors propose a collective certificate where each output depends on the entire input but assigns different levels of importance to different input regions, and then derive the collective certificate based on localized randomized smoothing. The proposed collective certificate is evaluated on both image segmentation and node classification tasks. Review Strengths: The studied problem is important and the paper is easy to follow (before the evaluation) in general. The proposed method is somewhat novel. Weaknesses: The key difference between the proposed method and existing works is unclear. The paper lacks a comparison with existing methods. The evaluation is insufficient and some metrics are not reasonable. The main idea is motivated by several existing works and the key difference between the proposed method and these existing works is unclear to me. For instance, what is the key difference between the proposed theoretical results (e.g., Prop 1 & 2) and those in Eiras et al.? Is it possible for Fischer et al. to be adapted to the considered setting? There is no comparison with center smoothing and/or Fischer et al., if such a comparison is possible. The figures are confusing to me. For instance, what is the difference between the "dotted line" and "Naive" in Figure 2? The same applies to all the other figures. Why use the certified ratio? I do not think it is meaningful to certify pixels/nodes that are wrongly classified. Image segmentation is only evaluated on Pascal VOC 2012. I suggest the authors evaluate on more datasets, e.g., Cityscapes, which is used in Fischer et al. The authors use AUC to report the results. It is not standard to use AUC to show image segmentation results. A commonly used metric is mIoU (mean intersection over union). U-Net is an outdated model for semantic segmentation. I suggest the authors evaluate on more recent models such as DeepLabV3, DANet, HRNet, etc. What is the computational overhead of finding the optimal σ_min and σ_max? While computation is an issue for localized RS, partitioning an image into grid cells and certifying grid cells independently already loses the semantic relationship between pixels in different cells and thus violates the purpose of semantic segmentation.
ICLR
Title Localized Randomized Smoothing for Collective Robustness Certification Abstract Models for image segmentation, node classification and many other tasks map a single input to multiple labels. By perturbing this single shared input (e.g. the image) an adversary can manipulate several predictions (e.g. misclassify several pixels). A recent collective robustness certificate provides strong guarantees on the number of predictions that are simultaneously robust. This method is however limited to strictly local models, where each prediction is associated with a small receptive field. We propose a more general collective certificate for the larger class of softly local models, where each output is dependent on the entire input but assigns different levels of importance to different input regions (e.g. based on their proximity in the image). The certificate is based on our novel localized randomized smoothing approach, where the random perturbation strength for different input regions is proportional to their importance for the outputs. The resulting locally smoothed model yields strong collective guarantees while maintaining high prediction quality on both image segmentation and node classification tasks. 1 INTRODUCTION There is a wide range of tasks that require models making multiple predictions based on a single input. For example, semantic segmentation requires assigning a label to each pixel in an image. When deploying such multi-output classifiers in practice, their robustness should be a key concern. After all – just like simple classifiers (Szegedy et al., 2014) – they can fall victim to adversarial attacks (Xie et al., 2017; Zügner & Günnemann, 2019; Belinkov & Bisk, 2018). Even without an adversary, random noise or measuring errors could cause one or multiple predictions to unexpectedly change. In the following, we derive a method that provides provable guarantees on how many predictions can be changed by an adversary. Since all outputs operate on the same input, they also have to be attacked simultaneously by choosing a single perturbed input. While attacks on a single prediction may be easy, attacks on different predictions may be mutually exclusive. We have to explicitly account for this fact to obtain a proper collective robustness certificate that provides tight bounds. There already exists a dedicated collective robustness certificate for multi-output classifiers (Schuchardt et al., 2021), but it is only benefical for models we call strictly local, where each output depends only on a small, well-defined subset of the input. One example are graph neural networks that classify each node in a graph based only on its neighborhood. Multi-output classifiers used in practice, however, are often only softly local. While – unlike strictly local models – all of their predictions are in principle dependent on the entire input, each output may assign different importance to different components. For example, deep convolutional networks used for image segmentation can have very small effective receptive fields (Luo et al., 2016; Liu et al., 2018b), i.e. primarily use a small region of the input in labeling each pixel. Many models used in node classification are based on the homophily assumption that connected nodes are mostly of the same class. Thus, they primarily use features from neighboring nodes to classify each node. Even if an architecture is not inherently softly local, a model may learn a softly local mapping through training. 
For example, a transformer (Vaswani et al., 2017) can in principle attend to any part of an input sequence. However, in practice the learned attention maps may be ”sparse”, with the prediction for each token being determined primarily by a few (not necessarily nearby) tokens (Shi et al., 2021). While an adversarial attack on a single prediction of a softly local model is conceptually no different from that on a single-output classifier, attacking multiple predictions simultaneously can be much more challenging. By definition, adversarial attacks have to be unnoticeable, meaning the adversary only has a limited budget for perturbing the input. When each output is focused on a different part of the input, the adversary has to decide on where to allocate their adversarial budget and may be unable to attack all outputs at once. Our collective robustness certificate explicitly accounts for this budget allocation problem faced by the adversary and can thus provide stronger robustness guarantees. Our certificate is based on randomized smoothing (Liu et al., 2018a; Lécuyer et al., 2019; Cohen et al., 2019). Randomized smoothing is a versatile black-box certification method that has originally been proposed for single-output classifiers. Instead of directly analysing a model, it constructs a smoothed classifier that returns the most likely prediction of the model under random perturbations of its input. One can then use statistical methods to certify the robustness of this smoothed classifier. We discuss more details in Section 2. Randomized smoothing is typically used with i.i.d. noise: Each part of the input (e.g. each pixel) independently undergoes random perturbations sampled from the same noise distribution. One can however also use non-i.i.d. noise (Eiras et al., 2021). This results in a smoothed classifier that is certifiably more robust to parts of the input that are smoothed with higher noise levels (e.g. larger standard deviation). We apply randomized smoothing to softly-local multi-output classifiers in a scheme we call localized randomized smoothing: Instead of using the same smoothing distribution for all outputs, we randomly smooth each output (or set of outputs) using a different non-i.i.d. distribution that matches its inherent soft locality. Using a low noise level for the most relevant parts of the input allows us to retain a high prediction quality (e.g. accuracy). Less relevant parts of the input can be smoothed with a higher noise level. The resulting certificates (one per output) explicitly quantify how robust each prediction is to perturbations of which section of the input – they are certificates of soft locality. After certifying each prediction independently using localized randomized smoothing, we construct a (mixed-integer) linear program that combines these per-prediction base certificates into a collective certificate that provably bounds the number of simultaneously attackable predictions. This linear program explicitly accounts for soft locality and the budget allocation problem it causes for the adversary. This allows us to prove much stronger guarantees of collective robustness than simply certifying each prediction independently. Our core contributions are: • Localized randomized smoothing, a novel smoothing scheme for multi-output classifiers. • A variance smoothing method for efficiently certifying smoothed models on discrete data. • A collective certificate that leverages our identified common interface for base certificates. 
2 BACKGROUND AND RELATED WORK Randomized smoothing. Randomized smoothing is a flexible certification technique that can be used for various data types, perturbation models and tasks. For simplicity, we focus on a classification certificate for l2 perturbations (Cohen et al., 2019). Assume we have a continuous D-dimensional input space R^D, a label set Y and a classifier g : R^D → Y. We can use isotropic Gaussian noise with standard deviation σ ∈ R+ to construct the smoothed classifier f(x) = argmax_{y∈Y} Pr_{z∼N(x,σ)} [g(z) = y] that returns the most likely prediction of base classifier g under the input distribution (in practice, all probabilities have to be estimated using Monte Carlo sampling; see the discussion in Section C). Given an input x ∈ R^D and the smoothed prediction y = f(x), we want to determine whether the prediction is robust to all l2 perturbations of magnitude ε, i.e. whether ∀x′ : ||x′ − x||2 ≤ ε : f(x′) = y. Let q = Pr_{z∼N(x,σ)} [g(z) = y] be the probability of g predicting label y. The prediction of our smoothed classifier is robust if ε < σΦ−1(q) (Cohen et al., 2019). This result showcases a trade-off we alluded to in the previous section: The certificate can become stronger if the noise level (here σ) is increased. But doing so could also lower the accuracy of the smoothed classifier or reduce q and thus weaken the certificate. White-box certificates for multi-output classifiers. There are multiple recent methods for certifying the robustness of specific multi-output models (see, for example, (Tran et al., 2021; Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2020; Ko et al., 2019; Ryou et al., 2021; Shi et al., 2020; Bonaert et al., 2021)) by analyzing their specific architecture and weights. They are however not designed to certify collective robustness. They can only determine independently for each prediction whether or not it can be adversarially attacked. Collective robustness certificates. Most directly related to our work is the certificate of Schuchardt et al. (2021). Like ours, it combines many per-prediction certificates into a collective certificate. But, unlike our novel localized smoothing approach, their certification procedure is only beneficial for strictly local models, i.e. models whose outputs operate on small subsets of the input. Furthermore, their certificate assumes binary data, while our certificate defines a common interface for various data types and perturbation models. A more detailed comparison can be found in Section D. Recently, Fischer et al. (2021) proposed a certificate for semantic segmentation. They consider a different notion of collective robustness: They are interested in determining whether all predictions are robust. In Section C.4 we discuss their method in detail and show that, when used for certifying our notion of collective robustness (i.e. the number of robust predictions), their method is no better than certifying each output independently using the certificate of Cohen et al. (2019). Furthermore, our certificate can be used to provide equally strong guarantees for their notion of collective robustness by checking whether the number of certified predictions equals the overall number of predictions. Another method that can be used for certifying collective robustness is center smoothing (Kumar & Goldstein, 2021). Center smoothing bounds how much a vector-valued prediction changes w.r.t. a distance function under adversarial perturbations. With the l0 pseudo-norm as the distance function, center smoothing bounds how many predictions of a classifier can be simultaneously changed. Randomized smoothing with non-i.i.d.
noise. While not designed for certifying collective robustness, two recent certificates for non-i.i.d. Gaussian (Fischer et al., 2020) and uniform smoothing (Eiras et al., 2021) can be used as a component of our collective certification approach: They can serve as per-prediction base certificates, which can then be combined into our stronger collective certificate (more details in Section 4). Note that we do not use the procedure for optimizing the smoothing distribution proposed by Eiras et al. (2021), as this would enable adversarial attacks on the smoothing distribution itself and invalidate the certificate (see discussion by Wang et al. (2021)). 3 COLLECTIVE THREAT MODEL Before certifying robustness, we have to define a threat model, which specifies the type of model that is attacked, the objective of the adversary and which perturbations they are allowed to use. We assume that we have a multi-output classifier f : X^Din → Y^Dout that maps from a Din-dimensional vector space to Dout labels from label set Y. We further assume that this classifier f is the result of randomly smoothing a base classifier g, as discussed in Section 2. To simplify our notation, we write fn to refer to the function x ↦ f(x)_n that outputs the n-th label. Given this multi-output classifier f, an input x ∈ X^Din and the resulting vector of predictions y = f(x), the objective of the adversary is to cause as many predictions from a set of targeted indices T ⊆ {1, . . . , Dout} as possible to change. That is, their objective is min_{x′∈Bx} ∑_{n∈T} I[fn(x′) = yn], where Bx ⊆ X^Din is the perturbation model. Importantly, note that the minimization operator is outside the sum, meaning the predictions have to be attacked using a single input. As is common in robustness certification, we assume a norm-bound perturbation model. That is, given an input x ∈ X^Din, the adversary is only allowed to use perturbed inputs from the set Bx = { x′ ∈ X^Din | ||x′ − x||p ≤ ε } with p, ε ≥ 0. 4 A RECIPE FOR COLLECTIVE CERTIFICATES Before discussing technical details, we provide a high-level overview of our method. In localized randomized smoothing, we assign each output gn of a base classifier g its own smoothing distribution Ψ(n) that matches our assumptions or knowledge about the base classifier’s soft locality, i.e. for each n ∈ {1, . . . , Dout} we choose a Ψ(n) that induces more noise in input components that are less relevant for gn. For example, in Fig. 1, we assume that far-away regions of the image are less relevant and thus perturb pixels in the bottom left with more noise when classifying pixels in the top-right corner. The chosen smoothing distributions can then be used to construct the smoothed classifier f. Given an input x ∈ X^Din and the corresponding smoothed prediction y = f(x), randomized smoothing makes it possible to compute per-prediction base certificates. That is, for each yn, one can compute a set H(n) ⊆ X^Din of perturbed inputs that the prediction is robust to, i.e. ∀x′ ∈ H(n) : fn(x′) = yn. Our motivation for using non-i.i.d. distributions is that the H(n) will guarantee more robustness for input dimensions smoothed with more noise, i.e. quantify model locality. The objective of our adversary is min_{x′∈Bx} ∑_{n∈T} I[fn(x′) = yn] with collective perturbation model Bx ⊆ X^Din. That is, they want to change as many predictions from the targeted set T as possible.
A trivial lower bound can be obtained by counting how many predictions are – according to the base certificates – provably robust to the collective threat model. This can be expressed as ∑_{n∈T} min_{x′∈Bx} I[x′ ∈ H(n)]. In the following, we refer to this as the naïve collective certificate. Thanks to our proposed localized smoothing scheme, we can use the following, tighter bound: min_{x′∈Bx} ∑_{n∈T} I[fn(x′) = yn] ≥ min_{x′∈Bx} ∑_{n∈T} I[x′ ∈ H(n)], (1) which preserves the fact that the adversary has to choose a single perturbed input. Because we use different non-i.i.d. smoothing distributions for different outputs, we provably know that each fn has varying levels of robustness for different parts of the input and that these robustness levels differ among outputs. Thus, in the r.h.s. problem the adversary has to allocate their limited budget across various input dimensions and may be unable to attack all predictions at once, just like when attacking the classifier in the l.h.s. objective (recall Section 1). This makes our collective certificate stronger than the naïve collective certificate, which allows each prediction to be attacked independently. As stated in Section 1, the idea of combining base certificates into stronger collective certificates has already been explored by Schuchardt et al. (2021). But instead of using localized smoothing to capture the (soft) locality of a model, their approach leverages the fact that perturbations outside an output’s receptive field can be ignored. For softly local models, which have receptive fields covering the entire input, their certificate is no better than the naïve certificate. Another novel insight underlying our approach is that various non-i.i.d. randomized smoothing certificates share a common interface, which makes our method applicable to diverse data types and perturbation models. In the next section, we formalize this common interface. We then discuss how it allows us to compute the collective certificate from Eq. 1 using (mixed-integer) linear programming. 5 COMMON INTERFACE FOR BASE CERTIFICATES A base certificate for a prediction yn = fn(x) is a set H(n) ⊆ X^Din of perturbed inputs that yn is provably robust to, i.e. ∀x′ ∈ H(n) : fn(x′) = yn. Note that base certificates do not have to be exact, but they have to be sound, i.e. they do not have to specify all inputs to which the fn are robust, but they must not contain any adversarial examples. As a common interface for base certificates, we propose that the sets H(n) are parameterized by a weight vector w(n) ∈ R^Din and a scalar η(n) that define a linear constraint on the element-wise distance between perturbed inputs and the clean input: H(n) = { x′ ∈ X^Din | ∑_{d=1}^{Din} w(n)_d · |x′_d − x_d|^κ < η(n) }. (2) The weight vector encodes how robust yn is to perturbations of different components of the input. The scalar κ is important for collective robustness certification, because it encodes which collective perturbation model the base certificate is compatible with. For example, κ = 2 means that the base certificate can be used for certifying collective robustness to l2 perturbations. In the following, we present two base certificates implementing our interface: One for l2 perturbations of continuous data and one for perturbations of binary data. In Section B, we further present a certificate for binary data that can distinguish between adding and deleting bits and a certificate for l1 perturbations of continuous data.
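A minimal sketch of this interface, assuming the weights and parameters have already been computed, might look as follows; the class and method names are my own.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BaseCert:
    """Base certificate interface from Eq. 2: prediction n is provably robust
    to x' if sum_d w[d] * |x'[d] - x[d]|**kappa < eta."""
    w: np.ndarray   # per-input-dimension weights w^(n)
    eta: float      # scalar parameter eta^(n)
    kappa: float    # collective perturbation norm the certificate matches

    def certifies(self, x, x_prime):
        """Membership test: is x' contained in H^(n)?"""
        x, x_prime = np.asarray(x, dtype=float), np.asarray(x_prime, dtype=float)
        if self.kappa == 0:                      # binary data: count changed entries
            dist = (x_prime != x).astype(float)
        else:
            dist = np.abs(x_prime - x) ** self.kappa
        return float(self.w @ dist) < self.eta
```

The naïve collective certificate simply counts how many of these per-prediction sets withstand the worst case individually; the linear program of Section 6 instead forces all of them to be attacked with one shared budget.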
All base certificates guarantee more robustness for parts of the input smoothed with a higher noise level. The certificates for continuous data are based on known results (Fischer et al., 2020; Eiras et al., 2021) and merely reformulated to match our proposed interface, so that they can be used as part of our collective certification procedure. The certificates for discrete data however are original and based on the novel concept of variance smoothing. Gaussian smoothing for l2 perturbations of continuous data The first base certificate is a generalization of Gaussian smoothing to anisotropic noise, a corollary of Theorem A.1 from (Fischer et al., 2020). In the following, diag(z) refers to a diagonal matrix with diagonal entries z and Φ−1 : [0, 1]→ R refers to the the standard normal inverse cumulative distribution function. Proposition 1. Given an output gn : RDin → Y, let fn(x) = argmaxy∈Y Prz∼N (x,Σ) [gn(z) = y] be the corresponding smoothed output with Σ = diag (σ)2 andσ ∈ RDin+ . Given an inputx ∈ RDin and smoothed prediction yn = fn(x), let q = Prz∼N (x,Σ) [gn(z) = yn]. Then, ∀x′ ∈ H(n) : fn(x ′) = yn with H(n) defined as in Eq. 2, wd = 1σd2 , η = ( Φ(−1)(q) )2 and κ = 2. Bernoulli variance smoothing for perturbations of binary data For binary data, we use a smoothing distribution F(x,θ) with θ ∈ [0, 1]Din that independently flips the d’th bit with probability θd, i.e. for x, z ∈ {0, 1}Din and z ∼ F(x,θ) we have Pr[zd 6= xd] = θd. A corresponding certificate could be derived by generalizing (Lee et al., 2019), which considers a single shared θ ∈ [0, 1] with ∀d : θd = θ. However, the cost for computing this certificate would be exponential in the number of unique values in θ. We therefore propose a more efficient alternative. Instead of constructing a smoothed classifier that returns the most likely labels of the base classifier (as discussed in Section 2), we construct a smoothed classifier that returns the labels with the highest expected softmax scores (similar to CDF-smoothing (Kumar et al., 2020)). For this smoothed model, we can compute a robustness certificate in constant time. The certificate requires determining both the expected value and variance of softmax scores. We therefore call this method variance smoothing. While we use it for binary data, it is a general-purpose technique that can be applied to arbitrary domains and smoothing distributions (see discussion in Section B.2). In the following, we assume the label set Y to consist of numerical labels {1, . . . , |Y|}, which simplifies our notation. Theorem 1. Given an output gn : {0, 1}Din → ∆|Y| mapping to scores from the |Y|-dimensional probability simplex, let fn(x) = argmaxy∈YEz∼F(x,θ) [gn(z)y] be the corresponding smoothed classifier with θ ∈ [0, 1]Din . Given an input x ∈ {0, 1}Din and smoothed prediction yn = fn(x), let µ = Ez∼F(x,θ) [gn(z)y] and σ2 = Varz∼F(x,θ) [gn(z)y]. Then, ∀x′ ∈ H(n) : fn(x′) = yn with H(n) defined as in Eq. 2, wd = ln ( (1−θd)2 θd + (θd) 2 1−θd ) , η = ln ( 1 + 1σ2 ( µ− 12 )2) and κ = 0. 6 COMPUTING THE COLLECTIVE ROBUSTNESS CERTIFICATE With our common interface for base certificates in place, we can discuss how to compute the collective robustness certificate minx′∈Bx ∑ n∈T I [ x′ ∈ H(n) ] from Eq. 1. The result bounds the number of predictions yn with n ∈ {1, . . . , Dout} that can be simultaneously attacked by the adversary. 
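Before turning to the collective certificate, here is a sketch of how the two base-certificate parameterizations above might be computed. In practice, q, µ and σ² would come from Monte Carlo estimates with appropriate confidence bounds (see Section C), which this sketch omits.

```python
import numpy as np
from scipy.stats import norm

def gaussian_base_cert(sigma, q):
    """Proposition 1 (anisotropic Gaussian smoothing, l2 perturbations).
    sigma: per-dimension standard deviations, q: probability of the predicted
    label under the smoothing distribution (assumed q > 0.5)."""
    w = 1.0 / sigma ** 2
    eta = norm.ppf(q) ** 2
    return w, eta, 2          # kappa = 2

def bernoulli_variance_cert(theta, mu, var):
    """Theorem 1 (variance smoothing of binary data).
    theta: per-dimension flip probabilities, mu / var: expected value and
    variance of the predicted label's softmax score (assumed mu > 0.5)."""
    w = np.log((1.0 - theta) ** 2 / theta + theta ** 2 / (1.0 - theta))
    eta = np.log(1.0 + (mu - 0.5) ** 2 / var)
    return w, eta, 0          # kappa = 0
```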
In the following, we assume that the base certificates were obtained by using a smoothing distribution that is compatible with our lp collective perturbation model (i.e. κ = p), for example by using Gaussian noise for p = 2 or Bernoulli noise for p = 0. Inserting the definition of our base certificate interface from Eq. 2 and rewriting our perturbation model Bx = { x′ ∈ X^Din | ||x′ − x||p ≤ ε } as { x′ ∈ X^Din | ∑_{d=1}^{Din} |x′_d − x_d|^p ≤ ε^p }, our objective from Eq. 1 can be expressed as min_{x′∈X^Din} ∑_{n∈T} I[ ∑_{d=1}^{Din} w(n)_d · |x′_d − x_d|^p < η(n) ] s.t. ∑_{d=1}^{Din} |x′_d − x_d|^p ≤ ε^p. (3) We can see that the perturbed input x′ only affects the element-wise distances |x′_d − x_d|^p. Rather than optimizing x′, we can instead directly optimize these distances, i.e. determine how much adversarial budget is allocated to each input dimension. For this, we define a vector of variables b ∈ R^Din_+ (or b ∈ {0, 1}^Din for binary data). Replacing sums with inner products, we can restate Eq. 3 as min_{b∈R^Din_+} ∑_{n∈T} I[ bᵀw(n) < η(n) ] s.t. sum{b} ≤ ε^p. (4) In a final step, we replace the indicator functions in Eq. 4 with a vector of boolean variables t ∈ {0, 1}^Dout. Define the constants η(n)_min = ε^p · min(0, min_d w(n)_d), i.e. the smallest value that bᵀw(n) can take on under the budget constraint. Then, min_{b∈R^Din_+, t∈{0,1}^Dout} ∑_{n∈T} t_n s.t. ∀n : bᵀw(n) ≥ t_n η(n)_min + (1 − t_n) η(n), sum{b} ≤ ε^p (5) is equivalent to Eq. 4. The first constraint guarantees that t_n can only be set to 0 if the l.h.s. is greater than or equal to η(n), i.e. only when the base certificate can no longer guarantee robustness. The term involving η(n)_min ensures that for t_n = 1 the problem is always feasible. Eq. 5 can be solved using any mixed-integer linear programming solver. While the resulting MILP bears some resemblance to that of Schuchardt et al. (2021), it is conceptually different. When evaluating their base certificates, they mask out parts of the budget vector b based on a model’s strict locality, while we weigh the budget vector based on the soft locality guaranteed by the base certificates. In addition, thanks to the interface specified in Section 5, our problem only involves a single linear constraint per prediction, making it much smaller and more efficient to solve. Interestingly, when using randomized smoothing base certificates for binary data, our certificate subsumes theirs, i.e. can provide the same robustness guarantees (see Section D.2). Improving efficiency. Still, the efficiency of our certificate from Eq. 5 can be further improved. In Section A, we show that partitioning the outputs into Nout subsets sharing the same smoothing distribution and the inputs into Nin subsets sharing the same noise level (for example like in Fig. 1), as well as quantizing the base certificate parameters η(n) into Nbins bins, reduces the number of variables and constraints from Din + Dout and Dout + 1 to Nin + Nout · Nbins and Nout · Nbins + 1, respectively. We can thus control the problem size independent of the data’s dimensionality. We further derive a linear relaxation of the mixed-integer problem, which can be solved more efficiently while preserving the soundness of the certificate. 7 LIMITATIONS The main limitation of our approach is that it assumes softly local models. While it can be applied to arbitrary multi-output classifiers, it may not necessarily result in better certificates than randomized smoothing with i.i.d. distributions. Furthermore, choosing the smoothing distributions requires some a-priori knowledge or assumptions about how relevant the different parts of the input are for making a prediction.
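As a concrete illustration of the optimization problem in Eq. 5, the following sketch builds its linearly relaxed version with scipy; the function name and the choice of the HiGHS solver are my own, and the result is only a lower bound on the number of robust predictions, as discussed above.

```python
import math
import numpy as np
from scipy.optimize import linprog

def collective_certificate_lp(weights, etas, epsilon, p):
    """Linearly relaxed version of the collective certificate in Eq. 5.

    weights: (n_targeted, D_in) base certificate weight vectors w^(n).
    etas:    (n_targeted,) base certificate parameters eta^(n).
    Returns a lower bound on the number of targeted predictions that remain
    robust under any single perturbation with ||x' - x||_p <= epsilon.
    """
    n_t, d_in = weights.shape
    budget = epsilon ** p
    eta_min = budget * np.minimum(0.0, weights.min(axis=1))  # smallest value of b^T w^(n)

    # variables: [b (d_in entries), t (n_t entries)]; objective: minimize sum(t)
    c = np.concatenate([np.zeros(d_in), np.ones(n_t)])

    # robustness constraints: -b^T w^(n) + t_n * (eta_min^(n) - eta^(n)) <= -eta^(n)
    a_rob = np.zeros((n_t, d_in + n_t))
    a_rob[:, :d_in] = -weights
    a_rob[np.arange(n_t), d_in + np.arange(n_t)] = eta_min - etas
    # budget constraint: sum(b) <= epsilon^p
    a_budget = np.concatenate([np.ones(d_in), np.zeros(n_t)])[None, :]

    res = linprog(
        c,
        A_ub=np.vstack([a_rob, a_budget]),
        b_ub=np.concatenate([-etas, [budget]]),
        bounds=[(0, None)] * d_in + [(0, 1)] * n_t,  # relaxation of t in {0, 1}
        method="highs",
    )
    return math.floor(res.fun)  # rounding down keeps the bound sound
```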
Our experiments show that natural assumptions like homophily can be sufficient for choosing effective smoothing distributions. But doing so in other tasks may be more challenging. A limitation of (most) randomized smoothing certificates is that they use sampling to approximate the smoothed classifier. Because we use different smoothing distributions for different outputs, we can only use a fraction of the samples for each output. As discussed in Section A.1, we can alleviate this problem by sharing smoothing distributions among multiple outputs. Our experiments show that, despite this issue, our method outperforms certificates that use a single smoothing distribution. Still, future work should try to improve the sample efficiency of randomized smoothing (for example by developing more methods for de-randomized smoothing (Levine & Feizi, 2020)). Any such advance could then be incorporated into our localized smoothing framework. 8 EXPERIMENTAL EVALUATION Our experimental evaluation has three objectives: 1.) Verifying our main claim that localized randomized smoothing offers a better trade-off between accuracy and certifiable robustness than smoothing with i.i.d. distributions. 2.) Determining to what extent the linear program underlying the proposed collective certificate strengthens our robustness guarantees. 3.) Assessing the efficacy of our novel variance smoothing certificate for binary data. The used datasets and classifiers only serve as a means of comparing certificates. We thus use well-known and well-established architectures instead of overly focusing on maximizing prediction accuracy by using the latest SOTA models. We use two metrics to quantify certificate strength: Certified accuracy (i.e. the percentage of correct and certifiably robust predictions) and certified ratio (i.e. the percentage of certifiably robust predictions, regardless of correctness). In the case of image segmentation, we compute these metrics per image and then average over the dataset. As single-number metrics, we report the AUC of the certified accuracy/ratio functions w.r.t. the adversarial budget (not to be confused with certifying some AUC metric). For localized smoothing, we evaluate both the naïve collective certificate, i.e. certifying predictions independently (see Section 4), and the proposed LP-based certificate (using the linearly relaxed version from Appendix A.4). We compare our method to two baselines using i.i.d. randomized smoothing: The naïve collective certificate and center smoothing (Kumar & Goldstein, 2021). For softly local models, the certificate of Schuchardt et al. (2021) is equivalent to the naïve baseline. When used to certify the number of robust predictions, the segmentation certificate of Fischer et al. (2021) is at most as strong as the naïve baseline (see Section C.4). Thus, our method is compared to all existing collective certificates listed in Section 2. In all experiments, we use Monte Carlo randomized smoothing. More details on the experimental setup can be found in Section E. 8.1 SEMANTIC SEGMENTATION Dataset and model. We evaluate our certificate for continuous data and l2 perturbations on the Pascal-VOC 2012 segmentation validation set. Training is performed on 10582 pairs of training samples extracted from SBD (Hariharan et al., 2011), also known as "Pascal trainaug" (Fischer et al., 2021). To increase batch sizes and thus allow a more thorough investigation of different smoothing parameters, all images are downscaled to 50% of their original size. Our base model is a U-Net segmentation model with a ResNet-18 backbone.
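For reference, the two evaluation metrics and their AUC summaries can be computed as in the following sketch. It assumes per-prediction certified radii (as produced, for example, by the naïve certificate); for the LP-based certificate, one would instead evaluate the collective bound separately for each budget.

```python
import numpy as np

def certificate_curves(certified_radii, correct, budgets):
    """Certified accuracy / certified ratio curves and their AUC summaries.

    certified_radii: (n_outputs,) largest budget for which each prediction is
                     certifiably robust.
    correct:         (n_outputs,) whether each prediction is correct.
    budgets:         increasing grid of adversarial budgets.
    """
    certified_radii = np.asarray(certified_radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    budgets = np.asarray(budgets, dtype=float)

    robust = certified_radii[None, :] >= budgets[:, None]   # (n_budgets, n_outputs)
    cert_ratio = robust.mean(axis=1)
    cert_acc = (robust & correct[None, :]).mean(axis=1)

    def auc(y, x):  # trapezoidal area under the curve
        return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

    return cert_acc, cert_ratio, auc(cert_acc, budgets), auc(cert_ratio, budgets)
```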
To obtain accurate and robust smoothed classifiers, base models should be trained on the smoothing distribution. We thus train 51 different instances of our base model, augmenting the training data with a different σ_train ∈ {0, 0.01, ..., 0.5}. At test time, when evaluating a baseline i.i.d. certificate with smoothing distribution N(0, σ), we load the model trained with σ_train = σ. To perform localized randomized smoothing, we choose parameters σ_min, σ_max ∈ R_+ and partition all images into regular grids of size 4 × 6 (similar to the example in Fig. 1). To classify pixels in grid cell (i, j), we sample noise for grid cell (k, l) using N(0, σ′), with σ′ ∈ [σ_min, σ_max] chosen proportional to the distance of (i, j) and (k, l) (more details in Section E.2.1). As the base model, we load the one trained with σ_train = σ_min. Using the same distribution at train and test time for the i.i.d. baselines but not for localized smoothing is meant to skew the results in the baseline's favor. But, in Section E.2.3, we also repeat our experiments using the same base model for i.i.d. and localized smoothing.

Evaluation. The main goal of our experiments on segmentation is to verify that localized smoothing can offer a better trade-off between accuracy and certifiable robustness. That is, for all or most σ, there are σ_min, σ_max such that the locally smoothed model has higher accuracy and certifiable collective robustness than i.i.d. smoothing baselines using N(0, σ). Because σ, σ_min, σ_max ∈ R_+, we cannot evaluate all possible combinations. We therefore use the following scheme: We focus on the case σ ∈ [0, 0.5], which covers all distributions used in (Kumar & Goldstein, 2021) and (Fischer et al., 2021). First, we evaluate our two baselines for five σ ∈ {0.1, 0.2, 0.3, 0.4, 0.5}. This results in baseline models with diverse levels of accuracy and robustness (e.g. the accuracy of the naïve baseline shrinks from 87.7% to 64.9% and the AUC of its certified accuracy grows from 0.17 to 0.644). We then test whether, for each of the σ, we can find σ_min, σ_max such that the locally smoothed model attains higher accuracy and is certifiably more robust. Finally, to verify that {0.1, 0.2, 0.3, 0.4, 0.5} were not just a particularly poor choice of baseline parameters, we fix the chosen σ_min, σ_max. We then perform a fine-grained search over σ ∈ [0, 0.5] with resolution 0.01 to find a baseline model that has at least the same accuracy and certifiable robustness (as measured by certificate AUC) as any of the fixed locally smoothed models. If this is not possible, this provides strong evidence that the proposed smoothing scheme and certificate indeed offer a better trade-off.

Fig. 2 shows one example. For σ = 0.4, the naïve i.i.d. baseline has an accuracy of 72.5%. With σ_min = 0.25, σ_max = 1.5, the proposed localized smoothing certificate yields both a higher accuracy of 76.4% and a higher certified accuracy for all ε. It can certify robustness for ε up to 1.825, compared to 1.45 for the baseline, and the AUC of its certified accuracy curve is 43.1% larger. Fig. 2 also highlights the usefulness of the linear program we derived in Section 6: Evaluating the localized smoothing base certificates independently, i.e. computing the naïve collective certificate (dotted orange line), is not sufficient for outperforming the baseline.
But combining them via the proposed linear program drastically increases the certified accuracy. The results for all other combinations of smoothing distribution parameters, both baselines and both metrics of certificate strength can be found in Section E.2.3. Tables 1 and 2 summarize the first part of our evaluation procedure, in which we optimize the localized smoothing parameters. Save for one exception (with σ = 0.2, center smoothing has a lower accuracy, but slightly larger certified ratio), the locally smoothed models have the same or higher accuracy, but provide stronger robustness guarantees. The difference is particularly large for σ ∈ {0.3, 0.4, 0.5}, where the accuracy of models smoothed with i.i.d. noise drops off, while our localized smoothing distribution preserves the most relevant parts of the image to allow for high accuracy. Table 5 summarizes the second part of our evaluation scheme, in which we perform a fine-grained search over [0, 0.5]. We find that there is no σ such that either of the i.i.d. baselines can outperform any of the chosen locally smoothed models w.r.t. the AUC of their certified accuracy or certified ratio curves. This is ample evidence for our claim that localized smoothing offers a better trade-off than i.i.d. smoothing. Also, the collective LPs caused little computational overhead (avg. 0.68 s per LP, more details in Section E.2.3).

8.2 NODE CLASSIFICATION

Dataset and model. We evaluate our certificate for binary data on the Cora-ML node classification dataset. We use two different base models: Approximate Personalized Propagation of Neural Predictions (APPNP) (Klicpera et al., 2019) and a 6-layer Graph Convolutional Network (GCN) (Kipf & Welling, 2017). Both models have a receptive field that covers most or all of the graph, meaning they are softly local. For details on model and training parameters, see Section E.3.1. As center smoothing has only been derived for Gaussian smoothing, we only compare to the naïve baseline. For both the baseline and our localized smoothing certificate, we use sparsity-aware randomized smoothing (Bojchevski et al., 2020), i.e. flip 1-bits and 0-bits with different probabilities (θ− and θ+, respectively), which allows us to certify different levels of robustness to deletions and additions of bits. With localized randomized smoothing, we use the variance smoothing base certificate derived in Section B.2.2. We choose the distribution parameters for localized smoothing based on an assumption of homophily, i.e. nearby nodes are most relevant for classifying a node. We partition the graph into 5 clusters and define parameters θ±_min and θ±_max. When classifying a node in cluster i, we randomly smooth attributes in cluster j with θ+_ij, θ−_ij that are based on linearly interpolating in [θ+_min, θ+_max] and [θ−_min, θ−_max] based on the affinity of the clusters (details in Section E.3.1).

Evaluation. We first evaluate the new variance-based certificate and compare it to the certificate derived by Bojchevski et al. (2020). For this, we use only one cluster, meaning we use the same smoothing distribution for both. Fig. 11 in Section E.3 shows that the variance certificate is weaker than the baseline for additions, but better for deletions. It appears sufficiently effective to be used as a base certificate and integrated into a stronger, collective certificate.
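As an illustration of the cluster-based parameter choice described above, the following is a minimal NumPy sketch of how flip probabilities could be interpolated from pairwise cluster affinities. The affinity matrix and function name are hypothetical; the exact affinity measure and interpolation used in the experiments are specified in Section E.3.1.

```python
import numpy as np

def flip_probabilities(affinity, theta_min, theta_max):
    """Linearly interpolate per-cluster-pair flip probabilities.

    affinity:  (C, C) matrix with values in [0, 1]; affinity[i, j] = 1 means cluster j
               is maximally relevant when classifying nodes in cluster i.
    theta_min, theta_max: scalars, e.g. the bounds chosen for theta^- (deletions).
    Returns a (C, C) matrix theta with theta[i, j] = flip probability applied to
    attributes in cluster j when classifying a node in cluster i.
    """
    # High affinity -> little noise (preserve relevant attributes),
    # low affinity -> strong noise (heavily smooth distant clusters).
    return theta_min + (1.0 - affinity) * (theta_max - theta_min)

# Toy usage: 3 clusters with affinity decaying between distant clusters.
affinity = np.array([[1.0, 0.5, 0.1],
                     [0.5, 1.0, 0.4],
                     [0.1, 0.4, 1.0]])
theta_minus = flip_probabilities(affinity, theta_min=0.1, theta_max=0.8)
print(theta_minus)
```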
The parameter space of our smoothing distributions is large. For the localized approach we have four continuous parameters, as we have to specify both the minimal and maximal noise values. Therefore, it is difficult to show that our approach achieves a better accuracy-robustness trade-off over the whole noise space. However, we can investigate the accuracy-robustness trade-off within some areas of this space. For the localized approach we choose a few fixed combinations of the noise parameters θ±_min and θ±_max. To show our claim, we then optimize the baselines with parameters in an interval around our θ+_min and θ−_min. This is a smaller space, as the baselines only have two parameters. We select the baseline whose certified accuracy curve has the largest AUC. We perform the search for the best baseline for the addition and deletion scenario independently, i.e., the best baseline model for addition and deletion does not have to be the same. In Fig. 3, we see the certified accuracy of an APPNP model for a varying number of attribute additions and deletions (left and right, respectively). To find the best distribution parameters for the baselines, we evaluated combinations of θ+ ∈ {0.04, 0.055, 0.07} and θ− ∈ [0.1, ..., 0.827], using 11 equally spaced values for the interval. For adversarial additions, the best baseline yields a certified accuracy curve with an AUC of 4.51, compared to our 5.65. The best baseline for deletions has an AUC of 7.76, compared to our 16.26. Our method outperforms these optimized baselines for most adversarial budgets, while maintaining the same clean accuracy (i.e. certified accuracy at ε = 0). Experiments with different noise parameters and classifiers can be found in Section E.3. In general, we find that we significantly outperform the baseline when certifying robustness to deletions, but often have weaker certificates for additions (which may be inherent to the variance smoothing base certificates). Due to the large continuous parameter space, we cannot claim that localized smoothing outperforms the naïve baseline everywhere. However, our results show that, for the tested parameter regions, localized smoothing can provide a significantly better accuracy-robustness trade-off. We found that using the collective LP instead of naïvely combining the base certificates can result in much stronger certificates: The AUC of the certified accuracy curve (averaged over all experiments) increased by 38.8% and 33.6% for addition and deletion, respectively. The collective LPs caused little computational overhead (avg. 10.9 s per LP, more details in Section E.3.3).

9 CONCLUSION

In this work, we have proposed the first collective robustness certificate for softly local multi-output classifiers. It is based on localized randomized smoothing, i.e. randomly smoothing different outputs using different non-i.i.d. smoothing distributions matching the model's locality. We have shown how per-output certificates based on localized smoothing can be computed and that they share a common interface. This interface allows them to be combined into a strong collective robustness certificate. Experiments on image segmentation and node classification tasks demonstrate that localized smoothing can offer a better robustness-accuracy trade-off than existing randomized smoothing techniques. Our results show that locality is linked to robustness, which suggests the research direction of building more effective local models to robustly solve multi-output tasks.
10 REPRODUCIBILITY STATEMENT

We prove all theoretic results that were not already derived in the main text in Appendices A to C. To ensure reproducibility of the experimental results, we provide detailed descriptions of the evaluation process with the respective parameters in Section E.2 and Section E.3. Code will be made available to reviewers via an anonymous link posted on OpenReview, as suggested by the guidelines.

11 ETHICS STATEMENT

In this paper, we propose a method to increase the robustness of machine learning models against adversarial perturbations and to certify their robustness. We see this as an important step towards general usage of models in practice, as many existing methods are brittle to crafted attacks. Through the proposed method, we hope to contribute to the safe usage of machine learning. However, robust models also have to be seen with caution. As they are harder to fool, harmful purposes like mass surveillance are harder to avoid. We believe that it is still necessary to further research robustness of machine learning models, as the positive effects can outweigh the negatives, but it is necessary to discuss the ethical implications of the usage in any specific application area.

A.1 SHARING SMOOTHING DISTRIBUTIONS AMONG OUTPUTS

In principle, our proposed certificate allows a different smoothing distribution Ψ^(n) to be used per output g_n of our base model. In practice, where we have to estimate properties of the smoothed classifier using Monte Carlo methods, this is problematic: Samples cannot be re-used, and each of the many outputs requires its own round of sampling. We can increase the efficiency of our localized smoothing approach by partitioning our D_out outputs into N_out subsets that share the same smoothing distribution. When making smoothed predictions or computing base certificates, we can then reuse the same samples for all outputs within each subset. More formally, we partition our D_out output dimensions into sets K^(1), ..., K^(N_out) with

⋃̇_{i=1}^{N_out} K^(i) = {1, ..., D_out}. (6)

We then associate each set K^(i) with a smoothing distribution Ψ^(i). For each base model output g_n with n ∈ K^(i), we then use smoothing distribution Ψ^(i) to construct the smoothed output f_n, e.g. f_n(x) = argmax_{y∈Y} Pr_{z∼Ψ^(i)}[g_n(x + z) = y] (note that we use a different smoothing paradigm for binary data, see Section 5).

A.2 QUANTIZING CERTIFICATE PARAMETERS

Recall that our base certificates from Section 5 are defined by a linear inequality: A prediction y_n = f_n(x) is robust to a perturbed input x′ ∈ X^D_in if Σ_{d=1}^{D_in} w_d^(n) · |x′_d − x_d|^p < η^(n), for some p ≥ 0. The weight vectors w^(n) ∈ R^D_in only depend on the smoothing distributions. A side effect of sharing the same smoothing distribution Ψ^(i) among all outputs from a set K^(i), as discussed in the previous section, is that the outputs also share the same weight vector w^(i) ∈ R^D_in with ∀n ∈ K^(i): w^(i) = w^(n). Thus, for all smoothed outputs f_n with n ∈ K^(i), the smoothed prediction y_n is robust if Σ_{d=1}^{D_in} w_d^(i) · |x′_d − x_d|^p < η^(n). Evidently, the base certificates for outputs from a set K^(i) only differ in their parameter η^(n). Recall that in our collective linear program we use a vector of variables t ∈ {0,1}^D_out to indicate which predictions are robust according to their base certificates (see Section 6). If there are two outputs f_n and f_m with η^(n) = η^(m), then f_n and f_m have the same base certificate and their robustness can be modelled by the same indicator variable.
Conversely, for each set of outputs K^(i), we only need one indicator variable per unique η^(n). By quantizing the η^(n) within each subset K^(i) (for example by defining equally sized bins between min_{n∈K^(i)} η^(n) and max_{n∈K^(i)} η^(n)), we can ensure that there is always a fixed number N_bins of indicator variables per subset. This way, we can reduce the number of indicator variables from D_out to N_out · N_bins. To implement this idea, we define a matrix of thresholds E ∈ R^{N_out × N_bins} with ∀i: min{E_{i,:}} ≤ min({η^(n) | n ∈ K^(i)}). We then define a function ξ: {1,...,N_out} × R → R with

ξ(i, η) = max({E_{i,j} | j ∈ {1,...,N_bins} ∧ E_{i,j} ≤ η}) (7)

that quantizes base certificate parameter η from output subset K^(i) by mapping it to the next smallest threshold in E_{i,:}. Like in Section 6, we need to compute the constant η_^(i) = min_{b ∈ R^D_in_+} b^T w^(i) s.t. sum{b} ≤ ε^p to ensure feasibility of the problem. Note that, because all outputs from a subset K^(i) share the same weight vector w^(i), we only have to compute this constant once per subset. We can bound the collective robustness of the targeted dimensions T of our vector of predictions y = f(x) as follows:

min Σ_{i∈{1,...,N_out}} Σ_{j∈{1,...,N_bins}} T_{i,j} |{n ∈ T ∩ K^(i) | ξ(i, η^(n)) = E_{i,j}}| (8)
s.t. ∀i,j: b^T w^(i) ≥ T_{i,j} η_^(i) + (1 − T_{i,j}) E_{i,j},   sum{b} ≤ ε^p, (9)
b ∈ R^D_in_+, T ∈ {0,1}^{N_out × N_bins}. (10)

Constraint Eq. 9 ensures that T_{i,j} is only set to 0 if b^T w^(i) ≥ E_{i,j}, i.e. all predictions from subset K^(i) whose base certificate parameter η^(n) is quantized to E_{i,j} are no longer robust. When this is the case, the objective function decreases by the number of these predictions. For N_out = D_out, N_bins = 1 and E_{n,1} = η^(n), we recover our general certificate from Section 6. Note that, if the quantization maps any parameter η^(n) to a smaller number, the set H^(n) becomes more restrictive, i.e. y_n is considered robust to a smaller set of perturbed inputs. Thus, Eq. 8 is a lower bound on our general certificate from Section 6.

A.3 SHARING NOISE LEVELS AMONG INPUTS

Similar to how partitioning the output dimensions allows us to control the number of output variables t, partitioning the input dimensions and using the same noise level within each partition allows us to control the number of variables b that model the allocation of adversarial budget. Assume that we have partitioned our output dimensions into N_out subsets K^(1), ..., K^(N_out), with outputs in each subset sharing the same smoothing distribution Ψ^(i), as explained in Section A.1. Let us now define N_in input subsets J^(1), ..., J^(N_in) with

⋃̇_{l=1}^{N_in} J^(l) = {1, ..., D_in}. (11)

Recall that a prediction y_n = f_n(x) with n ∈ K^(i) is robust to a perturbed input x′ ∈ X^D_in if Σ_{d=1}^{D_in} w_d^(i) · |x′_d − x_d|^p < η^(n) and that the weight vectors w^(i) only depend on the smoothing distributions. Assume that we choose each smoothing distribution Ψ^(i) such that ∀l ∈ {1,...,N_in}, ∀d,d′ ∈ J^(l): w_d^(i) = w_{d′}^(i), i.e. all input dimensions within each set J^(l) have the same weight. This can be achieved by choosing Ψ^(i) so that all dimensions in each input subset J^(l) are smoothed with the same noise level (note that we can still use different Ψ^(i), i.e. different noise levels, for smoothing different sets of outputs). For example, one could use a Gaussian distribution with covariance matrix Σ = diag(σ)² with ∀l ∈ {1,...,N_in}, ∀d,d′ ∈ J^(l): σ_d = σ_{d′}. In this case, the evaluation of our base certificates can be simplified.
Prediction y_n = f_n(x) is robust to a perturbed input x′ ∈ X^D_in if

Σ_{d=1}^{D_in} w_d^(i) · |x′_d − x_d|^p < η^(n) (12)
⟺ Σ_{l=1}^{N_in} u_l^(i) · Σ_{d∈J^(l)} |x′_d − x_d|^p < η^(n), (13)

with u^(i) ∈ R^N_in and ∀i ∈ {1,...,N_out}, ∀l ∈ {1,...,N_in}, ∀d ∈ J^(l): u_l^(i) = w_d^(i). That is, we can replace each weight vector w^(i) that has one weight w_d^(i) per input dimension d with a smaller weight vector u^(i) with one weight u_l^(i) per input subset J^(l). For our linear program, this means that we no longer need a budget vector b ∈ R^D_in_+ to model the element-wise distance |x′_d − x_d|^p in each dimension d. Instead, we can use a smaller budget vector b ∈ R^N_in_+ to model the overall distance within each input subset J^(l), i.e. Σ_{d∈J^(l)} |x′_d − x_d|^p. Combined with the quantization of certificate parameters from the previous section, our optimization problem becomes

min Σ_{i∈{1,...,N_out}} Σ_{j∈{1,...,N_bins}} T_{i,j} |{n ∈ T ∩ K^(i) | ξ(i, η^(n)) = E_{i,j}}| (14)
s.t. ∀i,j: b^T u^(i) ≥ T_{i,j} η_^(i) + (1 − T_{i,j}) E_{i,j},   sum{b} ≤ ε^p, (15)
b ∈ R^N_in_+, T ∈ {0,1}^{N_out × N_bins}, (16)

with u^(i) ∈ R^N_in and ∀i ∈ {1,...,N_out}, ∀l ∈ {1,...,N_in}, ∀d ∈ J^(l): u_l^(i) = w_d^(i). For N_out = D_out, N_in = D_in, N_bins = 1 and E_{n,1} = η^(n), we recover our general certificate from Section 6. When certifying robustness for binary data, we impose different constraints on b. To model that the adversary cannot flip more bits than are present within each subset, we use a budget vector b ∈ N^N_in_0 with ∀l ∈ {1,...,N_in}: b_l ≤ |J^(l)|, instead of a continuous budget vector b ∈ R^N_in_+.

A.4 LINEAR RELAXATION

Combining the previous steps allows us to reduce the number of problem variables and linear constraints from D_in + D_out and D_out + 1 to N_in + N_out · N_bins and N_out · N_bins + 1, respectively. Still, finding an optimal solution to the mixed-integer linear program may be too expensive. One can obtain a lower bound on the optimal value, and thus a valid, albeit more pessimistic, robustness certificate, by relaxing all integer variables to be continuous. When using the general certificate from Section 6, the binary vector t ∈ {0,1}^D_out can be relaxed to t ∈ [0,1]^D_out. When using the certificate with quantized base certificate parameters from Section A.2 or Section A.3, the binary matrix T ∈ {0,1}^{N_out × N_bins} can be relaxed to T ∈ [0,1]^{N_out × N_bins}. Conceptually, this means that predictions can be partially certified, i.e. t_n ∈ (0,1) or T_{i,j} ∈ (0,1). In particular, a prediction can be partially certified even if we know that it is impossible to attack under the collective perturbation model B_x = {x′ ∈ X^D_in | ||x′ − x||_p ≤ ε}. Just like Schuchardt et al. (2021), who encountered the same problem with their collective certificate, we circumvent this issue by first computing a set L ⊆ T of all targeted predictions in T that are guaranteed to always be robust:

L = {n ∈ T | (max_{x′∈B_x} Σ_{d=1}^{D_in} w_d^(n) · |x′_d − x_d|^p) < η^(n)} (17)
  = {n ∈ T | max(max{w^(n)} · ε^p, 0) < η^(n)}. (18)

The equality follows from the fact that the most effective way of attacking a prediction is to allocate all adversarial budget to the least robust dimension, i.e. the dimension with the largest weight – unless all weights are negative. Because we know that all predictions with indices in L are robust, we do not have to include them in the collective optimization problem and can instead compute

|L| + min_{x′∈B_x} Σ_{n∈T\L} I[x′ ∈ H^(n)]. (19)

The r.h.s. optimization can be solved using the general collective certificate from Section 6 or any of the more efficient, modified certificates from previous sections.
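To illustrate the linear relaxation, here is a minimal Python sketch that solves the relaxed version of the basic collective problem (Eq. 5 with t relaxed to [0,1]^D_out) using SciPy's LP solver. The variable layout and function names are our own assumptions, and the refinements from Sections A.1 to A.3 (shared distributions, quantization, shared noise levels) and the pre-computation of the set L are omitted for brevity.

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_collective_certificate(W, eta, eps_p):
    """LP relaxation of Eq. 5: lower-bound the number of simultaneously robust predictions.

    W:     (N, D_in) base-certificate weight vectors w^(n), one row per targeted prediction.
    eta:   (N,) base-certificate thresholds eta^(n).
    eps_p: scalar collective budget epsilon^p.
    Returns a (possibly fractional) lower bound on the number of robust predictions in T.
    """
    N, D = W.shape
    eta_low = eps_p * np.minimum(0.0, W.min(axis=1))   # underline-eta^(n), keeps the problem feasible

    # Decision variables x = [b (D), t (N)]; the adversary minimizes sum(t).
    c = np.concatenate([np.zeros(D), np.ones(N)])

    # b^T w^(n) >= t_n * eta_low^(n) + (1 - t_n) * eta^(n)  rewritten as  A_ub x <= b_ub.
    A_rob = np.hstack([-W, np.diag(eta_low - eta)])
    b_rob = -eta
    # Budget constraint: sum(b) <= eps_p.
    A_bud = np.concatenate([np.ones(D), np.zeros(N)])[None, :]
    b_bud = np.array([eps_p])

    res = linprog(c,
                  A_ub=np.vstack([A_rob, A_bud]),
                  b_ub=np.concatenate([b_rob, b_bud]),
                  bounds=[(0, None)] * D + [(0, 1)] * N,   # b >= 0, t relaxed to [0, 1]
                  method="highs")
    return res.fun

# Toy usage with made-up base certificates.
rng = np.random.default_rng(0)
W = rng.uniform(0.5, 2.0, size=(5, 8))
eta = rng.uniform(1.0, 4.0, size=5)
print(relaxed_collective_certificate(W, eta, eps_p=1.5))
```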
When using the general collective certificate from Section 6 with binary data, the budget variables b ∈ {0,1}^D_in can be relaxed to b ∈ [0,1]^D_in. When using the modified collective certificate from Section A.3, the budget variables b ∈ N^N_in_0 can be relaxed to b ∈ R^N_in_+. The additional constraint ∀l ∈ {1,...,N_in}: b_l ≤ |J^(l)| can be kept in order to model that the adversary cannot flip (or partially flip) more bits than are present within each input subset J^(l).

B BASE CERTIFICATES

In the following, we show why the base certificates presented in Section 5 hold and present alternatives for other collective perturbation models.

B.1 GAUSSIAN SMOOTHING FOR l2 PERTURBATIONS OF CONTINUOUS DATA

Proposition 1. Given an output g_n: R^D_in → Y, let f_n(x) = argmax_{y∈Y} Pr_{z∼N(x,Σ)}[g_n(z) = y] be the corresponding smoothed output with Σ = diag(σ)² and σ ∈ R^D_in_+. Given an input x ∈ R^D_in and smoothed prediction y_n = f_n(x), let q = Pr_{z∼N(x,Σ)}[g_n(z) = y_n]. Then, ∀x′ ∈ H^(n): f_n(x′) = y_n with H^(n) defined as in Eq. 2, w_d = 1/σ_d², η = (Φ^{-1}(q))² and κ = 2.

Proof. Based on the definition of the base certificate interface, we need to show that ∀x′ ∈ H: f_n(x′) = y_n with

H = {x′ ∈ R^D_in | Σ_{d=1}^{D_in} (1/σ_d²) · |x_d − x′_d|² < (Φ^{-1}(q))²}. (20)

Eiras et al. (2021) have shown that under the same conditions as above, but with a general covariance matrix Σ ∈ R^{D_in × D_in}_+, a prediction y_n is certifiably robust to a perturbed input x′ if

√((x − x′)^T Σ^{-1} (x − x′)) < (1/2)(Φ^{-1}(q) − Φ^{-1}(q′)), (21)

where q′ = max_{y′_n ≠ y_n} Pr_{z∼N(x,Σ)}[g_n(z) = y′_n] is the probability of the second most likely prediction under the smoothing distribution. Because the probabilities of all possible predictions have to sum up to 1, we have q′ ≤ 1 − q. Since Φ^{-1} is monotonically increasing, we can obtain a lower bound on the r.h.s. of Eq. 21 and thus a more pessimistic certificate by substituting 1 − q for q′ (deriving such a "binary certificate" from a "multiclass certificate" is common in randomized smoothing and was already discussed in (Cohen et al., 2019)):

√((x − x′)^T Σ^{-1} (x − x′)) < (1/2)(Φ^{-1}(q) − Φ^{-1}(1 − q)). (22)

In our case, Σ is a diagonal matrix diag(σ)² with σ ∈ R^D_in_+. Thus Eq. 22 is equivalent to

√(Σ_{d=1}^{D_in} (x_d − x′_d)(1/σ_d²)(x_d − x′_d)) < (1/2)(Φ^{-1}(q) − Φ^{-1}(1 − q)). (23)

Finally, using the fact that Φ^{-1}(q) − Φ^{-1}(1 − q) = 2Φ^{-1}(q) and eliminating the square root shows that we are certifiably robust if

Σ_{d=1}^{D_in} (1/σ_d²) · |x_d − x′_d|² < (Φ^{-1}(q))². (24)
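As a small illustration of Proposition 1, the following sketch evaluates the anisotropic Gaussian base certificate for a given perturbed input. The function names are ours; in practice q would be replaced by a Monte Carlo lower confidence bound, and the certificate is only meaningful for q > 0.5.

```python
import numpy as np
from scipy.stats import norm

def gaussian_base_certificate(sigma, q):
    """Base-certificate parameters of Proposition 1 (per-dimension Gaussian smoothing).

    sigma: (D_in,) per-dimension smoothing standard deviations.
    q:     probability (or lower confidence bound) of the predicted class under smoothing.
    Returns (w, eta): the prediction is certified for all x' with sum_d w_d * |x'_d - x_d|**2 < eta.
    """
    w = 1.0 / sigma ** 2
    eta = norm.ppf(q) ** 2   # (Phi^{-1}(q))^2; only meaningful for q > 0.5
    return w, eta

def is_certified(x, x_prime, w, eta):
    return np.sum(w * (x_prime - x) ** 2) < eta

# Toy usage: stronger noise (larger sigma) on the second half of the input dimensions.
sigma = np.concatenate([np.full(4, 0.25), np.full(4, 1.0)])
w, eta = gaussian_base_certificate(sigma, q=0.9)
x = np.zeros(8)
x_prime = x.copy()
x_prime[-1] = 0.8            # perturb a heavily smoothed dimension
print(is_certified(x, x_prime, w, eta))
```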
B.1.1 UNIFORM SMOOTHING FOR l1 PERTURBATIONS OF CONTINUOUS DATA

An alternative base certificate for l1 perturbations is again due to Eiras et al. (2021). Using uniform instead of Gaussian noise later allows us to collectively certify robustness to l1-norm-bounded perturbations. In the following, U(x, λ) with x ∈ R^D, λ ∈ R^D_+ refers to a vector-valued random distribution in which the d-th element is uniformly distributed in [x_d − λ_d, x_d + λ_d].

Proposition 2. Given an output g_n: R^D_in → Y, let f_n(x) = argmax_{y∈Y} Pr_{z∼U(x,λ)}[g_n(z) = y] be the corresponding smoothed classifier with λ ∈ R^D_in_+. Given an input x ∈ R^D_in and smoothed prediction y_n = f_n(x), let q = Pr_{z∼U(x,λ)}[g_n(z) = y_n]. Then, ∀x′ ∈ H^(n): f_n(x′) = y_n with H^(n) defined as in Eq. 2, w_d = 1/λ_d, η = Φ^{-1}(q) and κ = 1.

Proof. Based on the definition of H^(n), we need to prove that ∀x′ ∈ H: f_n(x′) = y_n with

H = {x′ ∈ R^D_in | Σ_{d=1}^{D_in} (1/λ_d) · |x_d − x′_d| < Φ^{-1}(q)}. (25)

Eiras et al. (2021) have shown that under the same conditions as above, a prediction y_n is certifiably robust to a perturbed input x′ if

Σ_{d=1}^{D_in} |(1/λ_d) · (x_d − x′_d)| < (1/2)(Φ^{-1}(q) − Φ^{-1}(q′)), (26)

where q′ = max_{y′_n ≠ y_n} Pr_{z∼U(x,λ)}[g_n(z) = y′_n] is the probability of the second most likely prediction under the smoothing distribution. As in our previous proof for Gaussian smoothing, we can obtain a more pessimistic certificate by substituting 1 − q for q′. Since Φ^{-1}(q) − Φ^{-1}(1 − q) = 2Φ^{-1}(q) and all λ_d are non-negative, we know that our prediction is certifiably robust if

Σ_{d=1}^{D_in} (1/λ_d) · |x_d − x′_d| < Φ^{-1}(q). (27)

B.2 VARIANCE SMOOTHING

We propose variance smoothing as a base certificate for binary data. Variance smoothing certifies predictions based on the mean and variance of the softmax score associated with a predicted label. It is in principle applicable to arbitrary data types. We focus on discrete data, but all results can be generalized from discrete to continuous data by replacing any sum over probability mass functions with integrals over probability density functions. We first derive a general form of variance smoothing before discussing our certificates for binary data in Section B.2.1 and Section B.2.2. Variance smoothing assumes that we make predictions by randomly smoothing a base model's softmax scores. That is, given a base model g: X → Δ^{|Y|} mapping from an arbitrary discrete input space X to scores from the |Y|-dimensional probability simplex Δ^{|Y|}, we define the smoothed classifier f(x) = argmax_{y∈Y} E_{z∼Ψ(x)}[g(z)_y]. Here, Ψ(x) is an arbitrary distribution over X parameterized by x, e.g. a normal distribution with mean x. The smoothed classifier does not return the most likely prediction, but the prediction associated with the highest expected softmax score. Given an input x ∈ X, smoothed prediction y = f(x) and a perturbed input x′ ∈ X, we want to determine whether f(x′) = y. By definition of our smoothed classifier, we know that f(x′) = y if y is the label with the highest expected softmax score. In particular, we know that f(x′) = y if y's softmax score is larger than all other softmax scores combined, i.e.

E_{z∼Ψ(x′)}[g(z)_y] > 0.5 ⟹ f(x′) = y. (28)

Computing E_{z∼Ψ(x′)}[g(z)_y] exactly is usually not tractable – especially if we later want to evaluate robustness to many x′ from a whole perturbation model B ⊆ X. Therefore, we compute a lower bound on E_{z∼Ψ(x′)}[g(z)_y]. If even this lower bound is larger than 0.5, we know that prediction y is certainly robust. For this, we define a set of functions H with g_y ∈ H and compute the minimum softmax score across all functions from H:

min_{h∈H} E_{z∼Ψ(x′)}[h(z)] > 0.5 ⟹ f(x′) = y. (29)

For our variance smoothing approach, we define H to be the set of all functions that have a larger or equal expected value and a smaller or equal variance, compared to our base model g applied to the unperturbed input x. Let µ = E_{z∼Ψ(x)}[g(z)_y] be the expected softmax score of our base model g for label y. Let σ² = E_{z∼Ψ(x)}[(g(z)_y − ν)²] be the expected squared distance of the softmax score from a scalar ν ∈ R. (Choosing ν = µ yields the variance of the softmax score. An arbitrary ν is only needed for technical reasons related to Monte Carlo estimation, see Section C.2.) Then, we define

H = {h: X → R | E_{z∼Ψ(x)}[h(z)] ≥ µ ∧ E_{z∼Ψ(x)}[(h(z) − ν)²] ≤ σ²}. (30)

Clearly, by the definition of µ and σ², we have g_y ∈ H. Note that we do not restrict functions from H to the domain [0,1], but allow arbitrary real-valued outputs. By evaluating Eq. 29 with H defined as in Eq. 30, we can determine whether our prediction is robust.
To compute the optimal value of the problem in Eq. 29, we need the following two lemmata:

Lemma 1. Given a discrete set X and the set Π of all probability mass functions over X, any two probability mass functions π1, π2 ∈ Π fulfill

Σ_{z∈X} (π2(z)/π1(z)) · π2(z) ≥ 1. (31)

Proof. For a fixed probability mass function π1, Eq. 31 is lower-bounded by the minimal expected likelihood ratio that can be achieved by another π̃ ∈ Π:

Σ_{z∈X} (π2(z)/π1(z)) · π2(z) ≥ min_{π̃∈Π} Σ_{z∈X} (π̃(z)/π1(z)) · π̃(z). (32)

The r.h.s. term can be expressed as the constrained optimization problem

min_{π̃} Σ_{z∈X} (π̃(z)/π1(z)) · π̃(z)   s.t.   Σ_{z∈X} π̃(z) = 1, (33)

with the corresponding dual problem

max_{λ∈R} min_{π̃} Σ_{z∈X} (π̃(z)/π1(z)) · π̃(z) + λ(−1 + Σ_{z∈X} π̃(z)). (34)

The inner problem is convex in each π̃(z). Taking the gradient w.r.t. π̃(z) for all z ∈ X shows that it has its minimum at ∀z ∈ X: π̃(z) = −λπ1(z)/2. Substituting into Eq. 34 results in

max_{λ∈R} Σ_{z∈X} (λ²π1(z)²)/(4π1(z)) + λ(−1 − Σ_{z∈X} λπ1(z)/2) (35)
= max_{λ∈R} −(λ²/4) Σ_{z∈X} π1(z) − λ (36)
= max_{λ∈R} −λ²/4 − λ (37)
= 1. (38)

Eq. 37 follows from the fact that π1(z) is a valid probability mass function. Due to duality, the optimal dual value 1 is a lower bound on the optimal value of our primal problem Eq. 31.

Lemma 2. Given a probability distribution D over R and a scalar ν ∈ R, let µ = E_{z∼D}[z] and ξ = E_{z∼D}[(z − ν)²]. Then ξ ≥ (µ − ν)².

Proof. Using the definitions of µ and ξ, as well as some simple algebra, we can show:

ξ ≥ (µ − ν)² (39)
⟺ E_{z∼D}[(z − ν)²] ≥ µ² − 2µν + ν² (40)
⟺ E_{z∼D}[z² − 2zν + ν²] ≥ µ² − 2µν + ν² (41)
⟺ E_{z∼D}[z²] − 2νE_{z∼D}[z] + ν² ≥ µ² − 2µν + ν² (42)
⟺ E_{z∼D}[z²] − 2µν + ν² ≥ µ² − 2µν + ν² (43)
⟺ E_{z∼D}[z²] ≥ µ². (44)

It is well known for the variance that E_{z∼D}[(z − µ)²] = E_{z∼D}[z²] − µ². Because the variance is always non-negative, the above inequality holds.

Using the previously described approach and lemmata, we can show the soundness of the following robustness certificate:

Theorem 3. Given a model g: X → Δ^{|Y|} mapping from a discrete set X to scores from the |Y|-dimensional probability simplex, let f(x) = argmax_{y∈Y} E_{z∼Ψ(x)}[g(z)_y] be the corresponding smoothed classifier with smoothing distribution Ψ(x) and probability mass function π_x(z) = Pr_{z̃∼Ψ(x)}[z̃ = z]. Given an input x ∈ X and smoothed prediction y = f(x), let µ = E_{z∼Ψ(x)}[g(z)_y] and σ² = E_{z∼Ψ(x)}[(g(z)_y − ν)²] with ν ∈ R. If ν ≤ µ, we know that f(x′) = y if

Σ_{z∈X} π_{x′}(z)²/π_x(z) < 1 + (1/(σ² − (µ − ν)²)) (µ − 1/2)². (45)

Proof. Following our discussion above, we know that f(x′) = y if E_{z∼Ψ(x′)}[g(z)_y] > 0.5. We can compute a (tight) lower bound on min_{h∈H} E_{z∼Ψ(x′)}[h(z)], with H defined as in Eq. 30, by following the functional optimization approach for randomized smoothing proposed by Zhang et al. (2020). That is, we solve a dual problem in which we optimize the value h(z) for each z ∈ X. By the definition of the set H, our optimization problem is

min_{h: X→R} E_{z∼Ψ(x′)}[h(z)]   s.t.   E_{z∼Ψ(x)}[h(z)] ≥ µ,   E_{z∼Ψ(x)}[(h(z) − ν)²] ≤ σ².

The corresponding dual problem with dual variables α, β ≥ 0 is

max_{α,β≥0} min_{h: X→R} E_{z∼Ψ(x′)}[h(z)] + α(µ − E_{z∼Ψ(x)}[h(z)]) + β(E_{z∼Ψ(x)}[(h(z) − ν)²] − σ²). (46)
We first move all terms that do not involve h out of the inner optimization problem:

= max_{α,β≥0} αµ − βσ² + min_{h: X→R} E_{z∼Ψ(x′)}[h(z)] − αE_{z∼Ψ(x)}[h(z)] + βE_{z∼Ψ(x)}[(h(z) − ν)²]. (47)

Writing out the expectation terms and combining them into one sum (or – in the case of continuous X – one integral), our dual problem becomes

= max_{α,β≥0} αµ − βσ² + min_{h: X→R} Σ_{z∈X} h(z)π_{x′}(z) − αh(z)π_x(z) + β(h(z) − ν)²π_x(z) (48)

(recall that π_{x′} and π_x refer to the probability mass functions of the smoothing distributions). The inner optimization problem can be solved by finding the optimal h(z) in each point z:

= max_{α,β≥0} αµ − βσ² + Σ_{z∈X} min_{h(z)∈R} h(z)π_{x′}(z) − αh(z)π_x(z) + β(h(z) − ν)²π_x(z). (49)

Because β ≥ 0, each inner optimization problem is convex in h(z). We can thus find the optimal h*(z) by setting the derivative to zero:

d/dh(z) [h(z)π_{x′}(z) − αh(z)π_x(z) + β(h(z) − ν)²π_x(z)] = 0 (50)
⟺ π_{x′}(z) − απ_x(z) + 2β(h(z) − ν)π_x(z) = 0 (51)
⟹ h*(z) = −π_{x′}(z)/(2βπ_x(z)) + α/(2β) + ν. (52)

Substituting into Eq. 48 and simplifying leaves us with the dual problem

max_{α,β≥0} αµ − βσ² − α²/(4β) + α/(2β) − αν + ν − (1/(4β)) Σ_{z∈X} π_{x′}(z)²/π_x(z). (53)

In the following, let us use ρ = Σ_{z∈X} π_{x′}(z)²/π_x(z) as a shorthand for the expected likelihood ratio. The problem is concave in α. We can thus find the optimum α* by setting the derivative to zero, which gives us α* = 2β(µ − ν) + 1. Because β ≥ 0 and our theorem assumes ν ≤ µ, α* is a feasible solution to the dual problem. Substituting into Eq. 53 and simplifying results in

max_{β≥0} α*µ − βσ² − α*²/(4β) + α*/(2β) − α*ν + ν − ρ/(4β) (54)
= max_{β≥0} β((µ − ν)² − σ²) + µ + (1/(4β))(1 − ρ). (55)

Lemma 1 shows that the expected likelihood ratio ρ is always greater than or equal to 1. Lemma 2 shows that (µ − ν)² − σ² ≤ 0. Therefore Eq. 55 is concave in β. The optimal value of β can again be found by setting the derivative to zero:

β* = √((1 − ρ)/(4((µ − ν)² − σ²))). (56)

Recall that our theorem assumes σ² ≥ (µ − ν)² and thus β* is real-valued. Substituting into Eq. 55 shows that the maximum of our dual problem is

µ − √((ρ − 1)(σ² − (µ − ν)²)). (57)

By duality, this is a lower bound on our primal problem min_{h∈H} E_{z∼Ψ(x′)}[h(z)]. We know that our prediction is certifiably robust, i.e. f(x′) = y, if min_{h∈H} E_{z∼Ψ(x′)}[h(z)] > 0.5. So, in particular, our prediction is robust if

µ − √((ρ − 1)(σ² − (µ − ν)²)) > 0.5 (58)
⟺ ρ < 1 + (1/(σ² − (µ − ν)²)) (µ − 1/2)² (59)
⟺ Σ_{z∈X} π_{x′}(z)²/π_x(z) < 1 + (1/(σ² − (µ − ν)²)) (µ − 1/2)². (60)

The last equivalence is the result of inserting the definition of the expected likelihood ratio ρ. With Theorem 3 in place, we can certify robustness for arbitrary smoothing distributions, assuming we can compute the expected likelihood ratio. When we are working with discrete data and the smoothing distributions factorize (but are not necessarily i.i.d.), this can be done efficiently, as the two following base certificates for binary data demonstrate.

B.2.1 BERNOULLI VARIANCE SMOOTHING FOR PERTURBATIONS OF BINARY DATA

We begin by proving the base certificate presented in Section 5. Recall that we use a smoothing distribution F(x, θ) with θ ∈ [0,1]^D_in that independently flips the d-th bit with probability θ_d, i.e. for x, z ∈ {0,1}^D_in and z ∼ F(x, θ) we have Pr[z_d ≠ x_d] = θ_d.

Theorem 1. Given an output g_n: {0,1}^D_in → Δ^{|Y|} mapping to scores from the |Y|-dimensional probability simplex, let f_n(x) = argmax_{y∈Y} E_{z∼F(x,θ)}[g_n(z)_y] be the corresponding smoothed classifier with θ ∈ [0,1]^D_in.
Given an input x ∈ {0,1}^D_in and smoothed prediction y_n = f_n(x), let µ = E_{z∼F(x,θ)}[g_n(z)_{y_n}] and σ² = Var_{z∼F(x,θ)}[g_n(z)_{y_n}]. Then, ∀x′ ∈ H^(n): f_n(x′) = y_n with H^(n) defined as in Eq. 2, w_d = ln((1 − θ_d)²/θ_d + θ_d²/(1 − θ_d)), η = ln(1 + (1/σ²)(µ − 1/2)²) and κ = 0.

Proof. Based on our definition of the base certificate interface from Section 5, we must show that ∀x′ ∈ H: f_n(x′) = y_n with

H = {x′ ∈ {0,1}^D_in | Σ_{d=1}^{D_in} ln((1 − θ_d)²/θ_d + θ_d²/(1 − θ_d)) · |x′_d − x_d|⁰ < ln(1 + (1/σ²)(µ − 1/2)²)}. (61)

Because all bits are flipped independently, our probability mass function π_x(z) = Pr_{z̃∼Ψ(x)}[z̃ = z] factorizes:

π_x(z) = Π_{d=1}^{D_in} π_{x_d}(z_d) (62)

with

π_{x_d}(z_d) = θ_d if z_d ≠ x_d, and 1 − θ_d else. (63)

Thus, our expected likelihood ratio can be written as

Σ_{z∈{0,1}^D_in} π_{x′}(z)²/π_x(z) = Σ_{z∈{0,1}^D_in} Π_{d=1}^{D_in} π_{x′_d}(z_d)²/π_{x_d}(z_d) = Π_{d=1}^{D_in} Σ_{z_d∈{0,1}} π_{x′_d}(z_d)²/π_{x_d}(z_d). (64)

For each dimension d, we can distinguish two cases: If both the perturbed and unperturbed input are the same in dimension d, i.e. x′_d = x_d, then π_{x′_d}(z)/π_{x_d}(z) = 1 and thus

Σ_{z_d∈{0,1}} π_{x′_d}(z_d)²/π_{x_d}(z_d) = Σ_{z_d∈{0,1}} π_{x′_d}(z_d) = θ_d + (1 − θ_d) = 1. (65)

If the perturbed and unperturbed input differ in dimension d, then

Σ_{z_d∈{0,1}} π_{x′_d}(z_d)²/π_{x_d}(z_d) = (1 − θ_d)²/θ_d + θ_d²/(1 − θ_d). (66)

Therefore, the expected likelihood ratio is

Π_{d=1}^{D_in} Σ_{z_d∈{0,1}} π_{x′_d}(z_d)²/π_{x_d}(z_d) = Π_{d=1}^{D_in} ((1 − θ_d)²/θ_d + θ_d²/(1 − θ_d))^{|x′_d − x_d|}. (67)

Due to Theorem 3 (and using ν = µ when computing the variance), we know that our prediction is robust, i.e. f_n(x′) = y_n, if

Σ_{z∈{0,1}^D_in} π_{x′}(z)²/π_x(z) < 1 + (1/σ²)(µ − 1/2)² (68)
⟺ Π_{d=1}^{D_in} ((1 − θ_d)²/θ_d + θ_d²/(1 − θ_d))^{|x′_d − x_d|} < 1 + (1/σ²)(µ − 1/2)² (69)
⟺ Σ_{d=1}^{D_in} ln((1 − θ_d)²/θ_d + θ_d²/(1 − θ_d)) · |x′_d − x_d| < ln(1 + (1/σ²)(µ − 1/2)²). (70)

Because x_d and x′_d are binary, the last inequality is equivalent to

Σ_{d=1}^{D_in} ln((1 − θ_d)²/θ_d + θ_d²/(1 − θ_d)) · |x′_d − x_d|⁰ < ln(1 + (1/σ²)(µ − 1/2)²). (71)

B.2.2 SPARSITY-AWARE VARIANCE SMOOTHING FOR PERTURBATIONS OF BINARY DATA

Sparsity-aware randomized smoothing (Bojchevski et al., 2020) is an alternative smoothing approach for binary data. It uses
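The Bernoulli variance smoothing certificate of Theorem 1 is straightforward to evaluate once µ and σ² are known. Below is a minimal sketch with hypothetical function names; in practice µ and σ² must be estimated from samples with appropriate confidence bounds (see Section C.2), and the condition is only meaningful when µ > 0.5.

```python
import numpy as np

def bernoulli_variance_certificate(theta, mu, var):
    """Base-certificate parameters of Theorem 1 (Bernoulli variance smoothing).

    theta: (D_in,) per-dimension flip probabilities.
    mu:    expected softmax score of the predicted class under smoothing (should exceed 0.5).
    var:   variance of that softmax score under smoothing.
    Returns (w, eta): the prediction is certified if the sum of w_d over flipped bits is below eta.
    """
    w = np.log((1.0 - theta) ** 2 / theta + theta ** 2 / (1.0 - theta))
    eta = np.log(1.0 + (mu - 0.5) ** 2 / var)
    return w, eta

def is_certified(x, x_prime, w, eta):
    flipped = (x != x_prime)
    return np.sum(w[flipped]) < eta

# Toy usage: localized smoothing with small flip probability on "nearby" bits
# and large flip probability on "distant" bits.
theta = np.concatenate([np.full(4, 0.02), np.full(4, 0.45)])
w, eta = bernoulli_variance_certificate(theta, mu=0.85, var=0.01)
x = np.zeros(8, dtype=int)
x_prime = x.copy()
x_prime[-2:] = 1             # flip two heavily smoothed bits
print(is_certified(x, x_prime, w, eta))
```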
1. Why is Equation 1 not an equality? Is not the set H for every pixel output exactly tight? Or is the assumption here that H can be an over-approximation to the true certified region?
2. In the case where H is given by localized smoothing using Propositions 1 or 2, will this result in an exact equality for (1)?
3. Can the authors comment on the complexity of solving (5) with the proposed splits in the inputs? How does the method compare against the Naive approach?
4. It is not clear how the Naive classifier in Figure 2 works. Is it simply performing RS certification for every pixel independently and counting the number of certifiable functions with a shared sigma over all inputs? If so, why does the dashed line perform worse than this Naive approach?
5. It is also not clear why solving the linear program reformulation works better than direct RS with anisotropic certificates (dashed line vs solid orange). Since the linear program lower bounds the original objective, shouldn't it provide a pessimistic certified accuracy compared to the dashed orange? That is to say, it tends to produce more adversaries that flip predictions under the lower bound but do not necessarily flip predictions of the original binary objective?
6. What is the key motivation/intuition behind believing that solving (3) is better than directly certifying the anisotropic certificates? This link is very much unclear to me.
7. One of the key weaknesses that I see so far is the small certification experiments. Perhaps a different dataset experiment following Fischer et al. would make the submission stronger.
8. It is not clear to me if showing state-of-the-art certification results is not the objective of the paper; what is the key objective/insight of this reformulation? Showing that localized RS has a better trade-off between accuracy and robustness is not sufficiently novel for several reasons. (1) The per-pixel sigma certification was derived in prior art, which the paper faithfully states. (2) The proposed method of non-shared sigma over all methods (without linear relaxation) is expected to be better intuitively, as it overparameterizes the smoothing distribution, holding the global shared-sigma RS as a special case. So, I do not
Summary Of The Paper

Summary: The paper proposes a localized smoothing approach for certifying structured output models. The threat model of interest in this setting is the bounded input perturbation that results in the highest number of prediction flips per pixel. The paper proposes to utilize the idea that when certifying pixel predictions at location (i,j), one can smooth the input pixels (k,l) far away from (i,j) with a larger noise magnitude (standard deviation), resulting in higher certified radii, as they perhaps may not play a significant role in predicting the label of (i,j). The paper formulates the problem of finding worst-case adversaries that result in misprediction as the task of finding adversaries outside the certified region with anisotropic smoothing over the input. The paper then lower bounds the binary objective with box constraints and solves the problem using linear programs. Experiments are conducted on image segmentation tasks along with node classification.

Review

Strengths: I very much appreciate the extensive experiments in tuning the baseline with several sigmas to guarantee (1) the best robust accuracy of the baseline when compared to localized randomized smoothing, and (2) that when comparing certified ratios, the accuracies are comparable. The paper's proposed methodology is very well written and the formulation is beautifully simple and intuitive.

General Questions

- Why is Equation 1 not an equality? Is not the set H for every pixel output exactly tight? Or is the assumption here that H can be an over-approximation to the true certified region?
- In the case where H is given by the localized smoothing using Propositions 1 or 2, will this result in an exact equality for (1)?
- Can the authors comment on the complexity of solving (5) with the proposed splits in the inputs? How does the method compare against the Naive approach? I believe it should be much more expensive, as it requires the computation of the lower bound to the probability of success followed by solving the high-dimensional linear program.
- There are no comparisons against Fischer et al. on certifying semantic segmentation.
- It is not clear how the Naive classifier in Figure 2 works. Is it simply performing RS certification for every pixel independently and counting the number of certifiable functions with a shared sigma over all inputs? If so, it is not clear to me why the dashed line performs worse than this Naive approach.
- It is also not clear to me why solving the linear program reformulation works better than direct RS with anisotropic certificates (dashed line vs solid orange). Since the linear program lower bounds the original objective, shouldn't it provide a pessimistic certified accuracy compared to the dashed orange? That is to say, it tends to produce more adversaries that flip predictions under the lower bound but do not necessarily flip predictions of the original binary objective. This is related to (6).
- What is the key motivation/intuition behind believing that solving (3) is better than directly certifying the anisotropic certificates? This link is very much unclear to me.
- One of the key weaknesses that I see so far is the small certification experiments. For example, on the segmentation task, the certified accuracy is reported only on 50 images of the validation dataset. Perhaps a different dataset experiment (Cityscapes) following Fischer et al. would make the submission stronger.
- It is not clear to me: if showing state-of-the-art certification results is not the objective of the paper, what is the key objective/insight of this reformulation? Showing that localized RS has a better trade-off between accuracy and robustness is not sufficiently novel for several reasons. (1) The per-pixel sigma certification was derived in prior art, which the paper faithfully states. (2) The proposed method of non-shared sigma over all methods (without linear relaxation) is expected to be better intuitively, as it overparameterizes the smoothing distribution, holding the global shared-sigma RS as a special case. So, I do not follow: what are the key new insights that I am missing here (please correct me if I am wrong here)? (3) The certification time is not reported, and it is expected to be much larger.
- I generally enjoyed reading the variance smoothing part (Section B.2 in the appendix). The authors are to be commended for it. Can the authors comment on how much improvement in radius the incorporation of the variance brings compared to classical RS certification? Does it actually tighten the radius for other certificates beyond Gaussian? If so, this should be highlighted more in the paper, as I believe this is significant. If not, then why do variance smoothing rather than working directly with the expectation fixed and lower-bounding over functions with a fixed expected prediction under the Bernoulli distribution? So in short, why variance smoothing?

Minor comments

- Page 3, line 9: "be the probability of g y" >> "of predicting y under g".
- Caption of Figure 2: Sigma_min is given two values; I believe the second should be Sigma_max.
- Page 8, line 11: sigma_min assigned twice.
- Page 8, line 16: "Figure Fig. 2".
- Proposition 4 on page 17: The radius shown by Eiras et al. is without the inverse CDF. It is only the difference in predictions between the top and runner-up class.
- Figure 1: I generally advise having a 4x6 grid in the figure, aligning with the text. This is particularly relevant because when Figure 1 was referenced in the experiments, it was after describing the 4x6 split of the input.
ICLR
Title Revisiting Domain Randomization Via Relaxed State-Adversarial Policy Optimization Abstract Domain randomization (DR) is widely used in reinforcement learning (RL) to bridge the gap between simulation and reality through maximizing its average returns under the perturbation of environmental parameters. Although effective, the methods have two limitations: (1) Even the most complex simulators cannot capture all details in reality due to finite domain parameters and simplified physical models. (2) Previous methods often assume that the distribution of domain parameters is a specific family of probability functions, such as a normal or a uniform distribution, which may not be correct. To enable robust RL via DR without the aforementioned limitations, we rethink DR from the perspective of adversarial state perturbation, without the need for re-configuring the simulator or relying on prior knowledge about the environment. We point out that perturbing agents to the worst states during training is naı̈ve and could make the agents over-conservative. Hence, we present a Relaxed State-Adversarial Algorithm to tackle the over-conservatism issue by simultaneously maximizing the average-case and worst-case performance of policies. We compared our method to the state-of-the-art methods for evaluation. Experimental results and theoretical proofs verified the effectiveness of our method. 1 INTRODUCTION Most reinforcement learning (RL) agents are trained in simulated environments due to the difficulties of collecting data in real environments. However, the domain shift, where the simulated and real environments are different, could significantly reduce the agents’ performance. To bridge this “reality gap”, domain randomization (DR) methods perturb environmental parameters (Tobin et al., 2017; Rajeswaran et al., 2016; Jiang et al., 2021), such as the mass or the friction coefficient, to simulate the uncertainty in state transition probabilities and expect the agents to maximize the return over the perturbed environments. Despite its wide applicability, DR suffers from two practical limitations: (i) DR requires direct access to the underlying parameters of the simulator, and this could be infeasible if only off-the-shelf simulation platforms are available. (ii) To enable sampling of environmental parameters, DR requires a prior distribution over the feasible environmental parameters. However, the design of such a prior typically relies on domain knowledge and could significantly affect the performance in real environments. To enable robust RL via DR without the above limitations, we rethink DR from the perspective of adversarial state perturbation, without the need for re-configuring the simulator or relying on prior knowledge about the environment. The idea is that perturbing the transition probabilities can be equivalently achieved by imposing perturbations upon the states after nominal state transitions. To substantiate the idea of state perturbations, a simple and generic approach from the robust optimization literature (Ben-Tal & Nemirovski, 1998) is taking a worst-case viewpoint and perturbing the states to nearby states that have the lowest long-term expected return under the current policy (Kuang et al., 2021). While being a natural solution, such a worst-case strategy could suffer from severe over-conservatism. 
We identify that the over-conservative behavior results from the tight coupling between the need for temporal difference (TD) learning in robust RL and the worst-case operation of state perturbation. Specifically: (1) In robust RL, the value functions are learned with the help of bootstrapping in TD methods since finding nearby worst-case states via Monte-Carlo sampling is NP-hard (Ho et al., 2018; Chow et al., 2015; Behzadian et al., 2021). (2) Under the worst-case state perturbations, TD methods would update the value function based on the local minimum within a neighborhood of the nominal next state and are, therefore, completely unaware of the value of the nominal next state. As a result, the learner could fail to identify or explore those states with potentially high returns. To further illustrate this phenomenon, we consider a toy grid world example of finding the shortest path toward the goal, as shown in Figure 1(a). Although the goal state has a high value, the TD updates cannot propagate the value to other states since all nominal state transitions toward the goal state are perturbed away under the worst-case state-adversarial method. What’s even worse, the agent ultimately learns to move toward the trap state due to the compounding effect of TD updates and worst-case state-adversarial perturbations. Notably, in addition to the grid world environment, such trap terminal states also commonly exist in various RL problems, such as the locomotion tasks in MuJoCo. As a result, there remains one critical unanswered question in robust RL: how to fully unleash the power of the state-adversarial model in robustifying RL algorithms without suffering from over-conservatism? To answer this question, we introduce relaxed state-adversarial perturbations. Specifically: (1) Instead of taking a pure worst-case perspective, we simultaneously consider both the average-case and worst-case scenarios during training. By incorporating the average-case scenarios, the TD updates can successfully propagate the values of those potentially high-return states to other states and thereby prevent the over-conservative behavior (Figure 1(b)). (2) To substantiate the above idea, we introduce a relaxed state-adversarial transition kernel, where the average-case environment can be easily represented by the interpolation of the nominal and the worst-case environments. Under this new formulation of DR, each interpolation coefficient corresponds to a distribution of state adversaries. (3) Besides, based on this formulation, we theoretically quantify the performance gap between the average-case and the worst-case environments, and prove that maximizing the average-case performance can also benefit the worst-case performance. (4) Accordingly, we present Relaxed state-adversarial policy optimization, a bi-level framework that optimizes the rewards of the two cases alternately and iteratively. One level updates the policy to maximize the average-case performance, and the other updates the interpolation coefficient of the relaxed state-adversarial transition kernel to increase the lower bound of the return of the worst-case environment.

2 RELATED WORK

Robust Markov Decision Process (MDP) and Robust RL. Robust MDP aims to maximize rewards in the worst situations if the testing environment deviates from the training environment (Nilim & El Ghaoui, 2005; Iyengar, 2005; Wiesemann et al., 2013). Due to the large search space, the complexity of robust MDPs grows rapidly when the dimensionality increases.
Therefore, Tamar et al. (2014) developed an approximate dynamic programming method to scale up the robust MDP paradigm. Roy et al. (2017) extended the method to nonlinear estimation and guaranteed convergence to a regional minimum. Afterward, the works of Wang & Zou (2021) and Badrinath & Kalathil (2021) studied the convergence rate when applying function approximation under certain assumptions. Derman et al. (2021) showed that regularized MDPs are a particular instance of robust MDPs with uncertain rewards. They solved regularized MDPs rather than robust MDPs to reduce computation complexity. Grand-Clément & Kroer (2020) developed efficient proximal updates to solve the distributionally robust MDP via gradient descent and improved the convergence rate. However, although several approximations were presented, such environment models are still too restrictive, and they cannot be used to solve real-world problems.

Adversary in Observations. Even a small perturbation to observations may significantly degrade agents’ performance because deep neural networks are vulnerable to inputs constructed by adversaries (Huang et al., 2017). Therefore, methods were presented to train agents under environments with adversarial attacks to improve their robustness (Kos & Song, 2017; Pattanaik et al., 2018). To guarantee a lower-bound performance, the works of Lütjens et al. (2020) and Wang et al. (2019) adopted the idea of certified defense used in classification problems. When making discrete actions, agents are certifiably robust to adversaries in observation within an ε distance (Lp-norm). Since most real-world problems are continuous, methods were also presented (Weng et al., 2019; Zhang et al., 2020; Oikarinen et al., 2021; Zhang et al., 2021) to improve agents’ robustness for continuous actions.

Domain Randomization. Environments can induce uncertainty in transition probabilities. To simulate this circumstance, one can perturb the environmental parameters of a simulator to reasonably change transition probabilities when training agents (Huang et al., 2021; Tobin et al., 2017; Jiang et al., 2021; Igl et al., 2019; Cobbe et al., 2019). Specifically, Tobin et al. (2017) randomly sampled environmental variables and optimized the agents’ average reward. Given that a significant perturbation may cause training to fail, Cobbe et al. (2019) increased the level of difficulty step by step when training agents to improve their average rewards. Jiang et al. (2021) further considered the expected return in the optimal case and introduced monotonic robust policy optimization to maximize the average-case and worst-case returns simultaneously. Since perturbing transition probabilities through environmental parameters demands prior knowledge, Kuang et al. (2021) transferred states to the nearby local minimum based on gradients obtained from the value function to imitate environmental disturbances. Igl et al. (2019) injected selective noise based on a variational information bottleneck and value networks to prevent models from overfitting the training environment. The regularization helps agents resist the uncertainty of state transition probabilities. Our method perturbs states through the gradients of the value function, as Kuang et al. (2021) did. However, pushing states toward the nearby local minimum will make agents over-conservative because they consider only the worst-case scenarios.
We present the relaxed state-adversarial perturbation and optimize both the average-case and worst-case environments to overcome this problem.

3 PRELIMINARIES

A robust Markov decision process (robust MDP) is characterized by a tuple (S, A, P, R, µ, γ), where S is the state space, A is the action space, P is the uncertainty set that contains all possible transition kernels, R: S × A → [−Rmax, Rmax] is the reward function, µ is the initial state distribution, and γ ∈ (0, 1) is the discount factor. Let P0 ∈ P denote the nominal transition kernel, which characterizes the transition dynamics of the nominal environment without perturbation. We define the total expected return under a policy π and a transition kernel P ∈ P as

J(π|P) := E_{s0∼µ, at∼π(·|st), st+1∼P(·|st,at)} [Σ_{t=0}^∞ γ^t R(st, at)]. (1)

For ease of exposition, we also define the value function under policy π and transition kernel P as V^π_P(s) := E_{at∼π(·|st), st+1∼P(·|st,at)} [Σ_{t=0}^∞ γ^t R(st, at) | s0 = s]. To learn a policy in a robust MDP, the DR approaches are built on two major design principles: (1) Construction of uncertainty set: DR presumes that one could have access to the environment parameters of the simulator. The uncertainty set P is constructed by specifying the possible range of one or multiple environment parameters, typically based on some domain knowledge. (2) Average-case perspective: DR resorts to maximizing the average performance with respect to some pre-configured distribution D over the uncertainty set P, i.e., E_{P∼D}[J(π|P)].

4 DOMAIN RANDOMIZATION VIA RELAXED STATE-ADVERSARY

4.1 CONNECTING DOMAIN RANDOMIZATION AND STATE PERTURBATION

Conventional DR methods enforce attacks on state transitions by perturbing the environment parameters of a simulator. This goal can be achieved by perturbing the state after each nominal transition (Kuang et al., 2021): Let (s, a) be some state-action pair, and ψ: S → S be a state perturbation function. In a nominal environment, the probability of the transition to some state s′ under s, a is P(s′|s, a). Under the state perturbation ψ, the probability becomes P(ψ(s′)|s, a). However, this state-adversarial attack is too effective, since a value function considers the expected future return, and a perturbation to an early state may significantly influence the later states. The over-conservatism problem therefore occurs. We present a relaxed state-adversarial policy optimization to overcome the problem. We also prove that the relaxed MDP enjoys two main properties under relaxation: (1) it stands for the average performance of the uncertainty set; (2) it guarantees improvement of the performance of the worst-case MDP. Further, we prove that a specific average-case MDP corresponds to a relaxation parameter. Hence, we propose an algorithm for adapting the relaxation parameters during training.

4.2 STATE-ADVERSARIAL MDPS AND UNCERTAINTY SETS

State-adversarial attacks perturb the current states to neighboring states with the lowest values. This perturbation process can be captured by a state-adversarial transition kernel, which connects the nominal MDP and the resulting state-adversarial MDP. For ease of exposition, for each state s ∈ S, we define N_δ(s) := {s′ | d(s, s′) ≤ δ} to be the δ-neighborhood of s, where d(s, s′) can be any distance metric. In this study, we use the L1-norm.

Definition 1 (State Perturbation Matrix).
Given a policy π and a perturbation parameter δ ≥ 0, the state perturbation matrix Z^π_δ with respect to π is defined as follows: for each pair of states i, j ∈ S,

Z^π_δ(i, j) := 1 if j = argmin_{s ∈ N_δ(i)} V^π(s), and 0 otherwise.   (2)

The justifications for choosing the above surrogate perturbation model are two-fold: (1) The model can be interpreted as constructing adversarial examples for the true states. (2) The perturbation model is closely related to the perturbation of environment parameters, which serves as the standard machinery in the canonical DR formulation, as described in (Kuang et al., 2021).

Remark 1. In a continuous state space, the argmin in Equation 2 can be computed by adapting the fast gradient sign method (FGSM) (Goodfellow et al., 2014). Let V be a value function (i.e., a network) with parameters φ, s be a state, and ε be the strength of perturbation. FGSM finds the perturbed state ν(s) = s − ε · sign(∇_s V(φ, s)) that has the minimum value, where ∥s − ν(s)∥_∞ ≤ ε, and the gradient at s is computed using back-propagation.

Definition 2 (State-Adversarial MDP). For any policy π, the corresponding state-adversarial MDP with respect to π is defined as a tuple (S, A, P^π_δ, R, µ, γ), where the state-adversarial transition kernel P^π_δ is defined as

P^π_δ(·|s, a) := [Z^π_δ]^⊤ P_0(·|s, a), ∀(s, a) ∈ S × A.   (3)

Recall that P_0 is the nominal transition kernel. We use the notation P^π_δ = [Z^π_δ]^⊤ P_0 in the later paragraphs for simplicity. Note that the state-adversarial transition matrix Z^π_δ depends on the strength of perturbation δ. Each perturbation radius δ results in a unique state-adversarial MDP P^π_δ.

Remark 2. The state-adversarial MDP defined in Definition 2 involves perturbation of the true states, which is fundamentally different from the perturbation of observations (Zhang et al., 2020).

Definition 3 (Uncertainty Set). Given a radius ε > 0, the uncertainty set induced by state-adversarial perturbations, denoted by P^π_ε, is defined as

P^π_ε := {P^π_δ : P^π_δ = [Z^π_δ]^⊤ P_0 and δ ≤ ε}.   (4)

The adversarial attack moves agents toward low-value states. Agents trained using this state-adversarial MDP would prevent themselves from falling into the worst situation (Kuang et al., 2021). However, a large ε will make agents too conservative and fail to reach any goal state because its value cannot be propagated to neighboring states by the TD updates (Figure 1). Although using a small ε can ease the problem, agents would completely ignore the risks outside the bounded area. Besides, this strategy is unachievable in a discrete environment due to the lower-bound value of ε. For example, the agent’s movement in the grid world is one hop and cannot be reduced.

Lemma 1 (Monotonicity of Average Value in Perturbation Strength). Under the setting of a state-adversarial MDP, the value of the local minimum monotonically decreases as the bounded radius increases. Let x be a positive real number. The return J satisfies

J(π | P^π_δ) ≥ J(π | P^π_{δ+x}), ∀π.   (5)

The proof is in Appendix A.3. Notably, Lemma 1 indicates that among the transition kernels in the uncertainty set P^π_ε, the worst case occurs when δ = ε.
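Before turning to the relaxation, the following minimal sketch makes the worst-case perturbation of Remark 1 concrete for continuous state spaces. It is an illustrative sketch rather than our released implementation; the names value_net and eps, and the use of PyTorch, are assumptions chosen for exposition.

```python
import torch

def perturb_state_fgsm(value_net, state, eps):
    """FGSM-style step toward the lowest-value neighboring state (Remark 1).

    Moves the state in the direction that decreases the value estimate while
    staying inside the L-infinity ball of radius eps around the nominal state,
    i.e., nu(s) = s - eps * sign(grad_s V(s)). `value_net` is assumed to map a
    state tensor to a scalar value estimate.
    """
    state = state.detach().clone().requires_grad_(True)
    value = value_net(state).sum()                # scalar output for autograd
    grad = torch.autograd.grad(value, state)[0]   # gradient of V w.r.t. the state
    perturbed = state - eps * torch.sign(grad)    # descend the value function
    return perturbed.detach()
```

Consistent with Lemma 1, taking the full-radius step yields the worst case within the uncertainty set, which is why the sketch uses a single step of strength eps rather than a smaller inner search.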
4.3 RELAXED STATE-ADVERSARIAL MDPS

We present a relaxation framework to address the over-conservatism issue. To begin with, we consider a relaxation on the state-adversarial transition kernel as follows:

Relaxed state-adversarial transition kernel. Given ε > 0 and α ∈ [0, 1], the α-relaxed state-adversarial transition kernel is defined as a convex combination of the nominal and the state-adversarial transition kernels, i.e.,

P^{π,α}_ε(·|s, a) = α P_0(·|s, a) + (1 − α) P^π_ε(·|s, a).   (6)

Connecting relaxed state-adversarial MDPs with domain randomization. DR methods demand a prior distribution for computing the average-case performance. Let D be a distribution over the uncertainty set P^π_ε. In the following, we show that applying DR with respect to D is equivalent to optimizing an objective under a relaxed state-adversarial transition kernel.

Lemma 2 (Relaxation parameter α as a prior distribution D in domain randomization). For any distribution D over the state-adversarial uncertainty set P^π_ε, there exists an α ∈ [0, 1] such that E_{P∼D}[J(π | P)] = J(π | P^{π,α}_ε).

The proof is in Appendix A.4. It is worth noting that different values of α represent different prior assumptions. For example, α = 1 implies that the prior probability of the nominal MDP is 1, whereas α = 0 indicates that the prior probability of the worst-case MDP is 1. In other words, we can control the value of α to represent different distributions D and train the policies under various environments. To achieve this goal, we quantify the gap between the average performance E_{P∼D}[J(π̃ | P)] and the worst-case performance J(π̃ | P^π_ε) when updating the current policy π to a new policy π̃, and then apply an optimization technique to maximize both of them. One naïve bound is as follows.

Theorem 1 (A naïve connection between the average-case and the worst-case returns). Given a nominal MDP with state adversaries, when updating the current policy π to a new policy π̃, the following bound holds (Jiang et al., 2021):

J(π̃ | P^π_ε) ≥ E_{P∼D}[J(π̃ | P)] − (2 Rmax E_{P∼D}[d_TV(P^π_ε ∥ P)]) / (1 − γ)^2 − (4 Rmax d_TV(π, π̃)) / (1 − γ)^2,   (7)

where Rmax is the maximum reward, d_TV(π, π̃) indicates the total variation divergence between π and π̃, and P^π_ε is the worst-case state-adversarial transition kernel.

Theorem 1 indicates that the gap between the average-case and the worst-case performance can be expressed using the MDP shift E_{P∼D}[d_TV(P^π_ε ∥ P)] and the policy evolution d_TV(π, π̃). The proof is in Appendix A.5. Note that the bound in Theorem 1 is loose because the value on the right-hand side (RHS) of Equation 7 can be very small. Specifically, the transition kernel probability shift E_{P∼D}[d_TV(P^π_ε ∥ P)] is multiplied by the total maximum return Rmax/(1 − γ), and the additional denominator 1 − γ makes the value even smaller since γ is usually set to 0.99 in RL applications. As a result, the bound can be meaningless unless the worst-case MDP P^π_ε is very close to the average MDP. Since state perturbation only perturbs states to nearby states, we consider the smoothness of the reward function and of the transition kernel to build a tight connection between the average-case and the worst-case returns. Specifically, Lipschitz continuity of the reward function has been widely used in the theory of RL (Fehr et al., 2018; Asadi et al., 2018; Ling et al., 2016). The smoothness of the transition kernel also holds in most environments (Shen et al., 2020; Lakshmanan et al., 2015). For example, in grid-world, the next state must be adjacent to the current state; and in MuJoCo, the poses of consecutive periods are similar, no matter what state-action pairs are considered. Formally, we define this smoothness property of transition kernels as:

Definition 4 (σ-Smooth Transition Kernel in State). Let P be a transition kernel and σ be a positive constant.
P is a σ-smooth transition kernel in state if

∥s − s′∥ ≤ σ,   (8)

for all a and for all s, s′ with P(s′|s, a) > 0. With the assumptions of Lipschitz continuity of the reward function and smoothness of the transition kernel, we arrive at the following bound:

Theorem 2 (Connecting Worst-Case and Average-Case Returns). Given a nominal MDP with two properties: (1) the reward function of the corresponding Markov Reward Process (MRP) with respect to any policy is an Lr-Lipschitz function; (2) the nominal transition kernel P_0 has the smooth transition property σ, where ∥s − s′∥_2 ≤ σ, ∀a and ∀P_0(s′|s, a) > 0. Then, after updating the current policy π to a new policy π̃, the following bound holds:

J(π̃ | P^π_ε) ≥ J(π̃ | P^{π,α}_ε) − (4γ(ε + σ) Lr α) / (1 − γ)^3 − (4(γ(ε + σ) Lr + (1 − γ)^2 Rmax) d_TV(π, π̃)) / (1 − γ)^3,   (9)

where d_TV(π, π̃) is the total variation divergence between π and π̃, P^{π,α}_ε is the relaxed state-adversarial transition kernel, and P^π_ε is the worst-case state-adversarial transition kernel.

The proof is provided in Appendix A.6. Notably, Theorem 2 holds for any relaxation parameter α ∈ [0, 1]. We now briefly discuss the technical challenges in the proof: (1) Propagation of state perturbations across time: the main difficulty lies in the fact that the difference of trajectories under different MDPs would increase in a rather nonlinear and complex manner as time evolves. (2) Quantifying the difference in rewards among trajectories generated under different transition kernels: to measure the difference in rewards under different MDPs, it is necessary to consider not only the probability difference at time t but also the difference in rewards at different states. Despite the above challenges, our proof uses the finding that the difference between the state distributions induced by the two MDPs P^π_ε and P^{π,α}_ε at time step t can be bounded by α times a factor in [0, 1]. Then, under the smoothness conditions of the reward function and the transition matrix, we are able to characterize a tight bound between the average-case and the worst-case performance.

The intuition of Theorem 2 can be expressed using the terms on the RHS of Equation 9. The first term is the average performance over all MDPs in the uncertainty set. The second term penalizes a large value of α because it implies that the relaxed MDP is close to the nominal environment. In other words, we expect the average-case performance to be high while pushing the uncertainty set close to the worst-case MDP. Finally, the third term prevents a significant update in a single step by reducing the total variation divergence d_TV(π, π̃).

4.4 ONLINE ADAPTATION OF THE RELAXATION PARAMETER

We leverage Theorem 2 to address both the average-case and the worst-case performance. Specifically, we present a bi-level approach to maximize the lower bound of the worst-case performance (i.e., the RHS of Theorem 2) since the unknowns α and π are correlated. The two tasks are optimized alternately and iteratively. Details are as follows:
• Lower-level task for average-case return: On the lower level, we improve the policy by optimizing the objective J(π_{θ_t} | P^{π_{θ_{t−1}}, α_t}_ε) under a fixed relaxation parameter α_t. This can be done by using any off-the-shelf RL algorithm (e.g., PPO with a clipped objective).
• Upper-level task for worst-case return: On the upper level, we design a meta objective J_meta(α_t) to represent the lower bound of the worst-case performance (the RHS of Equation 9). Hence, the task aims to find a relaxation parameter α_t that can maximize J_meta(α_t).
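Both levels interact with the training environment only through the relaxed kernel of Equation 6, which can be realized without touching the simulator's parameters: after each nominal transition, the next state is replaced by its worst-case neighbor with probability 1 − α. The following minimal sketch illustrates this sampling step; it is not our released implementation, and the identifiers (env, value_net, perturb_state_fgsm from the earlier sketch, alpha, eps) as well as the Gym-style four-tuple step API are assumptions made for exposition.

```python
import torch

def relaxed_adversarial_step(env, action, value_net, eps, alpha):
    """Sample one transition from the alpha-relaxed state-adversarial kernel (Eq. 6).

    With probability alpha the nominal next state is kept (kernel P_0); with
    probability 1 - alpha it is replaced by its lowest-value neighbor inside the
    eps-ball, computed with the FGSM sketch above (worst-case kernel).
    """
    next_state, reward, done, info = env.step(action)       # nominal transition
    next_state = torch.as_tensor(next_state, dtype=torch.float32)
    if torch.rand(()).item() >= alpha:                       # probability 1 - alpha
        next_state = perturb_state_fgsm(value_net, next_state, eps)
    return next_state, reward, done, info
```

In the lower-level task, rollouts collected with this step function are fed to a standard PPO update; the perturbed next state is what the policy and the value network see, so the only change relative to vanilla PPO is the state replacement above.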
To enable stable training, we iteratively update α_t by applying the online cross-validation algorithm (Sutton, 1992). Both the lower- and upper-level tasks aim to increase the lower bound of the worst-case performance J(π_{θ_t} | P^{π_{θ_{t−1}}}_ε) (Equation 9). In the lower level, a constant relaxation parameter α_t represents a specific distribution D; it seeks to maximize the average return over all environments in the uncertainty set following distribution D. In the upper level, the optimization adjusts α to maximize this lower bound. On one hand, increasing α_t improves the average performance J(π_{θ_t} | P^{π_{θ_{t−1}}, α_t}_ε) since the average case moves toward the nominal environment, yet the price is increasing the MDP shift (i.e., the second term of the RHS in Equation 9). On the other hand, decreasing α changes the performance and the penalty oppositely. Since π is weak initially and its performance gradually improves, the meta-objective optimization tends to decrease and then increase α during training.

Algorithm 1 illustrates our implementation. We first update the policy π_{θ_t} to maximize the average-case return J(π_{θ_t} | P^{π_{θ_{t−1}}, α_t}_ε) using proximal policy optimization (PPO). Afterward, we update the relaxation parameter α to ensure that the worst-case return is higher than a specific bound (Equation 9). Note that the samples used in the two steps are different (Lines 3 and 6 of Algorithm 1) because the meta-objective optimization is an online method. In addition, we chose PPO as the base algorithm since it prevents the model from being updated significantly in a single step. It helps to control the penalty term d_TV(π, π̃) in Theorem 2. The implementation details are provided in Appendix A.7.

Algorithm 1: Relaxed State-Adversarial Policy Optimization
Input: MDP (S, A, P_0, r, γ), objective function L, step size parameter η, number of iterations T; P_0 is the nominal transition kernel; ε is the neighborhood radius
1 Initialize the policy π_{θ_0}; for t = 0, . . . , T − 1 do
2   Sample the tuples {(s_i, a_i, r_i, s′_i)}_{i=1}^{T_upd}, where a′_i ∼ π_{θ_t}(·|s′_i) and s′_i ∼ P_0(·|s_i, a_i)
3   Evaluate J(π_{θ_t} | P^{π_{θ_{t−1}}, α_t}_ε)
4   Update the policy to π_{θ_{t+1}} by applying multi-step SGD to the objective function as in PPO
5   Sample the tuples {(s_i, a_i, r_i, s′_i)}_{i=1}^{T′_upd}, where a′_i ∼ π_{θ_{t+1}}(·|s′_i) and s′_i ∼ P_0(·|s_i, a_i)
6   Update the relaxation parameter to α_{t+1} via one SGD update with respect to the meta-objective
7 end

5 EXPERIMENTAL RESULTS AND EVALUATIONS

We conducted two experiments on MuJoCo (Todorov et al., 2012) to evaluate the performance of our relaxed state-adversarial policy optimization (RAPPO). All the baselines and our method were implemented on top of PPO (Schulman et al., 2017), and the default training parameters were used. In addition, the results were averaged over five different runs/seeds.

Robustness against Environmental Adversaries. We compared our RAPPO with the latest DR method, MRPO (Jiang et al., 2021), to evaluate its robustness against the uncertainty of environmental parameters¹. Agents trained using the two methods were evaluated in environments in which the size and gravity were drifted in the range of 0.6 – 1.4. To simulate the situation that domain knowledge is unavailable, during training, MRPO perturbed mass and friction in the range of 0.8 – 1.2, and our RAPPO attacked the states via its value function. Figure 2 shows the differences between the rewards of the two methods. As can be seen, our RAPPO outperformed MRPO since state adversaries were more general than environmental adversaries.
Agents trained by MRPO could perform poorly when the perturbations in the training and testing environments were different. Robustness Against States Adversaries. We compared our RAPPO with SCPPO (Kuang et al., 2021) to evaluate its robustness against state adversaries. Both of the methods perturb states to improve agents’ robustness. We also included vanilla PPO in the experiment because it is the base algorithm of RAPPO and SCPPO. To achieve a fair comparison, the parameters used in RAPPO and SCPPO were the same. Specifically, we set ✏ to 0.015, 0.002, 0.03, 0.001, and 0.005 to the environments of HalfCheetah-v2, Hopper-v2, Ant-v2, Walker-v2, and Humanoid2d-v2, respectively. The parameters were chosen according to the variance of actions in the environments. 1We obtained the official implementation of MRPO from http://proceedings.mlr.press/v139/jiang21c.html and used their default parameter setting. Table 1 shows the testing results. We attacked the agents using their respective value functions under multiple strengths. Specifically, we repeated the experiments from 5 different seeds and generated 50 trajectories for each seed from different initial states for evaluation. The means and standard deviations of the rewards were reported. Clearly, the results fulfilled Lemma 1, where agents’ performance decreased as the strength of attack increased. In addition, our RAPPO was competitive to PPO and SCPPO in nominal environments, and its performance decreased the slowest as the strength of attack increased. It deserves noting that the attacks in the last two columns of Table 1 were stronger than that of the worst-case. Our RAPPO performed the best in the environments. Extending SAPPO Using Relaxed State Adversaries. While our RAPPO successfully improves the robustness of agents against state adversaries, a classical method, SAPPO (Zhang et al., 2020), can help agents against the perturbation of state observations. We thus extended SAPPO by adopting our relaxed state adversarial attacks during training and evaluated its effectiveness. Similarly, we compared the methods on the trajectories of 5 seeds and 50 initial states. Table 2 shows the results. As indicated, the extended RA SAPPO outperformed SAPPO in most of the environments, particularly under strong attacks. Steady Improvements of the Average and Worst Case Environments. We apply a bi-level approach to optimize the average and worst-case environments during training. To verify the feasibility of this approach, we evaluated the agents’ performance under these two cases during training. To determine the worst-case result, we generated 50 trajectories from different initial states, perturbed states with the same strength as the training ✏, and then averaged the rewards. In contrast, the average-case result was determined from 50 initial states and 10 different perturbation strengths, which were uniformly distributed between 0 and ✏. In total, the rewards of 50 ⇥ 10 trajectories were averaged. Figure 3 shows that our RAPPO can steadily improve the average-case performance without sacrificing the worst-case performance. Note that the high variance of the average-case rewards is reasonable because of different adversarial strengths. The value of the relaxation parameter ↵. Our meta-objective optimization determines the relaxation parameter ↵ (Equation 6) to control the strengths of state adversaries during training. While ↵ is unknown, an intuitive idea is to consider ↵ a hyper-parameter and let users specify the value. 
However, we point out that the value of α should vary at different training stages since agents are weak initially and can perform well after training. To verify that a dynamic α is superior to a constant α (i.e., RAPPO-C), we evaluated the performance of agents under state-perturbed environments. In the experiments, we set α = 0.5 for RAPPO-C since it is in the middle of the nominal and worst-case environments. The remaining parameters between the methods were exactly the same. As indicated in Table 1, RAPPO clearly outperformed RAPPO-C. We also refer readers to Appendix A.8 for the dynamics of α during training.

6 CONCLUSIONS

We have presented a relaxed state-adversarial policy optimization to improve the robustness of agents against the uncertainty of environments. Compared to the methods in DR, we perturbed states using the adversarial attack so as to decouple randomization from simulators. Neither prior knowledge for selecting environmental parameters nor a prior assumption on the parameter distribution is needed. In addition, we introduced a relaxation strategy to tackle the over-conservatism problem caused by state-adversarial attacks. Our policy optimization maximizes rewards in the average case while holding the lower-bound rewards in the worst-case environments simultaneously. Experimental results and theoretical proofs demonstrate the effectiveness of our method.

Limitations and Future Work. Our relaxation method is state-independent, in which the value of α is adjusted according to the overall performance of the policy. Since the degree of difficulty varies from state to state, it will be interesting to investigate a state-dependent relaxation method. In addition, we currently assume that each dimension of the states is equally important, which may not be the case. We will also explore the weight of each dimension when perturbing states in the future.
1. What is the focus and contribution of the paper regarding robust RL?
2. What are the strengths of the proposed approach, particularly in its connection to domain randomization and the average-case robustness metric?
3. What are the weaknesses of the paper, especially regarding the proofs and the dependence on the current policy?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the effectiveness of the upper-level task in improving performance?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposed an adversarial training approach for robust RL. The approach is a bi-level method where the lower-level task maximizes the average-case return against state-adversarial attacks, while the upper-level task handles the worst-case adversarial return via maximizing a lower bound. Empirical experiments show improved performance over baselines on MuJoCo tasks with environmental perturbations and state-adversarial attacks.

Strengths And Weaknesses
Strengths: The connection of relaxed state-adversarial MDPs with domain randomization is very interesting, and it provides an average-case robustness metric which might help mitigate over-conservatism in adversarial training. Numerical evaluations on MuJoCo tasks show that the proposed method outperforms prior methods in cases with perturbation of state observations as well as cases with adversarial attacks on their value functions.

Weaknesses: The proofs of Theorems 1 and 2 may be correct, but the bounds are a bit strange and they may not correspond to the desired performance of the algorithm. In particular, the current policy π does not appear in the LHS of either (7) or (9). This means that one can set π̃ = π in (7) and (9) and obtain tighter bounds without their last terms. This lack of dependence on the current policy is likely different from what has been implemented in the algorithm. One can also observe that, although Theorem 1 is mostly the same as Theorem 1 in Jiang et al., 2021, the LHS of the associated equation in Jiang et al., 2021 corresponds to the adversary for the current policy π, while the LHS of (7) only corresponds to the adversary of the new policy π̃. There is no discussion and/or numerical evaluation of how good the lower bound of Theorem 2 is. As mentioned in the previous point, the last term in (9) is likely redundant by setting π̃ = π. The tightness of the bounds is questionable. It is not clear if the step of maximizing the lower bound of the worst-case return really contributes to better performance. What if one just fixed α without the upper-level task?

Clarity, Quality, Novelty And Reproducibility
The paper is easy to follow with proper explanations in general. There is some inconsistent capitalization, mixing $P^{\pi}_\epsilon$ and $p^{\pi}_\epsilon$. In the proof of Theorem 1 in A.5, the notation for the reward function is incorrect.
ICLR
Title Revisiting Domain Randomization Via Relaxed State-Adversarial Policy Optimization Abstract Domain randomization (DR) is widely used in reinforcement learning (RL) to bridge the gap between simulation and reality through maximizing its average returns under the perturbation of environmental parameters. Although effective, the methods have two limitations: (1) Even the most complex simulators cannot capture all details in reality due to finite domain parameters and simplified physical models. (2) Previous methods often assume that the distribution of domain parameters is a specific family of probability functions, such as a normal or a uniform distribution, which may not be correct. To enable robust RL via DR without the aforementioned limitations, we rethink DR from the perspective of adversarial state perturbation, without the need for re-configuring the simulator or relying on prior knowledge about the environment. We point out that perturbing agents to the worst states during training is naı̈ve and could make the agents over-conservative. Hence, we present a Relaxed State-Adversarial Algorithm to tackle the over-conservatism issue by simultaneously maximizing the average-case and worst-case performance of policies. We compared our method to the state-of-the-art methods for evaluation. Experimental results and theoretical proofs verified the effectiveness of our method. 1 INTRODUCTION Most reinforcement learning (RL) agents are trained in simulated environments due to the difficulties of collecting data in real environments. However, the domain shift, where the simulated and real environments are different, could significantly reduce the agents’ performance. To bridge this “reality gap”, domain randomization (DR) methods perturb environmental parameters (Tobin et al., 2017; Rajeswaran et al., 2016; Jiang et al., 2021), such as the mass or the friction coefficient, to simulate the uncertainty in state transition probabilities and expect the agents to maximize the return over the perturbed environments. Despite its wide applicability, DR suffers from two practical limitations: (i) DR requires direct access to the underlying parameters of the simulator, and this could be infeasible if only off-the-shelf simulation platforms are available. (ii) To enable sampling of environmental parameters, DR requires a prior distribution over the feasible environmental parameters. However, the design of such a prior typically relies on domain knowledge and could significantly affect the performance in real environments. To enable robust RL via DR without the above limitations, we rethink DR from the perspective of adversarial state perturbation, without the need for re-configuring the simulator or relying on prior knowledge about the environment. The idea is that perturbing the transition probabilities can be equivalently achieved by imposing perturbations upon the states after nominal state transitions. To substantiate the idea of state perturbations, a simple and generic approach from the robust optimization literature (Ben-Tal & Nemirovski, 1998) is taking a worst-case viewpoint and perturbing the states to nearby states that have the lowest long-term expected return under the current policy (Kuang et al., 2021). While being a natural solution, such a worst-case strategy could suffer from severe over-conservatism. 
We identify that the over-conservative behavior results from the tight coupling between the need for temporal difference (TD) learning in robust RL and the worst-case operation of state perturbation. Specifically: (1) In robust RL, the value functions are learned with the help of bootstrapping in TD methods since finding nearby worst-case states via Monte-Carlo sampling is NP-hard (Ho et al., 2018; Chow et al., 2015; Behzadian et al., 2021). (2) Under the worst-case state perturbations, TD methods would update the value function based on the local minimum within a neighborhood of the nominal next state and is, therefore, completely unaware of the value of the nominal next state. As a result, the learner could fail to identify or explore those states with potentially high returns. To further illustrate this phenomenon, we consider a toy grid world example of finding the shortest path toward the goal, as shown in Figure 1(a). Although the goal state has a high value, the TD updates cannot propagate the value to other states since all nominal state transitions toward the goal state are perturbed away under the worst-case state-adversarial method. What’s even worse, the agent ultimately learns to move toward the trap state due to the compounding effect of TD updates and worst-case state-adversarial perturbations. Notably, in addition to the grid world environment, such trap terminal states also commonly exist in various RL problems, such as the locomotion tasks in MuJoCo. As a result, there remains one critical unanswered question in robust RL: how to fully unleash the power of the state-adversarial model in robustifying RL algorithms without suffering from over-conservatism? To answer this question, we introduce relaxed state-adversarial perturbations. Specifically: (1) Instead of taking a pure worst-case perspective, we simultaneously consider both the average-case and worst-case scenarios during training. By incorporating the average-case scenarios, the TD updates can successfully propagate the values of those potentially high-return states to other states and thereby prevent the over-conservative behavior (Figure 1(b)). (2) To substantiate the above idea, we introduce a relaxed state-adversarial transition kernel, where the average-case environment can be easily represented by the interpolation of the nominal and the worst-case environments. Under this new formulation of DR, each interpolation coefficient corresponds to a distribution of state adversaries. (3) Besides, based on this formulation, we theoretically quantify the performance gap between the average-case and the worst-case environments; and prove that maximizing the averagecase performance can also benefit the worst-case performance. (4) Accordingly, we present Relaxed state-adversarial policy optimization, a bi-level framework that optimizes the rewards of the two cases alternatively and iteratively. One level updates the policy to maximize the average-case performance, and the other updates the interpolation coefficient of the relaxed state-adversarial transition kernel to increase the lower bound of the return of the worst-case environment. 2 RELATED WORK Robust Markov Decision Process (MDP) and Robust RL. Robust MDP aims to maximize rewards in the worst situations if the testing environment deviates from the training environment (Nilim & El Ghaoui, 2005; Iyengar, 2005; Wiesemann et al., 2013). Due to the large searching space, the complexity of robust MDP grows rapidly when the dimensionality increases. 
Therefore, Tamar et al. (2014) developed an approximated dynamic programming to scale up the robust MDPs paradigm. Roy et al. (2017) extended the method to nonlinear estimation and guaranteed the convergence to a regional minimum. Afterward, the works of (Wang & Zou, 2021; Badrinath & Kalathil, 2021) study the convergence rate when applying function approximations under assumptions. Derman et al. (2021) showed that the regularized MDPs are a particular instance of robust MDPs with uncertain rewards. They solved regularized MDPs rather than robust MDPs to reduce computation complexity. Grand-Clément & Kroer (2020) developed efficient proximal updates to solve the distributionally robust MDP via gradient descent and improved the convergence rate. However, although several approximations were presented, such model environments are still too restrictive, and they cannot be used to solve real-world problems. Adversary in Observations. Even a small perturbation to observations may significantly degrade agents’ performance because deep neural networks are vulnerable to inputs constructed by adversaries (Huang et al., 2017). Therefore, methods were presented to train agents under environments with adversarial attacks to improve their robustness (Kos & Song, 2017; Pattanaik et al., 2018). To guarantee a lower-bound performance, the works of (Lütjens et al., 2020; Wang et al., 2019) adopted the idea of certified defense used in classification problems. When making discrete actions, agents are certifiably robust to adversaries in observation within the ✏ distance (Lp-norm). Since most real-world problems are continuous, there were also methods (Weng et al., 2019; Zhang et al., 2020; Oikarinen et al., 2021; Zhang et al., 2021) presented to improve agents’ robustness for continuous actions. Domain Randomization. Environments can induce the uncertainty of transition probabilities. To simulate this circumstance, one can perturb the environmental parameters of a simulator to reasonably change transition probabilities when training agents (Huang et al., 2021; Tobin et al., 2017; Jiang et al., 2021; Igl et al., 2019; Cobbe et al., 2019). Specifically, Tobin et al. (2017) randomly sampled environmental variables and optimized the agents’ average reward. Given that a significant perturbation may fail the training, Cobbe et al. (2019) increased the level of difficulty step by step when training agents to improve their average rewards. Jiang et al. (2021) further considered the expected return in the optimal case and introduced monotonic robust policy optimization to maximize the average-case and worst-case returns simultaneously. Since perturbing transition probabilities through environmental parameters demands prior knowledge, Kuang et al. (2021) transferred states to the nearby local minimum based on gradients obtained from the value function to imitate environmental disturbance. Igl et al. (2019) injected selective noise based on a variational information bottleneck and value networks to prevent models from overfitting the training environment. The regularization helps agents resist the uncertainty of state transition probabilities. Our method perturbs states through the gradients of the value function, as Kuang et al. (2021) did. However, pushing states toward the nearby local minimum will make agents over-conservative because they consider only the worst-case scenarios. 
We present the relaxed state adversarial perturbation and optimize both the average-case and worst-case environments to overcome this problem. 3 PRELIMINARIES A robust Markov decision process (robust MDP) is characterized by a tuple (S,A,P, R, µ, ), where S is the state space, A is action space, P is the uncertainty set that contains all possible transition kernels, R : S ⇥A ! [ Rmax, Rmax] is the reward function, µ is the initial state distribution, and 2 (0, 1) is the discount factor. Let P0 2 P denote the nominal transition kernel, which characterizes the transition dynamics of the nominal environment without perturbation. We define the total expected return under a policy ⇡ and a transition kernel P 2 P as J(⇡|P ) := Es0⇠µ,at⇠⇡(·|st),st+1⇠P (·|st,at) 1X t=0 tR(st, at) . (1) For ease of exposition, we also define the value function under policy ⇡ and transition kernel P as V ⇡P (s) := Eat⇠⇡(·|st),st+1⇠P (·|st,at) hP1 at=0 tR(st, at)|s0 = s i . To learn a policy in a robust MDP, the DR approaches are built on two major design principles: (1) Construction of uncertainty set: DR presumes that one could have access to the environment parameters of the simulator. The uncertainty set P is constructed by specifying the possible range of one or multiple environment parameters, typically based on some domain knowledge. (2) Average-case perspective: DR resorts to maximizing the average performance with respect to some pre-configured distribution D over the uncertainty set P , i.e., EP⇠D[J(⇡|P )]. 4 DOMAIN RANDOMIZATION VIA RELAXED STATE-ADVERSARY 4.1 CONNECTING DOMAIN RANDOMIZATION AND STATE PERTURBATION Conventional DR methods enforce attacks on state transitions by perturbing the environment parameters of a simulator. This goal can be achieved by perturbing the state after each nominal transition (Kuang et al., 2021): Let (s, a) be some state-action pair, and : S ! S be a state perturbation function. In a nominal environment, the probability of the transition to some state s0 under s, a is P (s0|s, a). Under the state perturbation , the probability becomes P ( (s0)|s, a). However, this state adversarial attack is too effective since a value function considers the expected future return, and a perturbation to an early state may significantly influence the later states. The over-conservatism problem therefore occurs. We present a relaxed state-adversarial policy optimization to overcome the problem. We also prove that the relaxed MDP enjoys two main properties under relaxation: (1) it stands for the average performance of the uncertainty set; (2) it guarantees the improvement the performance of the worst-case MDP. Further, we prove that a specific average-case MDP corresponds to a relaxation parameter. Hence, we propose an algorithm for adapting the relaxation parameters during training. 4.2 STATE-ADVERSARIAL MDPS AND UNCERTAINTY SETS State-adversarial attacks perturb the current states to neighboring states with the lowest values. This perturbation process can be captured by a state-adversarial transition kernel, which connects the nominal MDP and the resulting state-adversarial MDP. For ease of exposition, for each state s 2 S , we define N (s) := {s0|d(s, s0) } to be the -neighborhood of s, where d(s, s0) can be any distance metric. In this study, we use L1-norm. Definition 1 (State Perturbation Matrix). 
Given a policy ⇡ and a perturbation parameter 0, the state perturbation matrix Z⇡ with respect to ⇡ is defined as follows: for each pair of states i, j 2 S , Z⇡ (i, j) := ⇢ 1, if j = argmins2N (i) V ⇡(s), 0, otherwise. (2) The justifications for choosing the above surrogate perturbation model are two-fold: (1) The model can be interpreted as constructing adversarial examples for the true states. (2) The perturbation model is closely related to the perturbation of environment parameters, which serve as the standard machinery in the canonical DR formulation, as described in (Kuang et al., 2021). Remark 1. In continuous state space, the argmin in Equation 2 can be computed by adapting the fast gradient sign method (FGSM) (Goodfellow et al., 2014). Let V be a value function (i.e., network) with parameter , s be a state, and ✏ be the strength of perturbation. FGSM finds the perturbed state (s) = s ✏ · sign(rsV ( , s)) that has the minimum value, where ||s (s)||1 ✏, and the gradient at s is computed using back-propagation. Definition 2 (State-Adversarial MDP). For any policy ⇡, the corresponding state-adversarial MDP with respect to ⇡ is defined as a tuple (S,A, P⇡ , R, µ, ), where the state-adversarial transition kernel P⇡ is defined as P⇡ (·|s, a) := [Z⇡ ]>P0(·|s, a), 8(s, a) 2 S ⇥A . (3) Recall that P0 is the nominal transition kernel. We use the notation P⇡ = [Z⇡ ]>P0 in the later paragraphs for simplicity. Note that the state adversarial transition matrix Z⇡ depends on the strength of perturbation . Each perturbation radius results in a unique state-adversarial MDP P⇡ . Remark 2. The state-adversarial MDP defined in Definition 2 involves perturbation of the true states, which is fundamentally different from the perturbation of observations (Zhang et al., 2020). Definition 3 (Uncertainty Set). Given a radius ✏ > 0, the uncertainty set induced by state-adversarial perturbations, denoted by P⇡✏ , is defined as P⇡✏ := {P⇡ : P⇡ = [Z⇡ ]>P0 and ✏}. (4) The adversarial attack transits agents toward low-value states. Agents trained using this state adversarial MDP would prevent themselves from falling into the worst situation (Kuang et al., 2021). However, a large ✏ will make agents too conservative and fail to reach any goal state because its value cannot be propagated to neighboring states by the TD updates (Figure 1). Although using a small ✏ can ease the problem, agents would completely omit the risks outside the bounding area. Besides, this strategy is unachievable in a discrete environment due to the lower-bound value of ✏. For example, the agent’s movement in the grid world is one hop and cannot be reduced. Lemma 1 (Monotonicity of Average Value in Perturbation Strength). Under the setting of state adversarial MDP, the value of the local minimum monotonically decreases as the bounded radius increases. Let x be a positive real number. The reward function J satisfies J(⇡|P⇡ ) J(⇡|P⇡ +x), 8⇡. (5) The proof is in Appendix A.3. Notably, Lemma 1 indicates that among the transition kernels in the uncertainty set P⇡✏ , the worst-case occurs when = ✏. 4.3 RELAXED STATE-ADVERSARIAL MDPS We present a relaxation framework to address the over-conservatism issue. To begin with, we consider a relaxation on the state-adversarial transition kernel as follows: Relaxed state-adversarial transition kernel. 
Given ✏ > 0 and ↵ 2 [0, 1], the ↵-relaxed stateadversarial transition kernel is defined as a convex combination of the nominal and the stateadversarial transition kernels, i.e., P⇡,↵✏ (·|s, a) = ↵P0(·|s, a) + (1 ↵)P⇡✏ (·|s, a). (6) Connecting relaxed state-adversarial MDPs with domain randomization. DR methods demand a prior distribution for computing the average case performance. Let D be a distribution over the uncertainty set P⇡✏ . In the following, we show that applying DR with respect to D is equivalently cast optimizing an objective under a relaxed state-adversarial transition kernel. Lemma 2 (Relaxation parameter ↵ as a prior distribution D in domain randomization). For any distribution D over the state-adversarial uncertainty set P⇡✏ , there must be an ↵ 2 [0, 1] such that EP⇠D[J(⇡|P )] = J(⇡|P⇡,↵✏ ). The proof is in Appendix A.4. It is worth noting that different values of ↵ represent different prior assumptions. For example, ↵ = 1 implies that the prior probability of nominal MDP is 1, whereas ↵ = 0 indicates that the prior probability of the worst-case MDP is 1. In other words, we can control the value of ↵ to represent different distributions D and train the policies under various environments. To achieve this goal, we quantify the gap between the average performance EP⇠D[J(⇡̃|P )] and the worst case performance J(⇡̃|P⇡✏ ) when updating the current policy ⇡ to a new policy ⇡̃, and then apply an optimization technique to maximize both of them. One naı̈ve bound is as follows. Theorem 1 (A naı̈ve connection between the average-case and the worst-case returns). Given a nominal MDP with state adversaries, when updating the current policy ⇡ to a new policy ⇡̃, the following bound holds (Jiang et al., 2021): J(⇡̃|P⇡✏ ) EP⇠D[J(⇡̃|P )] 2Rmax EP⇠D[dTV(P⇡✏ kP )] (1 )2 4Rmax dTV(⇡, ⇡̃) (1 )2 , (7) where Rmax is the maximum reward, dTV (⇡, ⇡̃) indicates the total variation divergence between ⇡ and ⇡̃, and P⇡✏ is the worst state-adversarial transition kernels. Theorem 1 indicates that the gap between the average- and the worst- case performance can be expressed using the MDP shift EP⇠D[dTV(P⇡✏ kP )] and the policy evolution dTV (⇡, ⇡̃). The proof is in Appendix A.5. Note that the bound in Theorem 1 is loose because the value on the right hand side (RHS) of Equation 7 can be tiny. Specifically, the transition kernel probability shift EP⇠D[dTV(P⇡✏ kP )] is multiplied by the total maximum return Rmax1 , and the additional denominator 1 makes the value even smaller since is usually set to 0.99 in RL applications. As a result, the bound can be meaningless unless the worst-case MDP P⇡✏ is very close to the average MDP. Since state perturbation only perturbs states to nearby states, we consider the smoothness of the reward function and transition property to build a tight connection between the average-case and the worst-case returns. Specifically, Lipschitz continuity in reward function has been widely used in the theory of RL (Fehr et al., 2018; Asadi et al., 2018; Ling et al., 2016). The smoothness of the transition kernel also holds in most of the environments (Shen et al., 2020; Lakshmanan et al., 2015). For example, in grid-world, the next state must be adjacent to the current state; and in MuJoCo, the poses of consecutive periods are similar, no matter what the state-action pairs are considered. Formally, we define this smoothness property of transition kernels as: Definition 4 ( -Smooth Transition Kernel in State). Let P be a transition kernel and be a positive constant. 
P is a -smooth transition kernel in state if ks s0k , (8) for all a and for all s, s0 with P (s0|s, a) > 0. With the assumption of Lipschitz continuity in reward function and smoothness of transition kernel, we arrive at the following bound: Theorem 2 (Connecting Worst-Case and Average-Case Returns). Given a nominal MDP with two properties: (1) Reward function of corresponding Markov Reward Process (MRP) with respect to any policy is an Lr-Lipschitz function. (2) Nominal transition kernel P0 has the smooth transition property , where ks s0k2 , 8a and 8P0(s0|s, a) > 0. Then, after updating the current policy ⇡ to a new policy ⇡̃, the following bound holds: J(⇡̃|P⇡✏ ) J(⇡̃|P⇡,↵✏ ) 4 (✏+ )Lr↵ (1 )3 4( (✏+ )Lr + (1 )2Rmax)dTV(⇡, ⇡̃) (1 )3 , (9) where dTV (⇡, ⇡̃) is total variation divergence between ⇡ and ⇡̃, P⇡,↵✏ is a relaxed state-adversarial transition kernel, and P⇡✏ is a worst-case state-adversarial transition kernel. The proof is provided in Appendix A.6. Notably, Theorem 2 holds for any relaxation parameter ↵ 2 [0, 1]. We now briefly discuss the technical challenges in the proof: (1) Propagation of state perturbations across time: The main difficulty lies in the fact that the difference of trajectories under different MDPs would increase in a rather nonlinear and complex manner as time evolves. (2) Quantifying the difference in rewards among trajectories generated under different transition kernels: To measure the difference in rewards under different MDPs, it is necessary to consider not only the probability difference at time t but also the difference in rewards at different states. Despite the above challenges, our proof uses the finding that the difference of initial probability of state under two MDPs P⇡✏ and P⇡,↵✏ at time step t can be quantified as ↵ t, where 0 t 1. Then under the smoothness conditions of the reward function and the transition matrix, we are able to characterize a tight bound between the average-case and the worst-case performance. The intuition of Theorem 2 can be expressed using the terms on the RHS of Equation 9. The first term is the average performance of all MDPs in the uncertainty set. The second term penalizes the large value of ↵ because it implies that the relaxed MDP is close to the nominal environment. In other words, we expect the average case performance to be high while pushing the uncertainty set close to the worst-case MDP. Finally, the third term prevents a significant update in a single step by reducing the total variation divergence dTV (⇡, ⇡̃). 4.4 ONLINE ADAPTATION OF THE RELAXATION PARAMETER We leverage Theorem 2 to address both the average-case and the worst-case performance. Specifically, we present a bi-level approach to maximize the lower-bound of the worst-case performance (i.e., RHS of Theorem 2) since the unknowns ↵ and ⇡ are correlated. The two tasks are optimized alternatively and iteratively. Details are as follows: • Lower-level task for average-case return: On the lower level, we improve the policy by optimizing the objective J(⇡✓t |P ⇡✓t 1 ,↵t ✏ ) under a fixed relaxation parameter ↵t. This can be done by using any off-the-shelf RL algorithm (e.g., PPO with a clipped objective). • Upper-level task for worst-case return: On the upper level, we design a meta objective Jmeta(↵t) to represent the lower bound of the worst case performance (RHS of Equation 9). Hence, the task aims to find a relaxation parameter ↵t that can maximize Jmeta(↵t). 
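To make the upper-level task concrete, the sketch below shows one possible form of the single relaxation-parameter update. It only illustrates the mechanics of a projected gradient step on α under the bound of Theorem 2 as reconstructed earlier in this document; the gradient estimate d_avg_return_d_alpha (obtained, e.g., from the online cross-validation scheme referenced in the next paragraph) and all other names and constants are illustrative assumptions rather than our actual implementation.

```python
def update_alpha(alpha, d_avg_return_d_alpha, gamma, eps, sigma, lipschitz_r, lr=1e-3):
    """One projected gradient-ascent step on the relaxation parameter alpha.

    The alpha-gradient of the lower bound in Equation 9 has two parts:
      (i)  the gradient of the average-case return J(pi | P^{pi,alpha}_eps),
           which must be estimated from samples and is passed in here as
           d_avg_return_d_alpha (an assumed, externally computed scalar), and
      (ii) the closed-form gradient of the alpha-penalty term,
           -4 * gamma * (eps + sigma) * lipschitz_r / (1 - gamma)**3.
    The policy-shift term of the bound does not depend on alpha and drops out.
    """
    penalty_grad = 4.0 * gamma * (eps + sigma) * lipschitz_r / (1.0 - gamma) ** 3
    grad = d_avg_return_d_alpha - penalty_grad
    return min(1.0, max(0.0, alpha + lr * grad))   # keep alpha inside [0, 1]
```

A larger estimated benefit of moving toward the nominal environment pushes α up, while the penalty term pulls it back toward the worst case, mirroring the trade-off described after Lemma 2.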
To enable a stable training, we iteratively update ↵t by applying the online cross-validation algorithm (Sutton, 1992). Both the lower and upper level tasks aim to increase the lower bound of the worst-case performance J(⇡✓t |P ⇡✓t 1 ✏ ) (Equation 9). In the lower-level, a constant relaxation parameter ↵t represents a specific distribution D. It seeks to maximize the average return over all environments in the uncertainty set following distribution D. In the upper-level, the optimization adjusts ↵ to maximize this lower bound. On one hand, increasing ↵t improves the average performance J(⇡✓t |P ⇡✓t 1 ,↵t ✏ ) since the average-case moves toward a nominal environment, yet the price is increasing the MDP shift (i.e., the second term of RHS in Equation 9). On the other hand, decreasing ↵ changes the performance and the penalty oppositely. Since ⇡ is weak initially and its performance gradually improves, the meta objective optimization tends to decrease and then increase ↵ during training. Algorithm 1 illustrates our implementation. We first update the policy ⇡✓t to maximize the averagecase return J(⇡✓t |P ⇡✓t 1 ,↵t ✏ ) using the proximal policy optimization (PPO). Afterward, we update the relaxation parameter ↵ to ensure that the worst-case return is higher than a specific bound (Equation 9). Note that samples used in the two steps are different (Lines 3 and 6 of Algorithm 1) because the meta objective optimization is an online method. In addition, we chose PPO as a base algorithm since it prevents the model from being updated significantly in a single step. It helps to control the penalty term dTV (⇡, ⇡̃) in Theorem 2. The implementation details are provided in Appendix A.7. Algorithm 1: Relaxed State-Adversarial Policy Optimization Input :MDP (S,A, P0, r, ), Objective function L, step size parameter ⌘, number of iterations T , P0 is the nominal transition kernel, ✏-Neighborhood 1 Initialize the policy ⇡✓0 for t = 0, . . . , T 1 do 2 Sample the tuple {si, ai, ri, s0i} Tupd i=1, where a 0 i ⇠ ⇡✓t(·|s0i), and s0i ⇠ P0(·|si, ai) 3 Evaluate J(⇡✓t |P ⇡✓t 1 ,↵t ✏ ) 4 Update the policy to ⇡✓t+1 by applying multi-step SGD to the objective function as PPO 5 Sample the tuple {si, ai, ri, s0i} T 0upd i=1, where a 0 i ⇠ ⇡✓t+1(·|s0i), and s0i ⇠ P0(·|si, ai) 6 Update the relaxation parameter to ↵t+1 via one SGD update with respect to the meta-objective 7 end 5 EXPERIMENTAL RESULTS AND EVALUATIONS We conducted two experiments on Mujoco (Todorov et al., 2012) to evaluate the performance of our relaxed state adversarial policy optimization (RAPPO). All the baselines and our method were implemented on the PPO (Schulman et al., 2017), and the default training parameters were used. In addition, the results were averaged from five different runs/seeds. Robustness against Environmental Adversaries. We compared our RAPPO with the latest DR method, MRPO (Jiang et al., 2021), to evaluate its robustness against the uncertainty of environmental parameters1. Agents trained using the two methods were evaluated in the environments, in which the size and gravity were drifted in the range of 0.6 - 1.4. To simulate the situation that domain knowledge is unavailable, during training, MRPO perturbed mass and friction in the range of 0.8 - 1.2, and our RAPPO attacked the states by its value function. Figure 2 shows the subtractions of the rewards of the two methods. As can be seen, our RAPPO outperformed MRPO since state adversaries were more general than environmental adversaries. 
Agents trained by MRPO could perform poorly when the perturbations in the training and testing environments were different. Robustness Against States Adversaries. We compared our RAPPO with SCPPO (Kuang et al., 2021) to evaluate its robustness against state adversaries. Both of the methods perturb states to improve agents’ robustness. We also included vanilla PPO in the experiment because it is the base algorithm of RAPPO and SCPPO. To achieve a fair comparison, the parameters used in RAPPO and SCPPO were the same. Specifically, we set ✏ to 0.015, 0.002, 0.03, 0.001, and 0.005 to the environments of HalfCheetah-v2, Hopper-v2, Ant-v2, Walker-v2, and Humanoid2d-v2, respectively. The parameters were chosen according to the variance of actions in the environments. 1We obtained the official implementation of MRPO from http://proceedings.mlr.press/v139/jiang21c.html and used their default parameter setting. Table 1 shows the testing results. We attacked the agents using their respective value functions under multiple strengths. Specifically, we repeated the experiments from 5 different seeds and generated 50 trajectories for each seed from different initial states for evaluation. The means and standard deviations of the rewards were reported. Clearly, the results fulfilled Lemma 1, where agents’ performance decreased as the strength of attack increased. In addition, our RAPPO was competitive to PPO and SCPPO in nominal environments, and its performance decreased the slowest as the strength of attack increased. It deserves noting that the attacks in the last two columns of Table 1 were stronger than that of the worst-case. Our RAPPO performed the best in the environments. Extending SAPPO Using Relaxed State Adversaries. While our RAPPO successfully improves the robustness of agents against state adversaries, a classical method, SAPPO (Zhang et al., 2020), can help agents against the perturbation of state observations. We thus extended SAPPO by adopting our relaxed state adversarial attacks during training and evaluated its effectiveness. Similarly, we compared the methods on the trajectories of 5 seeds and 50 initial states. Table 2 shows the results. As indicated, the extended RA SAPPO outperformed SAPPO in most of the environments, particularly under strong attacks. Steady Improvements of the Average and Worst Case Environments. We apply a bi-level approach to optimize the average and worst-case environments during training. To verify the feasibility of this approach, we evaluated the agents’ performance under these two cases during training. To determine the worst-case result, we generated 50 trajectories from different initial states, perturbed states with the same strength as the training ✏, and then averaged the rewards. In contrast, the average-case result was determined from 50 initial states and 10 different perturbation strengths, which were uniformly distributed between 0 and ✏. In total, the rewards of 50 ⇥ 10 trajectories were averaged. Figure 3 shows that our RAPPO can steadily improve the average-case performance without sacrificing the worst-case performance. Note that the high variance of the average-case rewards is reasonable because of different adversarial strengths. The value of the relaxation parameter ↵. Our meta-objective optimization determines the relaxation parameter ↵ (Equation 6) to control the strengths of state adversaries during training. While ↵ is unknown, an intuitive idea is to consider ↵ a hyper-parameter and let users specify the value. 
However, we point out that the value of ↵ should vary at different training stages since agents are weak initially and can perform well after training. To verify that a dynamic ↵ is over a constant ↵ (i.e., RAPPO-C), we evaluated the performance of agents under state perturbed environments. In the experiments, we set ↵ = 0.5 for RAPPO-C since it is in the middle of nominal and worst-case environments. The remaining parameters between the methods were exactly the same. As indicated in Table 1, RAPPO outperformed RAPPO-C without a doubt. We also refer readers to Appendix A.8 for the dynamics of ↵ during training. 6 CONCLUSIONS We have presented a relaxed state adversarial policy optimization to improve the robustness of agents against the uncertainty of environments. Compared to the methods in DR, we perturbed states using the adversarial attack so as to decouple randomization from simulators. Neither prior knowledge of selecting environmental parameters nor prior assumption of parameter distribution are needed. In addition, we introduced a relaxation strategy to tackle the over-conservative problem caused by state adversarial attacks. Our policy optimization maximizes rewards in the average-case while holding the lower-bound rewards in the worst-case environments simultaneously. Experiment results and theoretical proofs demonstrate the effectiveness of our method. Limitations and Future Work. Our relaxation method is state-independent, in which the value of ↵ is adjusted according to the overall performance of policy. Since the degrees of difficulty vary from states to states, it will be interesting to investigate the state-dependent relaxation method. In addition, we currently assume that each dimension of states is equally important, which may not be the case. We will also explore the weight of each dimension when perturbing states in the future.
1. What is the main issue addressed in the paper regarding adversarial training methods?
2. What are the strengths and weaknesses of the proposed approach to address the identified issue?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the specific questions raised by the reviewer regarding the paper's presentation and methodology?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper identifies an issue they call "over-conservatism" with adversarial training methods, which causes more conservative behavior than is desirable. This is addressed by instead optimizing for a mixture of average-case and worst-case performance.

Strengths And Weaknesses
Strengths: The conservatism (if I've understood it) that they identified can be a bottleneck to performance in some settings. The approach is relatively simple. The theoretical results appear correct.

Weaknesses: It is unclear what is meant exactly by over-conservatism. (A) It could mean a specific sort of feedback effect between the TD updates and the Q-values, as shown in Figure 1 (that is, the policy training doesn't find the policy the designer intended). (B) It could mean that the policy is "conservative" to a point that it is bad (that is, the policy training worked, but the conservative policy the designer intended to find was too conservative, so it does not perform well). Other than Figure 1, the rest of the paper can be viewed as B, and it seems that A is not mentioned again apart from Figure 1. If A is the point, then I would want to see empirical evidence that this happens by default in the experimental domains. Figure 1 is difficult to understand and needs a much more mechanistic explanation if it is going to be a core part of the argument. Once I understood the claim, it felt like I had to rederive the result on my own rather than it being explained. It is not initially clear how errors would form in the first place, or why those errors would get bigger over time. Even now, it is not clear why the worst point would be exempt from those errors making its valuation even worse.

The evaluation is empirically weak. The policy was only compared to other policies which were not robustified along all of the dimensions their method was able to be made robust to. This is justified by saying that the other methods would require instrumentation of the simulator in order to modify all dimensions, but it seems that their method would also require a slightly different sort of instrumented simulator, and that the other methods could be trivially modified to use that sort of instrumentation. More specifically, it seems that the authors want to disallow things that modify the transition function, but allow for modifications of the state. Since many simulators don't have a reason to allow arbitrary state modifications, it is an awkward line to draw. But given that we are drawing that line, it would make the most sense to benchmark against methods which can also modify the state in the same way. Since adding a perturbation to the state is a special case of modifying the transition function, all the other considered methods should already work in this setting, if configured properly. Thus allowing the other methods to make state perturbations would be the appropriate baseline.

The results, as given in Table 1, are very difficult to interpret given the highlighting. Results are highlighted as "best" which are not statistically distinguishable from the others, and so it makes the results at a glance look more impressive than they are. The theoretical results are straightforward. While they are nice to have, and good due diligence, I would not consider them as grounds for acceptance on their own.

Clarity, Quality, Novelty And Reproducibility
The presentation is unclear in a few places, for example: The definition of the MDP in this paper is changed to let P be a set of transition functions rather than a fixed transition function.
It is odd to call this an MDP, since it is closer to a robust MDP. I don't know what is meant by the phrase "it stands for the average performance of the uncertainty set", nor by "it guarantees the improvement of the worst-case MDP". What is the "worst-case MDP", and what would it mean to "improve" it?
ICLR
Title Revisiting Domain Randomization Via Relaxed State-Adversarial Policy Optimization Abstract Domain randomization (DR) is widely used in reinforcement learning (RL) to bridge the gap between simulation and reality by maximizing average returns under the perturbation of environmental parameters. Although effective, the methods have two limitations: (1) Even the most complex simulators cannot capture all details in reality due to finite domain parameters and simplified physical models. (2) Previous methods often assume that the distribution of domain parameters is a specific family of probability functions, such as a normal or a uniform distribution, which may not be correct. To enable robust RL via DR without the aforementioned limitations, we rethink DR from the perspective of adversarial state perturbation, without the need for re-configuring the simulator or relying on prior knowledge about the environment. We point out that perturbing agents to the worst states during training is naïve and could make the agents over-conservative. Hence, we present a Relaxed State-Adversarial Algorithm to tackle the over-conservatism issue by simultaneously maximizing the average-case and worst-case performance of policies. We compared our method to the state-of-the-art methods for evaluation. Experimental results and theoretical proofs verified the effectiveness of our method. 1 INTRODUCTION Most reinforcement learning (RL) agents are trained in simulated environments due to the difficulties of collecting data in real environments. However, the domain shift, where the simulated and real environments are different, could significantly reduce the agents’ performance. To bridge this “reality gap”, domain randomization (DR) methods perturb environmental parameters (Tobin et al., 2017; Rajeswaran et al., 2016; Jiang et al., 2021), such as the mass or the friction coefficient, to simulate the uncertainty in state transition probabilities and expect the agents to maximize the return over the perturbed environments. Despite its wide applicability, DR suffers from two practical limitations: (i) DR requires direct access to the underlying parameters of the simulator, and this could be infeasible if only off-the-shelf simulation platforms are available. (ii) To enable sampling of environmental parameters, DR requires a prior distribution over the feasible environmental parameters. However, the design of such a prior typically relies on domain knowledge and could significantly affect the performance in real environments. To enable robust RL via DR without the above limitations, we rethink DR from the perspective of adversarial state perturbation, without the need for re-configuring the simulator or relying on prior knowledge about the environment. The idea is that perturbing the transition probabilities can be equivalently achieved by imposing perturbations upon the states after nominal state transitions. To substantiate the idea of state perturbations, a simple and generic approach from the robust optimization literature (Ben-Tal & Nemirovski, 1998) is to take a worst-case viewpoint and perturb the states to nearby states that have the lowest long-term expected return under the current policy (Kuang et al., 2021). While being a natural solution, such a worst-case strategy could suffer from severe over-conservatism.
We identify that the over-conservative behavior results from the tight coupling between the need for temporal difference (TD) learning in robust RL and the worst-case operation of state perturbation. Specifically: (1) In robust RL, the value functions are learned with the help of bootstrapping in TD methods since finding nearby worst-case states via Monte-Carlo sampling is NP-hard (Ho et al., 2018; Chow et al., 2015; Behzadian et al., 2021). (2) Under the worst-case state perturbations, TD methods would update the value function based on the local minimum within a neighborhood of the nominal next state and are, therefore, completely unaware of the value of the nominal next state. As a result, the learner could fail to identify or explore those states with potentially high returns. To further illustrate this phenomenon, we consider a toy grid world example of finding the shortest path toward the goal, as shown in Figure 1(a). Although the goal state has a high value, the TD updates cannot propagate the value to other states since all nominal state transitions toward the goal state are perturbed away under the worst-case state-adversarial method. Even worse, the agent ultimately learns to move toward the trap state due to the compounding effect of TD updates and worst-case state-adversarial perturbations. Notably, in addition to the grid world environment, such trap terminal states also commonly exist in various RL problems, such as the locomotion tasks in MuJoCo. As a result, there remains one critical unanswered question in robust RL: how to fully unleash the power of the state-adversarial model in robustifying RL algorithms without suffering from over-conservatism? To answer this question, we introduce relaxed state-adversarial perturbations. Specifically: (1) Instead of taking a pure worst-case perspective, we simultaneously consider both the average-case and worst-case scenarios during training. By incorporating the average-case scenarios, the TD updates can successfully propagate the values of those potentially high-return states to other states and thereby prevent the over-conservative behavior (Figure 1(b)). (2) To substantiate the above idea, we introduce a relaxed state-adversarial transition kernel, where the average-case environment can be easily represented by the interpolation of the nominal and the worst-case environments. Under this new formulation of DR, each interpolation coefficient corresponds to a distribution of state adversaries. (3) In addition, based on this formulation, we theoretically quantify the performance gap between the average-case and the worst-case environments, and prove that maximizing the average-case performance can also benefit the worst-case performance. (4) Accordingly, we present Relaxed state-adversarial policy optimization, a bi-level framework that optimizes the rewards of the two cases alternately and iteratively. One level updates the policy to maximize the average-case performance, and the other updates the interpolation coefficient of the relaxed state-adversarial transition kernel to increase the lower bound of the return of the worst-case environment. 2 RELATED WORK Robust Markov Decision Process (MDP) and Robust RL. Robust MDP aims to maximize rewards in the worst situations if the testing environment deviates from the training environment (Nilim & El Ghaoui, 2005; Iyengar, 2005; Wiesemann et al., 2013). Due to the large search space, the complexity of robust MDPs grows rapidly as the dimensionality increases.
Therefore, Tamar et al. (2014) developed an approximate dynamic programming method to scale up the robust MDP paradigm. Roy et al. (2017) extended the method to nonlinear estimation and guaranteed convergence to a regional minimum. Afterward, Wang & Zou (2021) and Badrinath & Kalathil (2021) studied the convergence rate of function approximation under certain assumptions. Derman et al. (2021) showed that regularized MDPs are a particular instance of robust MDPs with uncertain rewards; they solved regularized MDPs rather than robust MDPs to reduce computational complexity. Grand-Clément & Kroer (2020) developed efficient proximal updates to solve the distributionally robust MDP via gradient descent and improved the convergence rate. However, although several approximations have been presented, such models of the environment are still too restrictive to be applied to real-world problems. Adversary in Observations. Even a small perturbation to observations may significantly degrade agents’ performance because deep neural networks are vulnerable to inputs constructed by adversaries (Huang et al., 2017). Therefore, methods were presented to train agents under environments with adversarial attacks to improve their robustness (Kos & Song, 2017; Pattanaik et al., 2018). To guarantee lower-bound performance, Lütjens et al. (2020) and Wang et al. (2019) adopted the idea of certified defense used in classification problems. When taking discrete actions, agents are certifiably robust to observation adversaries within an $\epsilon$ distance (in $L_p$-norm). Since most real-world problems are continuous, methods were also presented (Weng et al., 2019; Zhang et al., 2020; Oikarinen et al., 2021; Zhang et al., 2021) to improve agents’ robustness for continuous actions. Domain Randomization. Environments can induce uncertainty in transition probabilities. To simulate this circumstance, one can perturb the environmental parameters of a simulator to reasonably change transition probabilities when training agents (Huang et al., 2021; Tobin et al., 2017; Jiang et al., 2021; Igl et al., 2019; Cobbe et al., 2019). Specifically, Tobin et al. (2017) randomly sampled environmental variables and optimized the agents’ average reward. Given that a significant perturbation may cause training to fail, Cobbe et al. (2019) increased the level of difficulty step by step when training agents to improve their average rewards. Jiang et al. (2021) further considered the expected return in the optimal case and introduced monotonic robust policy optimization to maximize the average-case and worst-case returns simultaneously. Since perturbing transition probabilities through environmental parameters demands prior knowledge, Kuang et al. (2021) moved states to the nearby local minimum based on gradients obtained from the value function to imitate environmental disturbances. Igl et al. (2019) injected selective noise based on a variational information bottleneck and value networks to prevent models from overfitting the training environment. The regularization helps agents resist the uncertainty of state transition probabilities. Our method perturbs states through the gradients of the value function, as Kuang et al. (2021) did. However, pushing states toward the nearby local minimum will make agents over-conservative because they consider only the worst-case scenarios.
We present the relaxed state-adversarial perturbation and optimize both the average-case and worst-case environments to overcome this problem. 3 PRELIMINARIES A robust Markov decision process (robust MDP) is characterized by a tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, R, \mu, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}$ is the uncertainty set that contains all possible transition kernels, $R: \mathcal{S} \times \mathcal{A} \to [-R_{\max}, R_{\max}]$ is the reward function, $\mu$ is the initial state distribution, and $\gamma \in (0, 1)$ is the discount factor. Let $P_0 \in \mathcal{P}$ denote the nominal transition kernel, which characterizes the transition dynamics of the nominal environment without perturbation. We define the total expected return under a policy $\pi$ and a transition kernel $P \in \mathcal{P}$ as
$$J(\pi \mid P) := \mathbb{E}_{s_0 \sim \mu,\, a_t \sim \pi(\cdot \mid s_t),\, s_{t+1} \sim P(\cdot \mid s_t, a_t)}\Big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\Big]. \quad (1)$$
For ease of exposition, we also define the value function under policy $\pi$ and transition kernel $P$ as $V^{\pi}_{P}(s) := \mathbb{E}_{a_t \sim \pi(\cdot \mid s_t),\, s_{t+1} \sim P(\cdot \mid s_t, a_t)}\big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \mid s_0 = s\big]$. To learn a policy in a robust MDP, the DR approaches are built on two major design principles: (1) Construction of the uncertainty set: DR presumes that one has access to the environment parameters of the simulator. The uncertainty set $\mathcal{P}$ is constructed by specifying the possible range of one or multiple environment parameters, typically based on some domain knowledge. (2) Average-case perspective: DR resorts to maximizing the average performance with respect to some pre-configured distribution $D$ over the uncertainty set $\mathcal{P}$, i.e., $\mathbb{E}_{P \sim D}[J(\pi \mid P)]$. 4 DOMAIN RANDOMIZATION VIA RELAXED STATE-ADVERSARY 4.1 CONNECTING DOMAIN RANDOMIZATION AND STATE PERTURBATION Conventional DR methods enforce attacks on state transitions by perturbing the environment parameters of a simulator. This goal can be achieved by perturbing the state after each nominal transition (Kuang et al., 2021): Let $(s, a)$ be some state-action pair, and $\phi: \mathcal{S} \to \mathcal{S}$ be a state perturbation function. In a nominal environment, the probability of the transition to some state $s'$ under $(s, a)$ is $P(s' \mid s, a)$. Under the state perturbation $\phi$, the probability becomes $P(\phi(s') \mid s, a)$. However, this state-adversarial attack is too effective: since a value function considers the expected future return, a perturbation to an early state may significantly influence later states. The over-conservatism problem therefore occurs. We present a relaxed state-adversarial policy optimization to overcome this problem. We also prove that the relaxed MDP enjoys two main properties under relaxation: (1) it stands for the average performance of the uncertainty set; (2) it guarantees the improvement of the performance of the worst-case MDP. Further, we prove that a specific average-case MDP corresponds to a relaxation parameter. Hence, we propose an algorithm for adapting the relaxation parameter during training. 4.2 STATE-ADVERSARIAL MDPS AND UNCERTAINTY SETS State-adversarial attacks perturb the current states to neighboring states with the lowest values. This perturbation process can be captured by a state-adversarial transition kernel, which connects the nominal MDP and the resulting state-adversarial MDP. For ease of exposition, for each state $s \in \mathcal{S}$, we define $\mathcal{N}_{\delta}(s) := \{s' \mid d(s, s') \le \delta\}$ to be the $\delta$-neighborhood of $s$, where $d(s, s')$ can be any distance metric. In this study, we use the $L_\infty$-norm.
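As a concrete illustration of Equation 1, the sketch below estimates the discounted return $J(\pi \mid P)$ by Monte Carlo rollouts. The `env` and `policy` interfaces are hypothetical stand-ins for a simulator with transition kernel $P$ (and initial distribution $\mu$) and a stochastic policy $\pi$; they are not part of the paper's implementation.

```python
import numpy as np

def estimate_return(env, policy, gamma=0.99, n_episodes=50, horizon=1000):
    """Monte Carlo estimate of J(pi | P) from Equation 1.

    `env` plays the role of a transition kernel P (with initial distribution mu),
    and `policy(s)` samples a_t ~ pi(.|s_t).  Both interfaces are hypothetical.
    """
    returns = []
    for _ in range(n_episodes):
        s = env.reset()                      # s_0 ~ mu
        total, discount = 0.0, 1.0
        for _ in range(horizon):             # truncated stand-in for the infinite sum
            a = policy(s)                    # a_t ~ pi(.|s_t)
            s, r, done = env.step(a)         # s_{t+1} ~ P(.|s_t, a_t), r = R(s_t, a_t)
            total += discount * r
            discount *= gamma
            if done:
                break
        returns.append(total)
    return float(np.mean(returns))           # empirical E[ sum_t gamma^t R(s_t, a_t) ]
```

The same estimator applies under a state-adversarial or relaxed kernel once the perturbed transition is substituted for `env.step`.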
Definition 1 (State Perturbation Matrix). Given a policy $\pi$ and a perturbation parameter $\delta \ge 0$, the state perturbation matrix $Z^{\pi}_{\delta}$ with respect to $\pi$ is defined as follows: for each pair of states $i, j \in \mathcal{S}$,
$$Z^{\pi}_{\delta}(i, j) := \begin{cases} 1, & \text{if } j = \arg\min_{s \in \mathcal{N}_{\delta}(i)} V^{\pi}(s), \\ 0, & \text{otherwise.} \end{cases} \quad (2)$$
The justifications for choosing the above surrogate perturbation model are two-fold: (1) The model can be interpreted as constructing adversarial examples for the true states. (2) The perturbation model is closely related to the perturbation of environment parameters, which serves as the standard machinery in the canonical DR formulation, as described in (Kuang et al., 2021). Remark 1. In a continuous state space, the argmin in Equation 2 can be computed by adapting the fast gradient sign method (FGSM) (Goodfellow et al., 2014). Let $V$ be a value function (i.e., a network), $s$ be a state, and $\epsilon$ be the strength of perturbation. FGSM finds the perturbed state $\phi(s) = s - \epsilon \cdot \mathrm{sign}(\nabla_s V(s))$ that has the minimum value, where $\|s - \phi(s)\|_\infty \le \epsilon$, and the gradient at $s$ is computed using back-propagation. Definition 2 (State-Adversarial MDP). For any policy $\pi$, the corresponding state-adversarial MDP with respect to $\pi$ is defined as a tuple $(\mathcal{S}, \mathcal{A}, P^{\pi}_{\delta}, R, \mu, \gamma)$, where the state-adversarial transition kernel $P^{\pi}_{\delta}$ is defined as
$$P^{\pi}_{\delta}(\cdot \mid s, a) := [Z^{\pi}_{\delta}]^{\top} P_0(\cdot \mid s, a), \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}. \quad (3)$$
Recall that $P_0$ is the nominal transition kernel. We use the notation $P^{\pi}_{\delta} = [Z^{\pi}_{\delta}]^{\top} P_0$ in the later paragraphs for simplicity. Note that the state-adversarial transition matrix $Z^{\pi}_{\delta}$ depends on the strength of perturbation $\delta$. Each perturbation radius results in a unique state-adversarial MDP $P^{\pi}_{\delta}$. Remark 2. The state-adversarial MDP defined in Definition 2 involves perturbation of the true states, which is fundamentally different from the perturbation of observations (Zhang et al., 2020). Definition 3 (Uncertainty Set). Given a radius $\epsilon > 0$, the uncertainty set induced by state-adversarial perturbations, denoted by $\mathcal{P}^{\pi}_{\epsilon}$, is defined as
$$\mathcal{P}^{\pi}_{\epsilon} := \{P^{\pi}_{\delta} : P^{\pi}_{\delta} = [Z^{\pi}_{\delta}]^{\top} P_0 \text{ and } \delta \le \epsilon\}. \quad (4)$$
The adversarial attack pushes agents toward low-value states. Agents trained under this state-adversarial MDP would prevent themselves from falling into the worst situation (Kuang et al., 2021). However, a large $\epsilon$ will make agents too conservative and fail to reach any goal state, because the goal's value cannot be propagated to neighboring states by the TD updates (Figure 1). Although using a small $\epsilon$ can ease the problem, agents would completely omit the risks outside the bounded area. Moreover, this strategy is unachievable in a discrete environment due to the lower bound on $\epsilon$. For example, the agent's movement in the grid world is one hop and cannot be reduced. Lemma 1 (Monotonicity of Average Value in Perturbation Strength). Under the setting of a state-adversarial MDP, the value of the local minimum monotonically decreases as the bounded radius increases. Let $x$ be a positive real number. The return $J$ satisfies
$$J(\pi \mid P^{\pi}_{\delta}) \ge J(\pi \mid P^{\pi}_{\delta + x}), \quad \forall \pi. \quad (5)$$
The proof is in Appendix A.3. Notably, Lemma 1 indicates that among the transition kernels in the uncertainty set $\mathcal{P}^{\pi}_{\epsilon}$, the worst case occurs when $\delta = \epsilon$.
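To make the FGSM computation in Remark 1 concrete, here is a minimal PyTorch sketch of the worst-case state perturbation $\phi(s) = s - \epsilon \cdot \mathrm{sign}(\nabla_s V(s))$. The `value_net` module is a hypothetical placeholder, and the exact FGSM variant and batching used in the paper may differ.

```python
import torch

def worst_case_state(value_net, s, epsilon):
    """FGSM-style approximation of argmin_{s' in N_eps(s)} V(s') (Remark 1).

    `value_net` is a hypothetical torch.nn.Module mapping states to scalar values;
    `s` is a state tensor; `epsilon` is the perturbation strength (L-infinity radius).
    """
    s = s.clone().detach().requires_grad_(True)
    v = value_net(s).sum()          # sum so backward() also works for batched states
    v.backward()                    # gradient of V w.r.t. the state, via back-propagation
    with torch.no_grad():
        # step against the gradient to decrease the value while staying in the eps-ball
        s_adv = s - epsilon * s.grad.sign()
    return s_adv.detach()
```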
4.3 RELAXED STATE-ADVERSARIAL MDPS We present a relaxation framework to address the over-conservatism issue. To begin with, we consider a relaxation on the state-adversarial transition kernel as follows. Relaxed state-adversarial transition kernel. Given $\epsilon > 0$ and $\alpha \in [0, 1]$, the $\alpha$-relaxed state-adversarial transition kernel is defined as a convex combination of the nominal and the state-adversarial transition kernels, i.e.,
$$P^{\pi,\alpha}_{\epsilon}(\cdot \mid s, a) = \alpha P_0(\cdot \mid s, a) + (1 - \alpha) P^{\pi}_{\epsilon}(\cdot \mid s, a). \quad (6)$$
Connecting relaxed state-adversarial MDPs with domain randomization. DR methods demand a prior distribution for computing the average-case performance. Let $D$ be a distribution over the uncertainty set $\mathcal{P}^{\pi}_{\epsilon}$. In the following, we show that applying DR with respect to $D$ is equivalent to optimizing an objective under a relaxed state-adversarial transition kernel. Lemma 2 (Relaxation parameter $\alpha$ as a prior distribution $D$ in domain randomization). For any distribution $D$ over the state-adversarial uncertainty set $\mathcal{P}^{\pi}_{\epsilon}$, there must be an $\alpha \in [0, 1]$ such that $\mathbb{E}_{P \sim D}[J(\pi \mid P)] = J(\pi \mid P^{\pi,\alpha}_{\epsilon})$. The proof is in Appendix A.4. It is worth noting that different values of $\alpha$ represent different prior assumptions. For example, $\alpha = 1$ implies that the prior probability of the nominal MDP is 1, whereas $\alpha = 0$ indicates that the prior probability of the worst-case MDP is 1. In other words, we can control the value of $\alpha$ to represent different distributions $D$ and train the policies under various environments. To achieve this goal, we quantify the gap between the average performance $\mathbb{E}_{P \sim D}[J(\tilde{\pi} \mid P)]$ and the worst-case performance $J(\tilde{\pi} \mid P^{\pi}_{\epsilon})$ when updating the current policy $\pi$ to a new policy $\tilde{\pi}$, and then apply an optimization technique to maximize both of them. One naïve bound is as follows. Theorem 1 (A naïve connection between the average-case and the worst-case returns). Given a nominal MDP with state adversaries, when updating the current policy $\pi$ to a new policy $\tilde{\pi}$, the following bound holds (Jiang et al., 2021):
$$J(\tilde{\pi} \mid P^{\pi}_{\epsilon}) \ge \mathbb{E}_{P \sim D}[J(\tilde{\pi} \mid P)] - \frac{2 R_{\max}\, \mathbb{E}_{P \sim D}[d_{TV}(P^{\pi}_{\epsilon} \,\|\, P)]}{(1 - \gamma)^2} - \frac{4 R_{\max}\, d_{TV}(\pi, \tilde{\pi})}{(1 - \gamma)^2}, \quad (7)$$
where $R_{\max}$ is the maximum reward, $d_{TV}(\pi, \tilde{\pi})$ denotes the total variation divergence between $\pi$ and $\tilde{\pi}$, and $P^{\pi}_{\epsilon}$ is the worst-case state-adversarial transition kernel. Theorem 1 indicates that the gap between the average-case and the worst-case performance can be expressed using the MDP shift $\mathbb{E}_{P \sim D}[d_{TV}(P^{\pi}_{\epsilon} \,\|\, P)]$ and the policy evolution $d_{TV}(\pi, \tilde{\pi})$. The proof is in Appendix A.5. Note that the bound in Theorem 1 is loose because the value on the right-hand side (RHS) of Equation 7 can be very small. Specifically, the transition kernel probability shift $\mathbb{E}_{P \sim D}[d_{TV}(P^{\pi}_{\epsilon} \,\|\, P)]$ is multiplied by the total maximum return $\frac{R_{\max}}{1 - \gamma}$, and the additional denominator $1 - \gamma$ makes the value even smaller since $\gamma$ is usually set to 0.99 in RL applications. As a result, the bound can be meaningless unless the worst-case MDP $P^{\pi}_{\epsilon}$ is very close to the average MDP. Since state perturbation only perturbs states to nearby states, we consider the smoothness of the reward function and of the transition kernel to build a tight connection between the average-case and the worst-case returns. Specifically, Lipschitz continuity of the reward function has been widely used in the theory of RL (Fehr et al., 2018; Asadi et al., 2018; Ling et al., 2016). The smoothness of the transition kernel also holds in most environments (Shen et al., 2020; Lakshmanan et al., 2015). For example, in grid-world, the next state must be adjacent to the current state; and in MuJoCo, the poses at consecutive time steps are similar, no matter what state-action pairs are considered.
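Before formalizing this smoothness property, it may help to see how a single transition can be drawn from the $\alpha$-relaxed kernel of Equation 6: with probability $\alpha$ the nominal next state is kept, and with probability $1 - \alpha$ it is replaced by its worst-case perturbation. The sketch below reuses the hypothetical `env` interface and the `worst_case_state` helper from the earlier snippets and is only one plausible reading of Equation 6, not the paper's exact implementation.

```python
import random

def relaxed_step(env, value_net, a, alpha, epsilon):
    """Sample s_{t+1} from the alpha-relaxed kernel of Equation 6.

    With probability alpha the nominal transition P_0 is used unchanged; with
    probability 1 - alpha the nominal next state is pushed to the lowest-value
    state in its epsilon-neighborhood (the worst-case state-adversarial kernel).
    """
    s_next, r, done = env.step(a)                      # nominal s' ~ P_0(.|s, a)
    if random.random() >= alpha:                       # event of probability 1 - alpha
        s_next = worst_case_state(value_net, s_next, epsilon)
    return s_next, r, done
```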
Formally, we define this smoothness property of transition kernels as follows. Definition 4 ($\delta$-Smooth Transition Kernel in State). Let $P$ be a transition kernel and $\delta$ be a positive constant. $P$ is a $\delta$-smooth transition kernel in state if
$$\|s - s'\| \le \delta \quad (8)$$
for all $a$ and for all $s, s'$ with $P(s' \mid s, a) > 0$. With the assumptions of Lipschitz continuity of the reward function and smoothness of the transition kernel, we arrive at the following bound. Theorem 2 (Connecting Worst-Case and Average-Case Returns). Given a nominal MDP with two properties: (1) the reward function of the corresponding Markov Reward Process (MRP) with respect to any policy is an $L_r$-Lipschitz function; (2) the nominal transition kernel $P_0$ has the $\delta$-smooth transition property, i.e., $\|s - s'\|_2 \le \delta$ for all $a$ and all $s, s'$ with $P_0(s' \mid s, a) > 0$. Then, after updating the current policy $\pi$ to a new policy $\tilde{\pi}$, the following bound holds:
$$J(\tilde{\pi} \mid P^{\pi}_{\epsilon}) \ge J(\tilde{\pi} \mid P^{\pi,\alpha}_{\epsilon}) - \frac{4 \gamma (\epsilon + \delta) L_r\, \alpha}{(1 - \gamma)^3} - \frac{4\big(\gamma (\epsilon + \delta) L_r + (1 - \gamma)^2 R_{\max}\big)\, d_{TV}(\pi, \tilde{\pi})}{(1 - \gamma)^3}, \quad (9)$$
where $d_{TV}(\pi, \tilde{\pi})$ is the total variation divergence between $\pi$ and $\tilde{\pi}$, $P^{\pi,\alpha}_{\epsilon}$ is the relaxed state-adversarial transition kernel, and $P^{\pi}_{\epsilon}$ is the worst-case state-adversarial transition kernel. The proof is provided in Appendix A.6. Notably, Theorem 2 holds for any relaxation parameter $\alpha \in [0, 1]$. We now briefly discuss the technical challenges in the proof: (1) Propagation of state perturbations across time: the main difficulty lies in the fact that the difference between trajectories under different MDPs increases in a rather nonlinear and complex manner as time evolves. (2) Quantifying the difference in rewards among trajectories generated under different transition kernels: to measure the difference in rewards under different MDPs, it is necessary to consider not only the probability difference at time $t$ but also the difference in rewards at different states. Despite these challenges, our proof uses the finding that the difference between the state distributions under the two MDPs $P^{\pi}_{\epsilon}$ and $P^{\pi,\alpha}_{\epsilon}$ at time step $t$ can be quantified as $\alpha$ times a factor lying in $[0, 1]$. Then, under the smoothness conditions on the reward function and the transition matrix, we are able to characterize a tight bound between the average-case and the worst-case performance. The intuition of Theorem 2 can be expressed using the terms on the RHS of Equation 9. The first term is the average performance over all MDPs in the uncertainty set. The second term penalizes large values of $\alpha$, because a large $\alpha$ implies that the relaxed MDP is close to the nominal environment; in other words, we expect the average-case performance to be high while pushing the uncertainty set close to the worst-case MDP. Finally, the third term prevents a significant update in a single step by reducing the total variation divergence $d_{TV}(\pi, \tilde{\pi})$. 4.4 ONLINE ADAPTATION OF THE RELAXATION PARAMETER We leverage Theorem 2 to address both the average-case and the worst-case performance. Specifically, we present a bi-level approach to maximize the lower bound of the worst-case performance (i.e., the RHS of Equation 9), since the unknowns $\alpha$ and $\pi$ are coupled. The two tasks are optimized alternately and iteratively. Details are as follows:
• Lower-level task for the average-case return: On the lower level, we improve the policy by optimizing the objective $J(\pi_{\theta_t} \mid P^{\pi_{\theta_{t-1}}, \alpha_t}_{\epsilon})$ under a fixed relaxation parameter $\alpha_t$. This can be done by using any off-the-shelf RL algorithm (e.g., PPO with a clipped objective).
• Upper-level task for the worst-case return: On the upper level, we design a meta-objective $J_{\text{meta}}(\alpha_t)$ to represent the lower bound of the worst-case performance (the RHS of Equation 9). Hence, the task aims to find a relaxation parameter $\alpha_t$ that maximizes $J_{\text{meta}}(\alpha_t)$ (a minimal code sketch of this alternation follows the list).
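The following Python sketch shows the alternation just described: a lower-level PPO-style update at a fixed $\alpha_t$, followed by one gradient step on $\alpha_t$ with respect to a surrogate of the meta-objective (Algorithm 1 in this subsection gives the procedure in full). The callables `collect_rollouts`, `ppo_update`, and `meta_grad` are hypothetical placeholders for the paper's actual components, and the real meta-objective follows the RHS of Equation 9.

```python
def train_rappo(policy, alpha0, n_iters, collect_rollouts, ppo_update, meta_grad,
                meta_lr=1e-3):
    """Bi-level sketch of relaxed state-adversarial policy optimization.

    collect_rollouts(policy, alpha): samples transitions under the alpha-relaxed kernel.
    ppo_update(policy, batch):       lower level, maximizes the average-case return.
    meta_grad(batch, alpha):         gradient of a surrogate of J_meta w.r.t. alpha.
    All three callables are hypothetical placeholders.
    """
    alpha = alpha0
    for _ in range(n_iters):
        batch = collect_rollouts(policy, alpha)          # rollouts under the alpha_t-relaxed kernel
        ppo_update(policy, batch)                        # lower-level policy improvement
        meta_batch = collect_rollouts(policy, alpha)     # fresh samples for the online meta step
        alpha += meta_lr * meta_grad(meta_batch, alpha)  # upper-level: one ascent step on J_meta
        alpha = min(1.0, max(0.0, alpha))                # keep the relaxation parameter in [0, 1]
    return policy, alpha
```

Drawing separate batches for the two steps mirrors the paper's note that the policy update and the online meta-update use different samples.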
To enable stable training, we iteratively update $\alpha_t$ by applying the online cross-validation algorithm (Sutton, 1992). Both the lower- and upper-level tasks aim to increase the lower bound of the worst-case performance $J(\pi_{\theta_t} \mid P^{\pi_{\theta_{t-1}}}_{\epsilon})$ (Equation 9). In the lower level, a constant relaxation parameter $\alpha_t$ represents a specific distribution $D$; the task seeks to maximize the average return over all environments in the uncertainty set following distribution $D$. In the upper level, the optimization adjusts $\alpha$ to maximize this lower bound. On one hand, increasing $\alpha_t$ improves the average performance $J(\pi_{\theta_t} \mid P^{\pi_{\theta_{t-1}}, \alpha_t}_{\epsilon})$ since the average case moves toward the nominal environment, yet the price is an increased MDP shift (i.e., the second term on the RHS of Equation 9). On the other hand, decreasing $\alpha$ changes the performance and the penalty in the opposite way. Since $\pi$ is weak initially and its performance gradually improves, the meta-objective optimization tends to decrease and then increase $\alpha$ during training. Algorithm 1 illustrates our implementation. We first update the policy $\pi_{\theta_t}$ to maximize the average-case return $J(\pi_{\theta_t} \mid P^{\pi_{\theta_{t-1}}, \alpha_t}_{\epsilon})$ using proximal policy optimization (PPO). Afterward, we update the relaxation parameter $\alpha$ to ensure that the worst-case return is higher than a specific bound (Equation 9). Note that the samples used in the two steps are different (Lines 3 and 6 of Algorithm 1) because the meta-objective optimization is an online method. In addition, we chose PPO as the base algorithm since it prevents the model from being updated significantly in a single step, which helps control the penalty term $d_{TV}(\pi, \tilde{\pi})$ in Theorem 2. The implementation details are provided in Appendix A.7.
Algorithm 1: Relaxed State-Adversarial Policy Optimization
Input: MDP $(\mathcal{S}, \mathcal{A}, P_0, r, \gamma)$, objective function $L$, step-size parameter $\eta$, number of iterations $T$; $P_0$ is the nominal transition kernel; $\epsilon$-neighborhood
1: Initialize the policy $\pi_{\theta_0}$
for $t = 0, \ldots, T-1$ do
2:   Sample the tuples $\{s_i, a_i, r_i, s'_i\}_{i=1}^{T_{\text{upd}}}$, where $a'_i \sim \pi_{\theta_t}(\cdot \mid s'_i)$ and $s'_i \sim P_0(\cdot \mid s_i, a_i)$
3:   Evaluate $J(\pi_{\theta_t} \mid P^{\pi_{\theta_{t-1}}, \alpha_t}_{\epsilon})$
4:   Update the policy to $\pi_{\theta_{t+1}}$ by applying multi-step SGD to the objective function, as in PPO
5:   Sample the tuples $\{s_i, a_i, r_i, s'_i\}_{i=1}^{T'_{\text{upd}}}$, where $a'_i \sim \pi_{\theta_{t+1}}(\cdot \mid s'_i)$ and $s'_i \sim P_0(\cdot \mid s_i, a_i)$
6:   Update the relaxation parameter to $\alpha_{t+1}$ via one SGD update with respect to the meta-objective
7: end
5 EXPERIMENTAL RESULTS AND EVALUATIONS We conducted two experiments on MuJoCo (Todorov et al., 2012) to evaluate the performance of our relaxed state-adversarial policy optimization (RAPPO). All the baselines and our method were implemented on top of PPO (Schulman et al., 2017), and the default training parameters were used. In addition, the results were averaged over five different runs/seeds. Robustness against Environmental Adversaries. We compared our RAPPO with the latest DR method, MRPO (Jiang et al., 2021), to evaluate its robustness against the uncertainty of environmental parameters.¹ Agents trained using the two methods were evaluated in environments in which the size and gravity were varied in the range of 0.6-1.4. To simulate the situation where domain knowledge is unavailable, during training MRPO perturbed mass and friction in the range of 0.8-1.2, while our RAPPO attacked the states via its value function. Figure 2 shows the differences between the rewards of the two methods. As can be seen, our RAPPO outperformed MRPO since state adversaries are more general than environmental adversaries.
Agents trained by MRPO could perform poorly when the perturbations in the training and testing environments were different. Robustness Against State Adversaries. We compared our RAPPO with SCPPO (Kuang et al., 2021) to evaluate its robustness against state adversaries. Both methods perturb states to improve agents' robustness. We also included vanilla PPO in the experiment because it is the base algorithm of RAPPO and SCPPO. To achieve a fair comparison, the parameters used in RAPPO and SCPPO were the same. Specifically, we set $\epsilon$ to 0.015, 0.002, 0.03, 0.001, and 0.005 for the environments HalfCheetah-v2, Hopper-v2, Ant-v2, Walker-v2, and Humanoid2d-v2, respectively. The parameters were chosen according to the variance of actions in the environments. ¹ We obtained the official implementation of MRPO from http://proceedings.mlr.press/v139/jiang21c.html and used their default parameter setting. Table 1 shows the testing results. We attacked the agents using their respective value functions under multiple strengths. Specifically, we repeated the experiments with 5 different seeds and generated 50 trajectories for each seed from different initial states for evaluation. The means and standard deviations of the rewards are reported. Clearly, the results are consistent with Lemma 1: agents' performance decreased as the strength of the attack increased. In addition, our RAPPO was competitive with PPO and SCPPO in nominal environments, and its performance decreased the slowest as the strength of the attack increased. It is worth noting that the attacks in the last two columns of Table 1 were stronger than the worst case; our RAPPO performed the best in these environments. Extending SAPPO Using Relaxed State Adversaries. While our RAPPO successfully improves the robustness of agents against state adversaries, a classical method, SAPPO (Zhang et al., 2020), can help agents against the perturbation of state observations. We thus extended SAPPO by adopting our relaxed state-adversarial attacks during training and evaluated its effectiveness. Similarly, we compared the methods on the trajectories of 5 seeds and 50 initial states. Table 2 shows the results. As indicated, the extended RA SAPPO outperformed SAPPO in most of the environments, particularly under strong attacks. Steady Improvements in the Average-Case and Worst-Case Environments. We apply a bi-level approach to optimize the average-case and worst-case environments during training. To verify the feasibility of this approach, we evaluated the agents' performance under these two cases during training. To determine the worst-case result, we generated 50 trajectories from different initial states, perturbed states with the same strength as the training $\epsilon$, and then averaged the rewards. In contrast, the average-case result was determined from 50 initial states and 10 different perturbation strengths, which were uniformly distributed between 0 and $\epsilon$; in total, the rewards of 50 × 10 trajectories were averaged. Figure 3 shows that our RAPPO can steadily improve the average-case performance without sacrificing the worst-case performance. Note that the high variance of the average-case rewards is reasonable because of the different adversarial strengths. The value of the relaxation parameter $\alpha$. Our meta-objective optimization determines the relaxation parameter $\alpha$ (Equation 6) to control the strength of state adversaries during training. While $\alpha$ is unknown, an intuitive idea is to consider $\alpha$ a hyper-parameter and let users specify its value.
However, we point out that the value of $\alpha$ should vary at different training stages, since agents are weak initially and perform well only after training. To verify that a dynamic $\alpha$ is preferable to a constant $\alpha$ (i.e., RAPPO-C), we evaluated the performance of agents in state-perturbed environments. In the experiments, we set $\alpha = 0.5$ for RAPPO-C since it lies midway between the nominal and worst-case environments. The remaining parameters of the two methods were exactly the same. As indicated in Table 1, RAPPO clearly outperformed RAPPO-C. We also refer readers to Appendix A.8 for the dynamics of $\alpha$ during training. 6 CONCLUSIONS We have presented a relaxed state-adversarial policy optimization to improve the robustness of agents against the uncertainty of environments. Compared to the methods in DR, we perturb states using an adversarial attack so as to decouple randomization from simulators; neither prior knowledge for selecting environmental parameters nor a prior assumption on the parameter distribution is needed. In addition, we introduced a relaxation strategy to tackle the over-conservatism problem caused by state-adversarial attacks. Our policy optimization maximizes rewards in the average case while simultaneously maintaining a lower bound on rewards in the worst-case environments. Experimental results and theoretical proofs demonstrate the effectiveness of our method. Limitations and Future Work. Our relaxation method is state-independent, in that the value of $\alpha$ is adjusted according to the overall performance of the policy. Since the degree of difficulty varies from state to state, it would be interesting to investigate a state-dependent relaxation method. In addition, we currently assume that each dimension of the state is equally important, which may not be the case; we will also explore weighting each dimension when perturbing states in future work.
1. What is the focus and contribution of the paper regarding mitigating over-conservative policies?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical support and experimental results?
3. Do you have any concerns or questions about the algorithm's use of a learned approximation of J_meta, or about the necessity of the state adversary?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any potential issues with the algorithm's optimization strategy, such as prioritizing average-case objectives over worst-case ones, or relying on shrinking the uncertainty set rather than improving policy performance?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a method to mitigate the problem of over-conservative policies when optimizing under the worst-case state adversary. Under certain conditions, it proves a tighter bound relating the average-case return and the worst-case return under a state adversary. A two-level update algorithm is proposed to automatically tune the optimization between the worst-case and average-case objectives. Experiments on MuJoCo show the learned policies to be more robust against different levels of state adversary.

Strengths And Weaknesses
Strengths:
- The paper has good writing and clear delivery of the idea.
- The theoretical support of the proposed method seems to be solid.
- The proposed algorithm is novel and interesting.
- Experiments support the effectiveness of the method well.

Weaknesses:
- What is J_meta? I checked the appendix, and it seems the algorithm uses a learned approximation of J_meta; please explain and justify this in the main paper.
- Is the state adversary really necessary? I agree that the state adversary is more general than other types of environment adversary, such as a parameter adversary. However, does the state adversary really capture the difference between simulation and reality? Given that the state adversary in the paper is independently generated for each state, there is no consecutive dependency across steps, while I would guess that most dynamics mismatches between simulation and reality preserve this dependency.
- Confusion about Lemma 2. It is true in Lemma 2 that a certain α exists that makes the relaxed state-adversarial kernel equal, in terms of objective value, to the expectation over the uncertainty set. But when the relaxed state-adversarial kernel is used in the algorithm, it does not actually correspond to the average-case return, right? Because (1) the α is changing, and (2) the true α satisfying Lemma 2 is unknown. So why does optimizing over the relaxed state-adversarial kernel correspond to optimizing over the average-case return? From the results in the paper, α decreases first and increases later, which indicates that the algorithm optimizes the average-case objective more at first and the worst-case objective more later. And the optimization over the worst-case objective is achieved essentially through shrinking the uncertainty set (α gets larger), but not necessarily by improving the policy performance on the full uncertainty set (corresponding to α = 1), so how does this really help the policy facing the worst case?
- For the experiments, although RAPPO outperforms SCPPO, it may not be a fair comparison, since RAPPO is designed with a state adversary while SCPPO is not, yet both of them are evaluated under a state adversary.

Clarity, Quality, Novelty And Reproducibility
The paper has good clarity except for a few details. The proposed method is novel. It is reproducible.
ICLR
Title Revisiting Domain Randomization Via Relaxed State-Adversarial Policy Optimization Abstract Domain randomization (DR) is widely used in reinforcement learning (RL) to bridge the gap between simulation and reality through maximizing its average returns under the perturbation of environmental parameters. Although effective, the methods have two limitations: (1) Even the most complex simulators cannot capture all details in reality due to finite domain parameters and simplified physical models. (2) Previous methods often assume that the distribution of domain parameters is a specific family of probability functions, such as a normal or a uniform distribution, which may not be correct. To enable robust RL via DR without the aforementioned limitations, we rethink DR from the perspective of adversarial state perturbation, without the need for re-configuring the simulator or relying on prior knowledge about the environment. We point out that perturbing agents to the worst states during training is naı̈ve and could make the agents over-conservative. Hence, we present a Relaxed State-Adversarial Algorithm to tackle the over-conservatism issue by simultaneously maximizing the average-case and worst-case performance of policies. We compared our method to the state-of-the-art methods for evaluation. Experimental results and theoretical proofs verified the effectiveness of our method. 1 INTRODUCTION Most reinforcement learning (RL) agents are trained in simulated environments due to the difficulties of collecting data in real environments. However, the domain shift, where the simulated and real environments are different, could significantly reduce the agents’ performance. To bridge this “reality gap”, domain randomization (DR) methods perturb environmental parameters (Tobin et al., 2017; Rajeswaran et al., 2016; Jiang et al., 2021), such as the mass or the friction coefficient, to simulate the uncertainty in state transition probabilities and expect the agents to maximize the return over the perturbed environments. Despite its wide applicability, DR suffers from two practical limitations: (i) DR requires direct access to the underlying parameters of the simulator, and this could be infeasible if only off-the-shelf simulation platforms are available. (ii) To enable sampling of environmental parameters, DR requires a prior distribution over the feasible environmental parameters. However, the design of such a prior typically relies on domain knowledge and could significantly affect the performance in real environments. To enable robust RL via DR without the above limitations, we rethink DR from the perspective of adversarial state perturbation, without the need for re-configuring the simulator or relying on prior knowledge about the environment. The idea is that perturbing the transition probabilities can be equivalently achieved by imposing perturbations upon the states after nominal state transitions. To substantiate the idea of state perturbations, a simple and generic approach from the robust optimization literature (Ben-Tal & Nemirovski, 1998) is taking a worst-case viewpoint and perturbing the states to nearby states that have the lowest long-term expected return under the current policy (Kuang et al., 2021). While being a natural solution, such a worst-case strategy could suffer from severe over-conservatism. 
We identify that the over-conservative behavior results from the tight coupling between the need for temporal difference (TD) learning in robust RL and the worst-case operation of state perturbation. Specifically: (1) In robust RL, the value functions are learned with the help of bootstrapping in TD methods since finding nearby worst-case states via Monte-Carlo sampling is NP-hard (Ho et al., 2018; Chow et al., 2015; Behzadian et al., 2021). (2) Under the worst-case state perturbations, TD methods would update the value function based on the local minimum within a neighborhood of the nominal next state and is, therefore, completely unaware of the value of the nominal next state. As a result, the learner could fail to identify or explore those states with potentially high returns. To further illustrate this phenomenon, we consider a toy grid world example of finding the shortest path toward the goal, as shown in Figure 1(a). Although the goal state has a high value, the TD updates cannot propagate the value to other states since all nominal state transitions toward the goal state are perturbed away under the worst-case state-adversarial method. What’s even worse, the agent ultimately learns to move toward the trap state due to the compounding effect of TD updates and worst-case state-adversarial perturbations. Notably, in addition to the grid world environment, such trap terminal states also commonly exist in various RL problems, such as the locomotion tasks in MuJoCo. As a result, there remains one critical unanswered question in robust RL: how to fully unleash the power of the state-adversarial model in robustifying RL algorithms without suffering from over-conservatism? To answer this question, we introduce relaxed state-adversarial perturbations. Specifically: (1) Instead of taking a pure worst-case perspective, we simultaneously consider both the average-case and worst-case scenarios during training. By incorporating the average-case scenarios, the TD updates can successfully propagate the values of those potentially high-return states to other states and thereby prevent the over-conservative behavior (Figure 1(b)). (2) To substantiate the above idea, we introduce a relaxed state-adversarial transition kernel, where the average-case environment can be easily represented by the interpolation of the nominal and the worst-case environments. Under this new formulation of DR, each interpolation coefficient corresponds to a distribution of state adversaries. (3) Besides, based on this formulation, we theoretically quantify the performance gap between the average-case and the worst-case environments; and prove that maximizing the averagecase performance can also benefit the worst-case performance. (4) Accordingly, we present Relaxed state-adversarial policy optimization, a bi-level framework that optimizes the rewards of the two cases alternatively and iteratively. One level updates the policy to maximize the average-case performance, and the other updates the interpolation coefficient of the relaxed state-adversarial transition kernel to increase the lower bound of the return of the worst-case environment. 2 RELATED WORK Robust Markov Decision Process (MDP) and Robust RL. Robust MDP aims to maximize rewards in the worst situations if the testing environment deviates from the training environment (Nilim & El Ghaoui, 2005; Iyengar, 2005; Wiesemann et al., 2013). Due to the large searching space, the complexity of robust MDP grows rapidly when the dimensionality increases. 
Therefore, Tamar et al. (2014) developed an approximated dynamic programming to scale up the robust MDPs paradigm. Roy et al. (2017) extended the method to nonlinear estimation and guaranteed the convergence to a regional minimum. Afterward, the works of (Wang & Zou, 2021; Badrinath & Kalathil, 2021) study the convergence rate when applying function approximations under assumptions. Derman et al. (2021) showed that the regularized MDPs are a particular instance of robust MDPs with uncertain rewards. They solved regularized MDPs rather than robust MDPs to reduce computation complexity. Grand-Clément & Kroer (2020) developed efficient proximal updates to solve the distributionally robust MDP via gradient descent and improved the convergence rate. However, although several approximations were presented, such model environments are still too restrictive, and they cannot be used to solve real-world problems. Adversary in Observations. Even a small perturbation to observations may significantly degrade agents’ performance because deep neural networks are vulnerable to inputs constructed by adversaries (Huang et al., 2017). Therefore, methods were presented to train agents under environments with adversarial attacks to improve their robustness (Kos & Song, 2017; Pattanaik et al., 2018). To guarantee a lower-bound performance, the works of (Lütjens et al., 2020; Wang et al., 2019) adopted the idea of certified defense used in classification problems. When making discrete actions, agents are certifiably robust to adversaries in observation within the ✏ distance (Lp-norm). Since most real-world problems are continuous, there were also methods (Weng et al., 2019; Zhang et al., 2020; Oikarinen et al., 2021; Zhang et al., 2021) presented to improve agents’ robustness for continuous actions. Domain Randomization. Environments can induce the uncertainty of transition probabilities. To simulate this circumstance, one can perturb the environmental parameters of a simulator to reasonably change transition probabilities when training agents (Huang et al., 2021; Tobin et al., 2017; Jiang et al., 2021; Igl et al., 2019; Cobbe et al., 2019). Specifically, Tobin et al. (2017) randomly sampled environmental variables and optimized the agents’ average reward. Given that a significant perturbation may fail the training, Cobbe et al. (2019) increased the level of difficulty step by step when training agents to improve their average rewards. Jiang et al. (2021) further considered the expected return in the optimal case and introduced monotonic robust policy optimization to maximize the average-case and worst-case returns simultaneously. Since perturbing transition probabilities through environmental parameters demands prior knowledge, Kuang et al. (2021) transferred states to the nearby local minimum based on gradients obtained from the value function to imitate environmental disturbance. Igl et al. (2019) injected selective noise based on a variational information bottleneck and value networks to prevent models from overfitting the training environment. The regularization helps agents resist the uncertainty of state transition probabilities. Our method perturbs states through the gradients of the value function, as Kuang et al. (2021) did. However, pushing states toward the nearby local minimum will make agents over-conservative because they consider only the worst-case scenarios. 
We present the relaxed state adversarial perturbation and optimize both the average-case and worst-case environments to overcome this problem. 3 PRELIMINARIES A robust Markov decision process (robust MDP) is characterized by a tuple (S,A,P, R, µ, ), where S is the state space, A is action space, P is the uncertainty set that contains all possible transition kernels, R : S ⇥A ! [ Rmax, Rmax] is the reward function, µ is the initial state distribution, and 2 (0, 1) is the discount factor. Let P0 2 P denote the nominal transition kernel, which characterizes the transition dynamics of the nominal environment without perturbation. We define the total expected return under a policy ⇡ and a transition kernel P 2 P as J(⇡|P ) := Es0⇠µ,at⇠⇡(·|st),st+1⇠P (·|st,at) 1X t=0 tR(st, at) . (1) For ease of exposition, we also define the value function under policy ⇡ and transition kernel P as V ⇡P (s) := Eat⇠⇡(·|st),st+1⇠P (·|st,at) hP1 at=0 tR(st, at)|s0 = s i . To learn a policy in a robust MDP, the DR approaches are built on two major design principles: (1) Construction of uncertainty set: DR presumes that one could have access to the environment parameters of the simulator. The uncertainty set P is constructed by specifying the possible range of one or multiple environment parameters, typically based on some domain knowledge. (2) Average-case perspective: DR resorts to maximizing the average performance with respect to some pre-configured distribution D over the uncertainty set P , i.e., EP⇠D[J(⇡|P )]. 4 DOMAIN RANDOMIZATION VIA RELAXED STATE-ADVERSARY 4.1 CONNECTING DOMAIN RANDOMIZATION AND STATE PERTURBATION Conventional DR methods enforce attacks on state transitions by perturbing the environment parameters of a simulator. This goal can be achieved by perturbing the state after each nominal transition (Kuang et al., 2021): Let (s, a) be some state-action pair, and : S ! S be a state perturbation function. In a nominal environment, the probability of the transition to some state s0 under s, a is P (s0|s, a). Under the state perturbation , the probability becomes P ( (s0)|s, a). However, this state adversarial attack is too effective since a value function considers the expected future return, and a perturbation to an early state may significantly influence the later states. The over-conservatism problem therefore occurs. We present a relaxed state-adversarial policy optimization to overcome the problem. We also prove that the relaxed MDP enjoys two main properties under relaxation: (1) it stands for the average performance of the uncertainty set; (2) it guarantees the improvement the performance of the worst-case MDP. Further, we prove that a specific average-case MDP corresponds to a relaxation parameter. Hence, we propose an algorithm for adapting the relaxation parameters during training. 4.2 STATE-ADVERSARIAL MDPS AND UNCERTAINTY SETS State-adversarial attacks perturb the current states to neighboring states with the lowest values. This perturbation process can be captured by a state-adversarial transition kernel, which connects the nominal MDP and the resulting state-adversarial MDP. For ease of exposition, for each state s 2 S , we define N (s) := {s0|d(s, s0) } to be the -neighborhood of s, where d(s, s0) can be any distance metric. In this study, we use L1-norm. Definition 1 (State Perturbation Matrix). 
Given a policy ⇡ and a perturbation parameter 0, the state perturbation matrix Z⇡ with respect to ⇡ is defined as follows: for each pair of states i, j 2 S , Z⇡ (i, j) := ⇢ 1, if j = argmins2N (i) V ⇡(s), 0, otherwise. (2) The justifications for choosing the above surrogate perturbation model are two-fold: (1) The model can be interpreted as constructing adversarial examples for the true states. (2) The perturbation model is closely related to the perturbation of environment parameters, which serve as the standard machinery in the canonical DR formulation, as described in (Kuang et al., 2021). Remark 1. In continuous state space, the argmin in Equation 2 can be computed by adapting the fast gradient sign method (FGSM) (Goodfellow et al., 2014). Let V be a value function (i.e., network) with parameter , s be a state, and ✏ be the strength of perturbation. FGSM finds the perturbed state (s) = s ✏ · sign(rsV ( , s)) that has the minimum value, where ||s (s)||1 ✏, and the gradient at s is computed using back-propagation. Definition 2 (State-Adversarial MDP). For any policy ⇡, the corresponding state-adversarial MDP with respect to ⇡ is defined as a tuple (S,A, P⇡ , R, µ, ), where the state-adversarial transition kernel P⇡ is defined as P⇡ (·|s, a) := [Z⇡ ]>P0(·|s, a), 8(s, a) 2 S ⇥A . (3) Recall that P0 is the nominal transition kernel. We use the notation P⇡ = [Z⇡ ]>P0 in the later paragraphs for simplicity. Note that the state adversarial transition matrix Z⇡ depends on the strength of perturbation . Each perturbation radius results in a unique state-adversarial MDP P⇡ . Remark 2. The state-adversarial MDP defined in Definition 2 involves perturbation of the true states, which is fundamentally different from the perturbation of observations (Zhang et al., 2020). Definition 3 (Uncertainty Set). Given a radius ✏ > 0, the uncertainty set induced by state-adversarial perturbations, denoted by P⇡✏ , is defined as P⇡✏ := {P⇡ : P⇡ = [Z⇡ ]>P0 and ✏}. (4) The adversarial attack transits agents toward low-value states. Agents trained using this state adversarial MDP would prevent themselves from falling into the worst situation (Kuang et al., 2021). However, a large ✏ will make agents too conservative and fail to reach any goal state because its value cannot be propagated to neighboring states by the TD updates (Figure 1). Although using a small ✏ can ease the problem, agents would completely omit the risks outside the bounding area. Besides, this strategy is unachievable in a discrete environment due to the lower-bound value of ✏. For example, the agent’s movement in the grid world is one hop and cannot be reduced. Lemma 1 (Monotonicity of Average Value in Perturbation Strength). Under the setting of state adversarial MDP, the value of the local minimum monotonically decreases as the bounded radius increases. Let x be a positive real number. The reward function J satisfies J(⇡|P⇡ ) J(⇡|P⇡ +x), 8⇡. (5) The proof is in Appendix A.3. Notably, Lemma 1 indicates that among the transition kernels in the uncertainty set P⇡✏ , the worst-case occurs when = ✏. 4.3 RELAXED STATE-ADVERSARIAL MDPS We present a relaxation framework to address the over-conservatism issue. To begin with, we consider a relaxation on the state-adversarial transition kernel as follows: Relaxed state-adversarial transition kernel. 
Given ✏ > 0 and ↵ 2 [0, 1], the ↵-relaxed stateadversarial transition kernel is defined as a convex combination of the nominal and the stateadversarial transition kernels, i.e., P⇡,↵✏ (·|s, a) = ↵P0(·|s, a) + (1 ↵)P⇡✏ (·|s, a). (6) Connecting relaxed state-adversarial MDPs with domain randomization. DR methods demand a prior distribution for computing the average case performance. Let D be a distribution over the uncertainty set P⇡✏ . In the following, we show that applying DR with respect to D is equivalently cast optimizing an objective under a relaxed state-adversarial transition kernel. Lemma 2 (Relaxation parameter ↵ as a prior distribution D in domain randomization). For any distribution D over the state-adversarial uncertainty set P⇡✏ , there must be an ↵ 2 [0, 1] such that EP⇠D[J(⇡|P )] = J(⇡|P⇡,↵✏ ). The proof is in Appendix A.4. It is worth noting that different values of ↵ represent different prior assumptions. For example, ↵ = 1 implies that the prior probability of nominal MDP is 1, whereas ↵ = 0 indicates that the prior probability of the worst-case MDP is 1. In other words, we can control the value of ↵ to represent different distributions D and train the policies under various environments. To achieve this goal, we quantify the gap between the average performance EP⇠D[J(⇡̃|P )] and the worst case performance J(⇡̃|P⇡✏ ) when updating the current policy ⇡ to a new policy ⇡̃, and then apply an optimization technique to maximize both of them. One naı̈ve bound is as follows. Theorem 1 (A naı̈ve connection between the average-case and the worst-case returns). Given a nominal MDP with state adversaries, when updating the current policy ⇡ to a new policy ⇡̃, the following bound holds (Jiang et al., 2021): J(⇡̃|P⇡✏ ) EP⇠D[J(⇡̃|P )] 2Rmax EP⇠D[dTV(P⇡✏ kP )] (1 )2 4Rmax dTV(⇡, ⇡̃) (1 )2 , (7) where Rmax is the maximum reward, dTV (⇡, ⇡̃) indicates the total variation divergence between ⇡ and ⇡̃, and P⇡✏ is the worst state-adversarial transition kernels. Theorem 1 indicates that the gap between the average- and the worst- case performance can be expressed using the MDP shift EP⇠D[dTV(P⇡✏ kP )] and the policy evolution dTV (⇡, ⇡̃). The proof is in Appendix A.5. Note that the bound in Theorem 1 is loose because the value on the right hand side (RHS) of Equation 7 can be tiny. Specifically, the transition kernel probability shift EP⇠D[dTV(P⇡✏ kP )] is multiplied by the total maximum return Rmax1 , and the additional denominator 1 makes the value even smaller since is usually set to 0.99 in RL applications. As a result, the bound can be meaningless unless the worst-case MDP P⇡✏ is very close to the average MDP. Since state perturbation only perturbs states to nearby states, we consider the smoothness of the reward function and transition property to build a tight connection between the average-case and the worst-case returns. Specifically, Lipschitz continuity in reward function has been widely used in the theory of RL (Fehr et al., 2018; Asadi et al., 2018; Ling et al., 2016). The smoothness of the transition kernel also holds in most of the environments (Shen et al., 2020; Lakshmanan et al., 2015). For example, in grid-world, the next state must be adjacent to the current state; and in MuJoCo, the poses of consecutive periods are similar, no matter what the state-action pairs are considered. Formally, we define this smoothness property of transition kernels as: Definition 4 ( -Smooth Transition Kernel in State). Let P be a transition kernel and be a positive constant. 
P is a σ-smooth transition kernel in state if
‖s − s′‖ ≤ σ, (8)
for all a and for all s, s′ with P(s′|s, a) > 0. With the assumption of Lipschitz continuity of the reward function and smoothness of the transition kernel, we arrive at the following bound:
Theorem 2 (Connecting Worst-Case and Average-Case Returns). Given a nominal MDP with two properties: (1) the reward function of the corresponding Markov Reward Process (MRP) with respect to any policy is an L_r-Lipschitz function; (2) the nominal transition kernel P_0 has the σ-smooth transition property, where ‖s − s′‖₂ ≤ σ, ∀a and ∀s, s′ with P_0(s′|s, a) > 0. Then, after updating the current policy π to a new policy π̃, the following bound holds:
J(π̃|P^π_ε) ≥ J(π̃|P^{π,α}_ε) − 4γ(ε + σ)L_r α / (1 − γ)³ − 4(γ(ε + σ)L_r + (1 − γ)²R_max) d_TV(π, π̃) / (1 − γ)³, (9)
where d_TV(π, π̃) is the total variation divergence between π and π̃, P^{π,α}_ε is a relaxed state-adversarial transition kernel, and P^π_ε is a worst-case state-adversarial transition kernel.
The proof is provided in Appendix A.6. Notably, Theorem 2 holds for any relaxation parameter α ∈ [0, 1]. We now briefly discuss the technical challenges in the proof: (1) Propagation of state perturbations across time: the main difficulty lies in the fact that the difference of trajectories under different MDPs would increase in a rather nonlinear and complex manner as time evolves. (2) Quantifying the difference in rewards among trajectories generated under different transition kernels: to measure the difference in rewards under different MDPs, it is necessary to consider not only the probability difference at time t but also the difference in rewards at different states. Despite the above challenges, our proof uses the finding that the difference in the probability of a state under the two MDPs P^π_ε and P^{π,α}_ε at time step t can be quantified as α multiplied by a time-dependent factor lying in [0, 1]. Then, under the smoothness conditions of the reward function and the transition matrix, we are able to characterize a tight bound between the average-case and the worst-case performance.
The intuition of Theorem 2 can be expressed using the terms on the RHS of Equation 9. The first term is the average performance of all MDPs in the uncertainty set. The second term penalizes a large value of α because it implies that the relaxed MDP is close to the nominal environment. In other words, we expect the average-case performance to be high while pushing the uncertainty set close to the worst-case MDP. Finally, the third term prevents a significant update in a single step by reducing the total variation divergence d_TV(π, π̃).
4.4 ONLINE ADAPTATION OF THE RELAXATION PARAMETER
We leverage Theorem 2 to address both the average-case and the worst-case performance. Specifically, we present a bi-level approach to maximize the lower bound of the worst-case performance (i.e., the RHS of Theorem 2), since the unknowns α and π are correlated. The two tasks are optimized alternately and iteratively. Details are as follows:
• Lower-level task for average-case return: On the lower level, we improve the policy by optimizing the objective J(π_{θ_t} | P^{π_{θ_{t−1}}, α_t}_ε) under a fixed relaxation parameter α_t. This can be done by using any off-the-shelf RL algorithm (e.g., PPO with a clipped objective).
• Upper-level task for worst-case return: On the upper level, we design a meta objective J_meta(α_t) to represent the lower bound of the worst-case performance (the RHS of Equation 9). Hence, the task aims to find a relaxation parameter α_t that can maximize J_meta(α_t).
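The pieces introduced so far (the value-guided FGSM perturbation of Remark 1, the α-relaxed transition kernel of Equation 6, and the two-level objective above) fit together in a short training loop. The following is a minimal sketch under stated assumptions, not the authors' released implementation: collect_rollout, ppo_update, and meta_objective are hypothetical placeholders supplied by the caller, and a Gym-style env.step interface is assumed.

```python
import random
import torch

def fgsm_state_attack(value_net, state, eps):
    """Remark 1: push a state toward its lowest-value neighbor inside an
    l_inf ball of radius eps with one signed gradient step on V."""
    s = state.clone().detach().requires_grad_(True)
    v = value_net(s).sum()
    grad = torch.autograd.grad(v, s)[0]
    return (state - eps * grad.sign()).detach()

def relaxed_adversarial_step(env, value_net, action, eps, alpha):
    """Equation 6: keep the nominal next state with probability alpha,
    otherwise replace it with its FGSM-perturbed (low-value) counterpart."""
    next_state, reward, done, info = env.step(action)
    s = torch.as_tensor(next_state, dtype=torch.float32)
    if random.random() >= alpha:
        s = fgsm_state_attack(value_net, s, eps)
    return s.numpy(), reward, done, info

def train_rappo(collect_rollout, ppo_update, meta_objective,
                policy, value_net, eps, iters, alpha_lr=1e-3):
    """Bi-level loop of Section 4.4: a PPO step under the relaxed kernel,
    then one SGD step on alpha against the lower bound of Equation 9.
    collect_rollout is expected to gather trajectories by stepping the
    environment with relaxed_adversarial_step."""
    raw_alpha = torch.tensor(0.0, requires_grad=True)   # alpha = sigmoid(raw_alpha)
    alpha_opt = torch.optim.SGD([raw_alpha], lr=alpha_lr)
    for _ in range(iters):
        alpha = float(torch.sigmoid(raw_alpha))
        batch = collect_rollout(policy, value_net, eps, alpha)
        ppo_update(policy, value_net, batch)             # lower level: average-case return
        fresh = collect_rollout(policy, value_net, eps, alpha)
        meta_loss = -meta_objective(torch.sigmoid(raw_alpha), fresh)  # upper level: Eq. 9 RHS
        alpha_opt.zero_grad()
        meta_loss.backward()
        alpha_opt.step()
    return policy
```

The key design choice mirrored here is that α is the only upper-level variable: it is squashed into [0, 1] and updated by a single SGD step per iteration on a fresh batch, matching the online adaptation described above.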
To enable a stable training, we iteratively update ↵t by applying the online cross-validation algorithm (Sutton, 1992). Both the lower and upper level tasks aim to increase the lower bound of the worst-case performance J(⇡✓t |P ⇡✓t 1 ✏ ) (Equation 9). In the lower-level, a constant relaxation parameter ↵t represents a specific distribution D. It seeks to maximize the average return over all environments in the uncertainty set following distribution D. In the upper-level, the optimization adjusts ↵ to maximize this lower bound. On one hand, increasing ↵t improves the average performance J(⇡✓t |P ⇡✓t 1 ,↵t ✏ ) since the average-case moves toward a nominal environment, yet the price is increasing the MDP shift (i.e., the second term of RHS in Equation 9). On the other hand, decreasing ↵ changes the performance and the penalty oppositely. Since ⇡ is weak initially and its performance gradually improves, the meta objective optimization tends to decrease and then increase ↵ during training. Algorithm 1 illustrates our implementation. We first update the policy ⇡✓t to maximize the averagecase return J(⇡✓t |P ⇡✓t 1 ,↵t ✏ ) using the proximal policy optimization (PPO). Afterward, we update the relaxation parameter ↵ to ensure that the worst-case return is higher than a specific bound (Equation 9). Note that samples used in the two steps are different (Lines 3 and 6 of Algorithm 1) because the meta objective optimization is an online method. In addition, we chose PPO as a base algorithm since it prevents the model from being updated significantly in a single step. It helps to control the penalty term dTV (⇡, ⇡̃) in Theorem 2. The implementation details are provided in Appendix A.7. Algorithm 1: Relaxed State-Adversarial Policy Optimization Input :MDP (S,A, P0, r, ), Objective function L, step size parameter ⌘, number of iterations T , P0 is the nominal transition kernel, ✏-Neighborhood 1 Initialize the policy ⇡✓0 for t = 0, . . . , T 1 do 2 Sample the tuple {si, ai, ri, s0i} Tupd i=1, where a 0 i ⇠ ⇡✓t(·|s0i), and s0i ⇠ P0(·|si, ai) 3 Evaluate J(⇡✓t |P ⇡✓t 1 ,↵t ✏ ) 4 Update the policy to ⇡✓t+1 by applying multi-step SGD to the objective function as PPO 5 Sample the tuple {si, ai, ri, s0i} T 0upd i=1, where a 0 i ⇠ ⇡✓t+1(·|s0i), and s0i ⇠ P0(·|si, ai) 6 Update the relaxation parameter to ↵t+1 via one SGD update with respect to the meta-objective 7 end 5 EXPERIMENTAL RESULTS AND EVALUATIONS We conducted two experiments on Mujoco (Todorov et al., 2012) to evaluate the performance of our relaxed state adversarial policy optimization (RAPPO). All the baselines and our method were implemented on the PPO (Schulman et al., 2017), and the default training parameters were used. In addition, the results were averaged from five different runs/seeds. Robustness against Environmental Adversaries. We compared our RAPPO with the latest DR method, MRPO (Jiang et al., 2021), to evaluate its robustness against the uncertainty of environmental parameters1. Agents trained using the two methods were evaluated in the environments, in which the size and gravity were drifted in the range of 0.6 - 1.4. To simulate the situation that domain knowledge is unavailable, during training, MRPO perturbed mass and friction in the range of 0.8 - 1.2, and our RAPPO attacked the states by its value function. Figure 2 shows the subtractions of the rewards of the two methods. As can be seen, our RAPPO outperformed MRPO since state adversaries were more general than environmental adversaries. 
Agents trained by MRPO could perform poorly when the perturbations in the training and testing environments were different. Robustness Against States Adversaries. We compared our RAPPO with SCPPO (Kuang et al., 2021) to evaluate its robustness against state adversaries. Both of the methods perturb states to improve agents’ robustness. We also included vanilla PPO in the experiment because it is the base algorithm of RAPPO and SCPPO. To achieve a fair comparison, the parameters used in RAPPO and SCPPO were the same. Specifically, we set ✏ to 0.015, 0.002, 0.03, 0.001, and 0.005 to the environments of HalfCheetah-v2, Hopper-v2, Ant-v2, Walker-v2, and Humanoid2d-v2, respectively. The parameters were chosen according to the variance of actions in the environments. 1We obtained the official implementation of MRPO from http://proceedings.mlr.press/v139/jiang21c.html and used their default parameter setting. Table 1 shows the testing results. We attacked the agents using their respective value functions under multiple strengths. Specifically, we repeated the experiments from 5 different seeds and generated 50 trajectories for each seed from different initial states for evaluation. The means and standard deviations of the rewards were reported. Clearly, the results fulfilled Lemma 1, where agents’ performance decreased as the strength of attack increased. In addition, our RAPPO was competitive to PPO and SCPPO in nominal environments, and its performance decreased the slowest as the strength of attack increased. It deserves noting that the attacks in the last two columns of Table 1 were stronger than that of the worst-case. Our RAPPO performed the best in the environments. Extending SAPPO Using Relaxed State Adversaries. While our RAPPO successfully improves the robustness of agents against state adversaries, a classical method, SAPPO (Zhang et al., 2020), can help agents against the perturbation of state observations. We thus extended SAPPO by adopting our relaxed state adversarial attacks during training and evaluated its effectiveness. Similarly, we compared the methods on the trajectories of 5 seeds and 50 initial states. Table 2 shows the results. As indicated, the extended RA SAPPO outperformed SAPPO in most of the environments, particularly under strong attacks. Steady Improvements of the Average and Worst Case Environments. We apply a bi-level approach to optimize the average and worst-case environments during training. To verify the feasibility of this approach, we evaluated the agents’ performance under these two cases during training. To determine the worst-case result, we generated 50 trajectories from different initial states, perturbed states with the same strength as the training ✏, and then averaged the rewards. In contrast, the average-case result was determined from 50 initial states and 10 different perturbation strengths, which were uniformly distributed between 0 and ✏. In total, the rewards of 50 ⇥ 10 trajectories were averaged. Figure 3 shows that our RAPPO can steadily improve the average-case performance without sacrificing the worst-case performance. Note that the high variance of the average-case rewards is reasonable because of different adversarial strengths. The value of the relaxation parameter ↵. Our meta-objective optimization determines the relaxation parameter ↵ (Equation 6) to control the strengths of state adversaries during training. While ↵ is unknown, an intuitive idea is to consider ↵ a hyper-parameter and let users specify the value. 
However, we point out that the value of α should vary at different training stages since agents are weak initially and can perform well after training. To verify that a dynamic α is superior to a constant α (i.e., RAPPO-C), we evaluated the performance of agents under state-perturbed environments. In the experiments, we set α = 0.5 for RAPPO-C since it is in the middle of the nominal and worst-case environments. The remaining parameters between the methods were exactly the same. As indicated in Table 1, RAPPO clearly outperformed RAPPO-C. We also refer readers to Appendix A.8 for the dynamics of α during training.
6 CONCLUSIONS
We have presented a relaxed state-adversarial policy optimization to improve the robustness of agents against the uncertainty of environments. Compared to the methods in DR, we perturbed states using an adversarial attack so as to decouple randomization from simulators. Neither prior knowledge for selecting environmental parameters nor a prior assumption on the parameter distribution is needed. In addition, we introduced a relaxation strategy to tackle the over-conservatism problem caused by state-adversarial attacks. Our policy optimization maximizes rewards in the average case while holding the lower-bound rewards in the worst-case environments simultaneously. Experimental results and theoretical proofs demonstrate the effectiveness of our method.
Limitations and Future Work. Our relaxation method is state-independent, in which the value of α is adjusted according to the overall performance of the policy. Since the degree of difficulty varies from state to state, it will be interesting to investigate a state-dependent relaxation method. In addition, we currently assume that each dimension of the states is equally important, which may not be the case. We will also explore the weight of each dimension when perturbing states in the future.
1. What is the focus and contribution of the paper regarding robust RL? 2. What are the strengths of the proposed method, particularly in its theoretical foundation? 3. What are the weaknesses of the paper, especially regarding experimental results? 4. Do you have any concerns or suggestions regarding the schedules for a dynamic alpha? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper To overcome some of the limitations and assumptions required for domain randomization (knowledge of underlying simulator parameters and parameter distributions), the authors tackle the problem of robust RL through adversarial state perturbations instead. However, rather than only accounting for the worst-case perturbations, which can lead to overly conservative agents, the proposed method ‘Relaxed State-Adversarial Algorithm’ simultaneously maximizes both the worst-case and average-case policy performance with the hope that additionally optimizing for average-case performance will prevent overly-conservative behaviours. The authors present a theoretical derivation of their method and evaluate it empirically against relevant prior work. Strengths And Weaknesses Strengths Very clearly written and well motivated, was a pleasure to read. Motivation behind the method and theoretical insights seem well constructed. Weaknesses The high variance on the experimental results raise some questions: within the error bounds, RAPPO is often comparable with the baselines (e.g. for Humanoid and Ant). To make the improvements more clear, I would recommend either running more experiments and/or only bolding the results that are significantly the best performing. Questions Have the authors considered other schedules for a dynamic alpha? The authors compare against a constant alpha, but I would be curious if a linearly increasing schedule is also sufficient and maybe avoids some of the complexities of learning alpha. It looks like from Figure 5 in the Appendix that alpha can vary quite greatly across seeds, suggesting that it may be difficult to learn. It would also be interesting to see how the different alpha dynamics corresponds to different policy performance. Additional Feedback / Minor Notes Formatting in V P π definition at the bottom of page 3 — sum over a t instead of a t Use of ϵ for FGSM in Remark 1 is slightly confusing with use of ϵ as perturbation radius in Remark 2, maybe switch to a different symbol 'Upper-lever' instead of upper-level in 4th line on page 7 Figure 2 caption — last sentence remove ‘the more’ State meta-objective in algorithm for additional clarity Clarity, Quality, Novelty And Reproducibility The paper is well written and clear. While aspects of this work exist in prior work, the authors present a novel algorithm and compare against prior related methods empirically and show connections to prior work theoretically.
ICLR
Title Identifying through Flows for Recovering Latent Representations Abstract Identifiability, or recovery of the true latent representations from which the observed data originates, is de facto a fundamental goal of representation learning. Yet, most deep generative models do not address the question of identifiability, and thus fail to deliver on the promise of the recovery of the true latent sources that generate the observations. Recent work proposed identifiable generative modelling using variational autoencoders (iVAE) with a theory of identifiability. Due to the intractability of KL divergence between the variational approximate posterior and the true posterior, however, iVAE has to maximize the evidence lower bound (ELBO) of the marginal likelihood, leading to suboptimal solutions in both theory and practice. In contrast, we propose an identifiable framework for estimating latent representations using a flow-based model (iFlow). Our approach directly maximizes the marginal likelihood, allowing for theoretical guarantees on identifiability, thereby dispensing with variational approximations. We derive its optimization objective in analytical form, making it possible to train iFlow in an end-to-end manner. Simulations on synthetic data validate the correctness and effectiveness of our proposed method and demonstrate its practical advantages over other existing methods.
1 INTRODUCTION A fundamental question in representation learning relates to identifiability: under which condition is it possible to recover the true latent representations that generate the observed data? Most existing likelihood-based approaches for deep generative modelling, such as Variational Autoencoders (VAE) (Kingma & Welling, 2013) and flow-based models (Kobyzev et al., 2019), focus on performing latent-variable inference and efficient data synthesis, but do not address the question of identifiability, i.e. recovering the true latent representations. The question of identifiability is closely related to the goal of learning disentangled representations (Bengio et al., 2013).
While there is no canonical definition for this term, we adopt the one where individual latent units are sensitive to changes in single generative factors while being relatively invariant to nuisance factors (Bengio et al., 2013). A good representation for human faces, for example, should encompass different latent factors that separately encode different attributes including gender, hair color, facial expression, etc. By aiming to recover the true latent representation, identifiable models also allow for principled disentanglement; this suggests that rather than being entangled in disentanglement learning in a completely unsupervised manner, we go a step further towards identifiability, since existing literature on disentangled representation learning, such as β-VAE (Higgins et al., 2017), β-TCVAE (Chen et al., 2018), DIP-VAE (Kumar et al., 2017) and FactorVAE (Kim & Mnih, 2018), are neither general endeavors to achieve identifiability; nor do they provide theoretical guarantees on recovering the true latent sources. Recently, Khemakhem et al. (2019) introduced a theory of identifiability for deep generative models, based upon which the authors proposed an identifiable variant of VAEs called iVAE, to learn the distribution over latent variables in an identifiable manner. However, the downside of learning such an identifiable model within the VAE framework lies in the intractability of KL divergence between the approximate posterior and the true posterior. Consequently, in both theory and practice, iVAE inevitably leads to a suboptimal solution, which renders the learned model far less identifiable. In this paper, aiming at avoiding such a pitfall, we propose to learn an identifiable generative model through flows (short for normalizing flows (Tabak et al., 2010; Rezende & Mohamed, 2015)). A normalizing flow is a transformation of a simple probability distribution (e.g. a standard normal) into a more complex probability distribution by a composition of a series of invertible and differentiable mappings (Kobyzev et al., 2019). Hence, they can be exploited to effectively model complex probability distributions. In contrast to VAEs relying on variational approximations, flow-based models allow for latent-variable inference and likelihood evaluation in an exact and efficient manner, making themselves a perfect choice for achieving identifiability. To this end, unifying identifiablity with flows, we propose iFlow, a framework for deep latentvariable models which allows for recovery of the true latent representations from which the observed data originates. We demonstrate that our flow-based model makes it possible to directly maximize the conditional marginal likelihood and thus achieves identifiability in a rigorous manner. We provide theoretical guarantees on the recovery of the true latent representations, and show experiments on synthetic data to validate the theoretical and practical advantages of our proposed formulation over prior approaches. 2 BACKGROUND An enduring demand in statistical machine learning is to develop probabilistic models that explain the generative process that produce observations. Oftentimes, this entails estimation of density that can be arbitrarily complex. As one of the promising tools, Normalizing Flows are a family of generative models that fit such a density of exquisite complexity by pushing an initial density (base distribution) through a series of transformations. 
Formally, let x ∈ X ⊆ Rn be an observed random variable, and z ∈ Z ⊆ Rn a latent variable obeying a base distribution pZ(z). A normalizing flow f is a diffeomorphism (i.e an invertible differentiable transformation with differentiable inverse) between two topologically equivalent spaces X and Z such that x = f(z). Under these conditions, the density of x is well-defined and can be obtained by using the change of variable formula: pX(x) = pZ(h(x)) ∣∣∣∣det(∂h∂x )∣∣∣∣ = pZ(z)∣∣∣∣det(∂f∂z )∣∣∣∣−1, (1) where h is the inverse of f . To approximate an arbitrarily complex nonlinear invertible bijection, one can compose a series of such functions, since the composition of invertible functions is also invertible, and its Jacobian determinant is the product of the individual functions’ Jacobian determinants. Denote φ as the diffeomorphism’s learnable parameters. Optimization can proceed as follows by maximizing log-likelihood for the density estimation model: φ∗ = arg max φ Ex [ log pZ(h(x;φ)) + log ∣∣∣∣det(∂h(x;φ)∂x )∣∣∣∣]. (2) 3 RELATED WORK Nonlinear ICA Nonlinear independent component analysis (ICA) is one of the biggest problems remaining unresolved in unsupervised learning. Given the observations alone, it aims to recover the inverse mixing function as well as their corresponding independent sources. In contrast with the linear case, research on nonlinear ICA is hampered by the fact that without auxiliary variables, recovering the independent latents is impossible (Hyvärinen & Pajunen, 1999). Similar impossibility result can be found in (Locatello et al., 2018). Fortunately, by exploiting additional temporal structure on the sources, recent work (Hyvarinen & Morioka, 2016; Hyvarinen et al., 2018) established the first identifiability results for deep latent-variable models. These approaches, however, do not explicitly learn the data distribution; nor are they capable of generating “fake” data. Khemakhem et al. (2019) bridged this gap by establishing a principled connection between VAEs and an identifiable model for nonlinear ICA. Their method with an identifiable VAE (known as iVAE) approximates the true joint distribution over observed and latent variables under mild conditions. Nevertheless, due to the intractablity of KL divergence between variational approximate posterior and the true posterior, iVAE maximizes the evidence lower bound on the data log-likelihood, which in both theory and practice acts as a detriment to the achievement of identifiability. We instead propose identifying through flows (normalizing flow), which maximizes the likelihood in a straightforward way, providing theoretical guarantees and practical advantages for identifiability. Normalizing Flows Normalizing Flows are a family of generative approaches that fits a data distribution by learning a bijection from observations to latent codes, and vice versa. Compared with VAEs which learn a posterior approximation to the true posterior, normalizing flows directly deal with marginal likelihood with exact inference while maintaining efficient sampling. Formally, a normalizing flow is a transform of a tractable probability distribution into a complex distribution by compositing a sequence of invertible and differentiable mappings. In practice, the challenge lies in designing a normalizing flow that satisfies the following conditions: (1) it should be bijective and thus invertible; (2) it is efficient to compute its inverse and its Jacobian determinant while ensuring sufficient capabilities. 
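To make the two requirements just listed and the objective in Equations 1 and 2 concrete, the following is a toy, hypothetical sketch: a single diagonal affine map stands in for a deep flow, so its inverse and Jacobian determinant are trivial, and the exact log-likelihood from the change-of-variables formula is maximized directly. It illustrates the training objective only and is not the architecture used later in the paper.

```python
import math
import torch

class AffineFlow(torch.nn.Module):
    """Toy invertible map h(x) = exp(log_a) * x + b with a diagonal Jacobian."""
    def __init__(self, dim):
        super().__init__()
        self.log_a = torch.nn.Parameter(torch.zeros(dim))
        self.b = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        z = torch.exp(self.log_a) * x + self.b      # z = h(x)
        log_det = self.log_a.sum()                  # log |det(dh/dx)|, constant in x here
        return z, log_det

def flow_nll(flow, x):
    """Negative of the objective in Equation 2:
    log p_X(x) = log p_Z(h(x)) + log |det(dh/dx)| with a standard normal p_Z."""
    z, log_det = flow(x)
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return -(log_pz + log_det).mean()

# Fit the toy flow to synthetic data by exact maximum likelihood.
x = torch.randn(512, 2) * 3.0 + 1.0
flow = AffineFlow(dim=2)
opt = torch.optim.Adam(flow.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = flow_nll(flow, x)
    loss.backward()
    opt.step()
```

Deeper flows only change the forward method; the objective in flow_nll stays the same, which is what makes exact likelihood training straightforward for this model family.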
The framework of normalizing flows was first defined in (Tabak et al., 2010) and (Tabak & Turner, 2013) and then explored for density estimation in (Rippel & Adams, 2013). Rezende & Mohamed (2015) applied normalizing flows to variational inference by introducing planar and radial flows. Since then, there had been abundant literature towards expanding this family. Kingma & Dhariwal (2018) parameterizes linear flows with the LU factorization and “1 × 1” convolutions for the sake of efficient determinant calculation and invertibility of convolution operations. Despite their limits in expressive capabilities, linear flows act as essential building blocks of affine coupling flows as in (Dinh et al., 2014; 2016). Kingma et al. (2016) applied autoregressive models as a form of normalizing flows, which exhibit strong expressiveness in modelling statistical dependencies among variables. However, the forwarding operation of autoregressive models is inherently sequential, which makes it inefficient for training. Splines have also been used as building blocks of normalizing flows: Müller et al. (2018) suggested modelling a linear and quadratic spline as the integral of a univariate monotonic function for flow construction. Durkan et al. (2019a) proposed a natural extension to the framework of neural importance sampling and also suggested modelling a coupling layer as a monotonic rational-quadratic spine (Durkan et al., 2019b), which can be implemented either with a coupling architecture RQ-NSF(C) or with autoregressive architecture RQ-NSF(AR). The expressive capabilities of normalizing flows and their theoretical guarantee of invertibility make them a natural choice for recovering the true mixing mapping from sources to observations, and thus identifiability can be rigorously achieved. In our work, we show that by aligning normalizing flows with an existing identifiability theory, it is desirable to learn an identifiable latent-variable model with theoretical guarantees of identifiability. 4 IDENTIFIABLE FLOW In this section, we first introduce the identifiable latent-variable family and the theory of identifiability (Khemakhem et al., 2019) that makes it possible to recover the joint distribution between observations and latent variables. Then we derive our model, iFlow, and its optimization objective which admits principled disentanglement with theoretical guarantees of identifiability. 4.1 IDENTIFIABLE LATENT-VARIABLE FAMILY The primary assumption leading to identifiability is a conditionally factorized prior distribution over the latent variables, pθ(z|u), where u is an auxiliary variable, which can be the time index in a time series, categorical label, or an additionally observed variable (Khemakhem et al., 2019). Formally, let x ∈ X ⊆ Rn and u ∈ U ⊆ Rm be two observed random variables, and z ∈ Z ⊆ Rn a latent variable that is the source of x. This implies that there can be an arbitrarily complex nonlinear mapping f : Z → X . Assuming that f is a bijection, it is desirable to recover its inverse by approximating using a family of invertible mappings hφ parameterized by φ. The statistical dependencies among these random variables are defined by a Bayesian net: u → z → x, from which the following conditional generative model can be derived: p(x, z|u; Θ) = p(x|z;φ)p(z|u;T,λ), (3) where p(x|z;φ) def= p (x−h−1(z)) and p(z|u;T,λ) is assumed to be a factorized exponential family distribution conditioned upon u. 
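For the special case used later in the paper (k = 2, Q_i(z_i) = 1, and sufficient statistics (z_i², z_i)), this conditional exponential-family prior has a simple closed-form log-density; its normalizer is the Gaussian integral derived in Equation 14 below. The sketch that follows is an illustrative stand-alone check rather than the authors' code, and in practice the natural parameters (ξ, η) would come from the learned network λ_θ(u).

```python
import numpy as np

def conditional_prior_logpdf(z, xi, eta):
    """log p(z | u) for the factorized exponential family of Equation 4 with
    k = 2, Q_i(z_i) = 1, T_i(z_i) = (z_i^2, z_i) and natural parameters
    (xi_i, eta_i) = lambda_i(u); each xi_i must be strictly negative."""
    # log Z_i(u) = log sqrt(-pi / xi_i) - eta_i^2 / (4 xi_i)   (closed-form normalizer)
    log_Z = 0.5 * np.log(-np.pi / xi) - eta ** 2 / (4.0 * xi)
    return np.sum(xi * z ** 2 + eta * z - log_Z, axis=-1)

# Sanity check against the equivalent Gaussian N(mu, var),
# for which xi = -1/(2 var) and eta = mu / var.
mu, var = 0.3, 1.7
xi, eta = np.array([-1 / (2 * var)]), np.array([mu / var])
z = np.array([0.8])
gauss = -0.5 * np.log(2 * np.pi * var) - (z - mu) ** 2 / (2 * var)
assert np.allclose(conditional_prior_logpdf(z, xi, eta), gauss)
```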
Note that this density assumption is valid in most cases, since the exponential families have universal approximation capabilities (Sriperumbudur et al., 2017). Specifically, the probability density function is given by pT,λ(z|u) = n∏ i=1 pi(zi|u) = ∏ i Qi(zi) Zi(u) exp [ k∑ j=1 Ti,j(zi)λi,j(u) ] , (4) where Qi is the base measure, Zi(u) is the normalizing constant, Ti,j’s are the components of the sufficient statistic and λi,j(u) the natural parameters, critically depending on u. Note that k indicates the maximum order of statistics under consideration. 4.2 IDENTIFIABILITY THEORY The objective of identifiability is to learn a model that is subject to: for each quadruplet (Θ,Θ′,x, z), pΘ(x) = pΘ′(x) =⇒ pΘ(x, z) = pΘ′(x, z), (5) where Θ and Θ′ are two different choices of model parameters that imply the same marginal density (Khemakhem et al., 2019). One possible avenue towards this objective is to introduce the definition of identifiability up to equivalence class: Definition 4.1. (Identifiability up to equivalence class) Let ∼ be an equivalence relation on Θ. A model defined by p(x, z; Θ) = p(x|z; Θ)p(z; Θ) is said to be identifiable up to ∼ if pΘ(x) = pΘ′(x) =⇒ Θ ∼ Θ′, (6) where such an equivalence relation in the identifiable latent-variable family is defined as follows: Proposition 4.1. (φ, T̃, λ̃) and (φ′, T̃′, λ̃′) are of the same equivalence class if and only if there exist A and c such that ∀ x ∈ X , T(hφ(x)) = AT ′(hφ′(x)) + c, (7) where T̃(z) = (Q1(z1), ..., Qn(zn), T1,1(z1), ..., Tn,k(zn)), λ̃(u) = (Z1(u), ..., Zn(u), λ1,1(u), ..., λn,k(u)). (8) One can easily verify that∼ is an equivalence relation by showing its reflexivity, symmetry and transitivity. Then, the identifiability of the latent-variable family is given by Theorem 4.1 (Khemakhem et al., 2019). Theorem 4.1. LetZ = Z1×· · ·×Zn and suppose the following holds: (i) The set {x ∈ X |Ψ (x) = 0} has measure zero, where Ψ is the characteristic function of the density p ; (ii) The sufficient statistics Ti,j in (2) are differentiable almost everywhere and ∂Ti,j/∂z 6= 0 almost surely for z ∈ Zi and for all i ∈ {1, ..., n} and j ∈ {1, ..., k}. (iii) There exist (nk+1) distinct priors u0, ...,unk such that the matrix L = λ1,1(u 1)− λ1,1(u0) · · · λ1,1(unk)− λ1,1(u0) ... . . . ... λn,k(u 1)− λn,k(u0) · · · λn,k(unk)− λn,k(u0) (9) of size nk × nk is invertible. Then, the parameters (φ, T̃, λ̃) are ∼-identifiable. 4.3 OPTIMIZATION OBJECTIVE OF IFLOW We propose identifying through flows (iFlow) for recovering latent representations. Our proposed model falls into the identifiable latent-variable family with = 0, that is, p (·) = δ(·), where δ is a point mass, i.e. Dirac measure. Note that assumption (i) in Theorem 4.1 holds true for iFlow. In stark contrast to iVAE which resorts to variational approximations and maximizes the evidence lower bound, iFlow directly maximizes the marginal likelihood conditioned on u: max Θ pX(x|u; Θ) = pZ(hφ(x)|u;θ) ∣∣∣∣det(∂hφ∂x )∣∣∣∣ , (10) where pZ(·|u) is modeled by a factorized exponential family distribution. Therefore, the log marginal likelihood is obtained: log pX(x|u; Θ) = n∑ i=1 ( logQi(zi)− logZi(u) + Ti(zi)Tλi(u) ) + log ∣∣∣∣det(∂hφ∂x )∣∣∣∣ , (11) where zi is the ith component of the source z = hφ(x), and T and λ are both n-by-k matrices. Here, hφ is a normalizing flow of any kind. For the sake of simplicity, we set Qi(zi) = 1 for all i’s and consider maximum order of sufficient statistics of zi’s up to 2, that is, k = 2. Hence, T and λ are given by T(z) = z21 z1 z22 z2 ... ... 
z2n zn and λ(u) = ξ1 η1 ξ2 η2 ... ... ξn ηn . (12) Therefore, the optimization objective is to minimize L(Θ) = E(x,u)∼pD [( n∑ i=1 logZi(u) ) − trace ( T(z)λ(u)T ) − log ∣∣∣∣det(∂hφ∂x )∣∣∣∣ ] , (13) where pD denotes the empirical distribution, and the first term in (13) is given by n∑ i=1 logZi(u) = log ∫ Rn ( n∏ i=1 Qi(zi) ) exp ( trace ( T(z)λ(u)T )) dz = log ∫ Rn exp ( n∑ i=1 ξiz 2 i + ηizi ) dz = log n∏ i=1 ∫ R exp (ξiz 2 i + ηizi)dzi = log n∏ i=1 (√ − π ξi ) exp ( − η 2 i 4ξi ) = n∑ i=1 ( log √ − π ξi − η 2 i 4ξi ) . (14) In practice, λ(u) can be parameterized by a multi-layer perceptron with learnable parameters θ, where λθ : Rm → R2n. Here,m is the dimension of the space in which u’s lies. Note that ξi should be strictly negative in order for the exponential family’s probability density function to be finite. Negative softplus nonlinearity can be exploited to force this constraint. Therefore, optimization proceeds by minimizing the following closed-form objective: min Θ L(Θ) = E(x,u)∼pD [ n∑ i=1 ( log √ − π ξi − η 2 i 4ξi ) − trace ( T(z)λθ(u) T ) − log ∣∣∣∣det(∂hφ∂x )∣∣∣∣ ] . (15) where Θ = {θ,φ}. 4.4 IDENTIFIABILITY OF IFLOW The identifiability of our proposed model, iFlow, is characterized by Theorem 4.2. Theorem 4.2. Minimizing LΘ with respect to Θ, in the limit of infinite data, learns a model that is ∼-identifiable. Proof. Minimizing LΘ with respect to Θ is equivalent to maximizing the log conditional likelihood, log pX(x|u; Θ). Given infinite amount of data, maximizing log pX(x|u; Θ) will give us the true marginal likelihood conditioned on u, that is, pX(x|u; Θ̂) = pX(x|u; Θ∗), where Θ̂ = arg maxΘ log pX(x|u; Θ) and Θ∗ is the true parameter. According to Theorem 4.1, we obtain that Θ̂ and Θ∗ are of the same equivalence class defined by ∼. Thus, according to Definition 4.1, the joint distribution parameterized by Θ is identifiable up to ∼. Consequently, Theorem 4.2 guarantees strong identifiability of our proposed generative model, iFlow. Note that unlike Theorem 3 in (Khemakhem et al., 2019), Theorem 4.2 makes no assumption that the family of approximate posterior distributions contains the true posterior. And we show in experiments that this assumption is unlikely to hold true empirically. 5 SIMULATIONS To evaluate our method, we run simulations on a synthetic dataset. This section will elaborate on the details of the generated dataset, implementation, evaluation metric and fair comparison with existing methods. 5.1 DATASET We generate a synthetic dataset where the sources are non-stationary Gaussian time-series, as described in (Khemakhem et al., 2019): the sources are divided into M segments of L samples each. The auxiliary variable u is set to be the segment index. For each segment, the conditional prior distribution is chosen from the exponential family (4), where k = 2, Qi(zi) = 1, and Ti,1(zi) = z2i , Ti,2(zi) = zi, and the true λi,j’s are randomly and independently generated across the segments and the components such that their variances obey a uniform distribution on [0.5, 3]. The sources to recover are mixed by an invertible multi-layer perceptron (MLP) whose weight matrices are ensured to be full rank. 5.2 IMPLEMENTATION DETAILS The mapping λθ that outputs the natural parameters of the conditional factorized exponential family distribution is parameterized by a multi-layer perceptron with the activation of the last layer being the softplus nonlinearity. 
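A minimal sketch of this objective is given below. It assumes the flow exposes a forward pass returning both z = h_φ(x) and the log-determinant of its Jacobian (as RQ-NSF implementations typically do) and that lambda_net plays the role of the MLP λ_θ just described; all names and interfaces are illustrative rather than the released implementation.

```python
import math
import torch
import torch.nn.functional as F

def natural_params(lambda_net, u):
    """lambda_theta(u) -> (xi, eta); a negative softplus keeps xi strictly negative."""
    raw_xi, eta = lambda_net(u).chunk(2, dim=-1)      # each of shape (batch, n)
    return -F.softplus(raw_xi), eta

def iflow_loss(flow, lambda_net, x, u):
    """Closed-form objective of Equation 15 (to be minimized)."""
    z, log_det = flow(x)                              # z = h_phi(x), log|det d h_phi / d x|
    xi, eta = natural_params(lambda_net, u)
    # sum_i log Z_i(u) = sum_i [ log sqrt(-pi/xi_i) - eta_i^2 / (4 xi_i) ]   (Eq. 14)
    log_Z = (0.5 * (math.log(math.pi) - torch.log(-xi)) - eta ** 2 / (4 * xi)).sum(-1)
    # trace(T(z) lambda(u)^T) = sum_i (xi_i z_i^2 + eta_i z_i)               (Eq. 12/13)
    dot = (xi * z ** 2 + eta * z).sum(-1)
    return (log_Z - dot - log_det).mean()
```

Under these assumptions, minimizing iflow_loss jointly over the flow parameters and lambda_net corresponds to the end-to-end training described in this section.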
Additionally, a negative activation is taken on the second-order natural parameters in order to ensure its finiteness. The bijection hφ is modeled by RQ-NSF(AR) (Durkan et al., 2019b) with the flow length of 10 and the bin 8, which gives rise to sufficient flexibility and expressiveness. For each training iteration, we use a mini-batch of size 64, and an Adam optimizer with learning rate chosen in {0.01, 0.001} to optimize the learning objective (15). 5.3 EVALUATION METRIC As a standard measure used in ICA, the mean correlation coefficient (MCC) between the original sources and the corresponding predicted latents is chosen to be the evaluation metric. A high MCC indicates the strong correlation between the identified latents recovered and the true sources. In experiments, we found that such a metric can be sensitive to the synthetic data generated by different random seeds. We argue that unless one specifies the overall generating procedure including random seeds in particular any comparison remains debatable. This is crucially important since most of the existing works failed to do so. Therefore, we run each simulation of different methods through seed 1 to seed 100 and report averaged MCCs with standard deviations, which makes the comparison fair and meaningful. 5.4 COMPARISON AND RESULTS We compare our model, iFlow, with iVAE. These two models are trained on the same synthetic dataset aforementioned, with M = 40, L = 1000, n = d = 5. For visualization, we also apply another setting with M = 40, L = 1000, n = d = 2. To evaluate iVAE’s identifying performance, we use the original implementation that is officially released with exactly the same settings as described in (Khemakhem et al., 2019) (cf. Appendix A.2). First, we demonstrate a visualization of identifiablity of these two models in a 2-D case (n = d = 2) as illustrated in Figure 1, in which we plot the original sources (latent), observations and the identified sources recovered by iFlow and iVAE, respectively. Segments are marked with different colors. Clearly, iFlow outperforms iVAE in identifying the original sources while preserving the original geometry of source manifold. It is evident that the latents recovered by iFlow bears much higher resemblance to the true latent sources than those by iVAE in the presence of some trivial indeterminacies of scaling, global sign and permutation of the original sources, which are inevitable even in some cases of linear ICA. This exhibits consistency with the definition of identifiability up to equivalence class that allows for existence of an affine transformation between sufficient statistics, as described in Proposition 4.1. As shown in Figure 1(a), 1(c), and 1(d), iVAE achieves inferior identifying performance in the sense that its estimated latents tend to retain the manifold of the observations. Notably, we also find that despite the relatively high MCC performance of iVAE in Figure 1(d), iFlow is much more likely to recover the true geometric manifold in which the latent sources lie. In Figure 1(b), iVAE’s recovered latents collapses in face of a highly nonlinearly mixing case, while iFlow still works well in identifying the true sources. Note that these are not rare occurrences. More visualization examples can be found in Appendix A.3. 
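The text does not spell out how the MCC of Section 5.3 is computed; a common recipe, used here as an assumption about the metric rather than a quote of the authors' evaluation code, takes absolute Pearson correlations between every source/latent pair and resolves the permutation indeterminacy with a linear assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mean_corr_coef(sources, latents):
    """Mean correlation coefficient between true sources (N, n) and recovered
    latents (N, n), maximized over one-to-one matchings of the components."""
    n = sources.shape[1]
    corr = np.corrcoef(sources.T, latents.T)[:n, n:]   # (n, n) cross-correlations
    cost = -np.abs(corr)                               # assignment maximizes |corr|
    rows, cols = linear_sum_assignment(cost)
    return np.abs(corr[rows, cols]).mean()

# Example: a permuted, sign-flipped, rescaled copy of the sources gives MCC = 1.
s = np.random.randn(1000, 3)
z = np.stack([-2.0 * s[:, 2], 0.5 * s[:, 0], s[:, 1]], axis=1)
print(mean_corr_coef(s, z))   # ~1.0
```

Because correlation is invariant to per-dimension scaling and sign flips, the trivial indeterminacies discussed in Section 5.4 do not lower the score.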
https://github.com/ilkhem/iVAE/ Second, regarding quantitative results as shown in Figure 2(a), our model, iFlow, consistently outperforms iVAE in MCC by a considerable margin across different random seeds under consideration while experiencing less uncertainty (standard deviation as indicated in the brackets). Moreover, Figure 2(b) also showcases that the energy value of iFlow is much higher than that of iVAE, which serves as evidence that the optimization of the evidence lower bound, as in iVAE, would lead to suboptimal identifiability. As is borne out empirically, the gap between the evidence lower bound and the conditional marginal likelihood is inevitably far from being negligible in practice. For clearer analysis, we also report the correlation coefficients for each source-latent pair in each dimension. As shown in Figure 3, iFlow exhibits much stronger correlation than does iVAE in each single dimension of the latent space. Finally, we investigate the impact of different choices of activation for generating natural parameters of the exponential family distribution (see Appendix A.1 for details). Note that all of these choices are valid since theoretically the natural parameters form a convex space. However, iFlow(Softplus) achieves the highest identifying performance, suggesting that the range of softplus allows for greater flexibility, which makes itself a good choice for natural parameter nonlinearity. 6 CONCLUSIONS Among the most significant goals of unsupervised learning is to learn the disentangled representations of observed data, or to identify original latent codes that generate observations (i.e. identifiability). Bridging the theoretical and practical gap of rigorous identifiability, we have proposed iFlow, which directly maximizes the marginal likelihood conditioned on auxiliary variables, establishing a natural framework for recovering original independent sources. In theory, our contribution provides a rigorous way to achieve identifiability and hence the recovery of the joint distribution between observed and latent variables that leads to principled disentanglement. Extensive empirical studies confirm our theory and showcase its practical advantages over previous methods. ACKNOWLEDGMENTS We thank Xiaoyi Yin for helpful initial discussions. This work is in loving memory of Kobe Bryant (1978-2020) ... A APPENDIX A.1 ABLATION STUDY ON ACTIVATIONS FOR NATURAL PARAMETERS Figure 4 demonstrates the comparison of MCC of iFlows implemented with different nonlinear activations for natural parameters and that of iVAE, in which relu+eps denotes the ReLU activation added by a small value (e.g. 1e-5) and sigmoid×5 denotes the Sigmoid activation multiplied by 5. A.2 IMPLEMENTATION DETAILS OF IVAE As stated in Section 5.4, to evaluate iVAE’s identifying performance, we use the original implementation that is officially released with the same settings as described in (Khemakhem et al., 2019). Specifically, in terms of hyperparameters of iVAE, the functional parameters of the decoder and the inference model, as well as the conditional prior are parameterized by MLPs, where the dimension of the hidden layers is chosen from {50, 100, 200}, the activation function is a leaky RELU or a leaky hyperbolic tangent, and the number of layers is chosen from {3, 4, 5, 6}. Here we report all averaged MCC scores of different implementations for iVAE as shown in Table 1. 
Table 1 indicates that adding more layers or more hidden neurons does not improve the MCC score, suggesting that limited expressive capability is not the culprit behind iVAE's inferior performance. Instead, we argue that assumption (i) of Theorem 3 in (Khemakhem et al., 2019) (i.e. that the family of approximate posterior distributions contains the true posterior) often fails or is hard to satisfy in practice, which is one of the major reasons for the inferior performance of iVAE. Additionally, Figure 2(b) demonstrates that the energy value of iFlow is much higher than that of iVAE, which provides evidence that optimizing the evidence lower bound, as in iVAE, leads to suboptimal identifiability. A.3 VISUALIZATION OF 2D CASES
1. What is the focus of the paper, and what are the key ideas proposed? 2. What are the strengths of the paper regarding its clarity, simplicity, and direct approach? 3. What are the weaknesses of the paper, particularly in terms of its experimental section and lack of representational aspects? 4. How does the reviewer assess the novelty of the paper compared to prior works, specifically Khemakhem et al. (2019)? 5. What are the concerns regarding the identifiability guarantee of the model, and how does the reviewer suggest addressing them? 6. How can the authors improve the paper to meet the standards of ICLR, specifically by providing more computational results?
Review
Review This paper is about learning an identifiable generative model, iFlow, that builds upon a recent result on nonlinear ICA. The key idea is providing side information to identify the latent representation, i.e., essentially a prior conditioned on extra information such as labels and restricting the mapping to flows for being able to compute the likelihood. As the loglikelihood of a flow model is readily available, a direct approach can be used for learning that optimizes both the prior and the observation model. The paper is very clear and very easy to follow. The idea is quite clear, and the direct approach is really attractive. Unfortunately, the experimental section is quite limited and does not fully study representational aspects. There is only an illustrative simulation on synthetic data, that in a sense verifies the theory. I was also not able to see the additional insight that the identifiability theory in 4.2 provides additional to Khemakhem et al. (2019). Please clarify. My main concern is that this model is actually a supervised model that learns a mapping from u to x via z. Hence, the theoretical ‘identifiability’ guarantee needs to be stated with some care as this depends on the choice of an arbitrary u. For example if we set x = u, will learning take place? Please comment. Overall, I like the approach but I am uncertain about the level of novelty. For such a paper, one should expect a much more involved computational study. In this respect, I feel that the paper could be accepted but it certainly feels as if it needs more computational results as otherwise the original contribution would be too incremental for ICLR standards. I am giving a provisional reject in the hope that the authors will provide convincing arguments about their original contributions for clarification.
ICLR
Title Identifying through Flows for Recovering Latent Representations Abstract Identifiability, or recovery of the true latent representations from which the observed data originates, is de facto a fundamental goal of representation learning. Yet, most deep generative models do not address the question of identifiability, and thus fail to deliver on the promise of the recovery of the true latent sources that generate the observations. Recent work proposed identifiable generative modelling using variational autoencoders (iVAE) with a theory of identifiability. Due to the intractablity of KL divergence between variational approximate posterior and the true posterior, however, iVAE has to maximize the evidence lower bound (ELBO) of the marginal likelihood, leading to suboptimal solutions in both theory and practice. In contrast, we propose an identifiable framework for estimating latent representations using a flow-based model (iFlow). Our approach directly maximizes the marginal likelihood, allowing for theoretical guarantees on identifiability, thereby dispensing with variational approximations. We derive its optimization objective in analytical form, making it possible to train iFlow in an end-to-end manner. Simulations on synthetic data validate the correctness and effectiveness of our proposed method and demonstrate its practical advantages over other existing methods. N/A Identifiability, or recovery of the true latent representations from which the observed data originates, is de facto a fundamental goal of representation learning. Yet, most deep generative models do not address the question of identifiability, and thus fail to deliver on the promise of the recovery of the true latent sources that generate the observations. Recent work proposed identifiable generative modelling using variational autoencoders (iVAE) with a theory of identifiability. Due to the intractablity of KL divergence between variational approximate posterior and the true posterior, however, iVAE has to maximize the evidence lower bound (ELBO) of the marginal likelihood, leading to suboptimal solutions in both theory and practice. In contrast, we propose an identifiable framework for estimating latent representations using a flow-based model (iFlow). Our approach directly maximizes the marginal likelihood, allowing for theoretical guarantees on identifiability, thereby dispensing with variational approximations. We derive its optimization objective in analytical form, making it possible to train iFlow in an end-to-end manner. Simulations on synthetic data validate the correctness and effectiveness of our proposed method and demonstrate its practical advantages over other existing methods. 1 INTRODUCTION A fundamental question in representation learning relates to identifiability: under which condition is it possible to recover the true latent representations that generate the observed data? Most existing likelihood-based approaches for deep generative modelling, such as Variational Autoencoders (VAE) (Kingma & Welling, 2013) and flow-based models (Kobyzev et al., 2019), focus on performing latent-variable inference and efficient data synthesis, but do not address the question of identifiability, i.e. recovering the true latent representations. The question of identifiability is closely related to the goal of learning disentangled representations (Bengio et al., 2013). 
While there is no canonical definition for this term, we adopt the one where individual latent units are sensitive to changes in single generative factors while being relatively invariant to nuisance factors (Bengio et al., 2013). A good representation for human faces, for example, should encompass different latent factors that separately encode different attributes including gender, hair color, facial expression, etc. By aiming to recover the true latent representation, identifiable models also allow for principled disentanglement; this suggests that rather than being entangled in disentanglement learning in a completely unsupervised manner, we go a step further towards identifiability, since existing literature on disentangled representation learning, such as β-VAE (Higgins et al., 2017), β-TCVAE (Chen et al., 2018), DIP-VAE (Kumar et al., 2017) and FactorVAE (Kim & Mnih, 2018), are neither general endeavors to achieve identifiability; nor do they provide theoretical guarantees on recovering the true latent sources. Recently, Khemakhem et al. (2019) introduced a theory of identifiability for deep generative models, based upon which the authors proposed an identifiable variant of VAEs called iVAE, to learn the distribution over latent variables in an identifiable manner. However, the downside of learning such an identifiable model within the VAE framework lies in the intractability of KL divergence between the approximate posterior and the true posterior. Consequently, in both theory and practice, iVAE inevitably leads to a suboptimal solution, which renders the learned model far less identifiable. In this paper, aiming at avoiding such a pitfall, we propose to learn an identifiable generative model through flows (short for normalizing flows (Tabak et al., 2010; Rezende & Mohamed, 2015)). A normalizing flow is a transformation of a simple probability distribution (e.g. a standard normal) into a more complex probability distribution by a composition of a series of invertible and differentiable mappings (Kobyzev et al., 2019). Hence, they can be exploited to effectively model complex probability distributions. In contrast to VAEs relying on variational approximations, flow-based models allow for latent-variable inference and likelihood evaluation in an exact and efficient manner, making themselves a perfect choice for achieving identifiability. To this end, unifying identifiablity with flows, we propose iFlow, a framework for deep latentvariable models which allows for recovery of the true latent representations from which the observed data originates. We demonstrate that our flow-based model makes it possible to directly maximize the conditional marginal likelihood and thus achieves identifiability in a rigorous manner. We provide theoretical guarantees on the recovery of the true latent representations, and show experiments on synthetic data to validate the theoretical and practical advantages of our proposed formulation over prior approaches. 2 BACKGROUND An enduring demand in statistical machine learning is to develop probabilistic models that explain the generative process that produce observations. Oftentimes, this entails estimation of density that can be arbitrarily complex. As one of the promising tools, Normalizing Flows are a family of generative models that fit such a density of exquisite complexity by pushing an initial density (base distribution) through a series of transformations. 
Formally, let x ∈ X ⊆ Rn be an observed random variable, and z ∈ Z ⊆ Rn a latent variable obeying a base distribution pZ(z). A normalizing flow f is a diffeomorphism (i.e an invertible differentiable transformation with differentiable inverse) between two topologically equivalent spaces X and Z such that x = f(z). Under these conditions, the density of x is well-defined and can be obtained by using the change of variable formula: pX(x) = pZ(h(x)) ∣∣∣∣det(∂h∂x )∣∣∣∣ = pZ(z)∣∣∣∣det(∂f∂z )∣∣∣∣−1, (1) where h is the inverse of f . To approximate an arbitrarily complex nonlinear invertible bijection, one can compose a series of such functions, since the composition of invertible functions is also invertible, and its Jacobian determinant is the product of the individual functions’ Jacobian determinants. Denote φ as the diffeomorphism’s learnable parameters. Optimization can proceed as follows by maximizing log-likelihood for the density estimation model: φ∗ = arg max φ Ex [ log pZ(h(x;φ)) + log ∣∣∣∣det(∂h(x;φ)∂x )∣∣∣∣]. (2) 3 RELATED WORK Nonlinear ICA Nonlinear independent component analysis (ICA) is one of the biggest problems remaining unresolved in unsupervised learning. Given the observations alone, it aims to recover the inverse mixing function as well as their corresponding independent sources. In contrast with the linear case, research on nonlinear ICA is hampered by the fact that without auxiliary variables, recovering the independent latents is impossible (Hyvärinen & Pajunen, 1999). Similar impossibility result can be found in (Locatello et al., 2018). Fortunately, by exploiting additional temporal structure on the sources, recent work (Hyvarinen & Morioka, 2016; Hyvarinen et al., 2018) established the first identifiability results for deep latent-variable models. These approaches, however, do not explicitly learn the data distribution; nor are they capable of generating “fake” data. Khemakhem et al. (2019) bridged this gap by establishing a principled connection between VAEs and an identifiable model for nonlinear ICA. Their method with an identifiable VAE (known as iVAE) approximates the true joint distribution over observed and latent variables under mild conditions. Nevertheless, due to the intractablity of KL divergence between variational approximate posterior and the true posterior, iVAE maximizes the evidence lower bound on the data log-likelihood, which in both theory and practice acts as a detriment to the achievement of identifiability. We instead propose identifying through flows (normalizing flow), which maximizes the likelihood in a straightforward way, providing theoretical guarantees and practical advantages for identifiability. Normalizing Flows Normalizing Flows are a family of generative approaches that fits a data distribution by learning a bijection from observations to latent codes, and vice versa. Compared with VAEs which learn a posterior approximation to the true posterior, normalizing flows directly deal with marginal likelihood with exact inference while maintaining efficient sampling. Formally, a normalizing flow is a transform of a tractable probability distribution into a complex distribution by compositing a sequence of invertible and differentiable mappings. In practice, the challenge lies in designing a normalizing flow that satisfies the following conditions: (1) it should be bijective and thus invertible; (2) it is efficient to compute its inverse and its Jacobian determinant while ensuring sufficient capabilities. 
The framework of normalizing flows was first defined in (Tabak et al., 2010) and (Tabak & Turner, 2013) and then explored for density estimation in (Rippel & Adams, 2013). Rezende & Mohamed (2015) applied normalizing flows to variational inference by introducing planar and radial flows. Since then, there had been abundant literature towards expanding this family. Kingma & Dhariwal (2018) parameterizes linear flows with the LU factorization and “1 × 1” convolutions for the sake of efficient determinant calculation and invertibility of convolution operations. Despite their limits in expressive capabilities, linear flows act as essential building blocks of affine coupling flows as in (Dinh et al., 2014; 2016). Kingma et al. (2016) applied autoregressive models as a form of normalizing flows, which exhibit strong expressiveness in modelling statistical dependencies among variables. However, the forwarding operation of autoregressive models is inherently sequential, which makes it inefficient for training. Splines have also been used as building blocks of normalizing flows: Müller et al. (2018) suggested modelling a linear and quadratic spline as the integral of a univariate monotonic function for flow construction. Durkan et al. (2019a) proposed a natural extension to the framework of neural importance sampling and also suggested modelling a coupling layer as a monotonic rational-quadratic spine (Durkan et al., 2019b), which can be implemented either with a coupling architecture RQ-NSF(C) or with autoregressive architecture RQ-NSF(AR). The expressive capabilities of normalizing flows and their theoretical guarantee of invertibility make them a natural choice for recovering the true mixing mapping from sources to observations, and thus identifiability can be rigorously achieved. In our work, we show that by aligning normalizing flows with an existing identifiability theory, it is desirable to learn an identifiable latent-variable model with theoretical guarantees of identifiability. 4 IDENTIFIABLE FLOW In this section, we first introduce the identifiable latent-variable family and the theory of identifiability (Khemakhem et al., 2019) that makes it possible to recover the joint distribution between observations and latent variables. Then we derive our model, iFlow, and its optimization objective which admits principled disentanglement with theoretical guarantees of identifiability. 4.1 IDENTIFIABLE LATENT-VARIABLE FAMILY The primary assumption leading to identifiability is a conditionally factorized prior distribution over the latent variables, pθ(z|u), where u is an auxiliary variable, which can be the time index in a time series, categorical label, or an additionally observed variable (Khemakhem et al., 2019). Formally, let x ∈ X ⊆ Rn and u ∈ U ⊆ Rm be two observed random variables, and z ∈ Z ⊆ Rn a latent variable that is the source of x. This implies that there can be an arbitrarily complex nonlinear mapping f : Z → X . Assuming that f is a bijection, it is desirable to recover its inverse by approximating using a family of invertible mappings hφ parameterized by φ. The statistical dependencies among these random variables are defined by a Bayesian net: u → z → x, from which the following conditional generative model can be derived: p(x, z|u; Θ) = p(x|z;φ)p(z|u;T,λ), (3) where p(x|z;φ) def= p (x−h−1(z)) and p(z|u;T,λ) is assumed to be a factorized exponential family distribution conditioned upon u. 
Note that this density assumption is valid in most cases, since the exponential families have universal approximation capabilities (Sriperumbudur et al., 2017). Specifically, the probability density function is given by pT,λ(z|u) = n∏ i=1 pi(zi|u) = ∏ i Qi(zi) Zi(u) exp [ k∑ j=1 Ti,j(zi)λi,j(u) ] , (4) where Qi is the base measure, Zi(u) is the normalizing constant, Ti,j’s are the components of the sufficient statistic and λi,j(u) the natural parameters, critically depending on u. Note that k indicates the maximum order of statistics under consideration. 4.2 IDENTIFIABILITY THEORY The objective of identifiability is to learn a model that is subject to: for each quadruplet (Θ,Θ′,x, z), pΘ(x) = pΘ′(x) =⇒ pΘ(x, z) = pΘ′(x, z), (5) where Θ and Θ′ are two different choices of model parameters that imply the same marginal density (Khemakhem et al., 2019). One possible avenue towards this objective is to introduce the definition of identifiability up to equivalence class: Definition 4.1. (Identifiability up to equivalence class) Let ∼ be an equivalence relation on Θ. A model defined by p(x, z; Θ) = p(x|z; Θ)p(z; Θ) is said to be identifiable up to ∼ if pΘ(x) = pΘ′(x) =⇒ Θ ∼ Θ′, (6) where such an equivalence relation in the identifiable latent-variable family is defined as follows: Proposition 4.1. (φ, T̃, λ̃) and (φ′, T̃′, λ̃′) are of the same equivalence class if and only if there exist A and c such that ∀ x ∈ X , T(hφ(x)) = AT ′(hφ′(x)) + c, (7) where T̃(z) = (Q1(z1), ..., Qn(zn), T1,1(z1), ..., Tn,k(zn)), λ̃(u) = (Z1(u), ..., Zn(u), λ1,1(u), ..., λn,k(u)). (8) One can easily verify that∼ is an equivalence relation by showing its reflexivity, symmetry and transitivity. Then, the identifiability of the latent-variable family is given by Theorem 4.1 (Khemakhem et al., 2019). Theorem 4.1. LetZ = Z1×· · ·×Zn and suppose the following holds: (i) The set {x ∈ X |Ψ (x) = 0} has measure zero, where Ψ is the characteristic function of the density p ; (ii) The sufficient statistics Ti,j in (2) are differentiable almost everywhere and ∂Ti,j/∂z 6= 0 almost surely for z ∈ Zi and for all i ∈ {1, ..., n} and j ∈ {1, ..., k}. (iii) There exist (nk+1) distinct priors u0, ...,unk such that the matrix L = λ1,1(u 1)− λ1,1(u0) · · · λ1,1(unk)− λ1,1(u0) ... . . . ... λn,k(u 1)− λn,k(u0) · · · λn,k(unk)− λn,k(u0) (9) of size nk × nk is invertible. Then, the parameters (φ, T̃, λ̃) are ∼-identifiable. 4.3 OPTIMIZATION OBJECTIVE OF IFLOW We propose identifying through flows (iFlow) for recovering latent representations. Our proposed model falls into the identifiable latent-variable family with = 0, that is, p (·) = δ(·), where δ is a point mass, i.e. Dirac measure. Note that assumption (i) in Theorem 4.1 holds true for iFlow. In stark contrast to iVAE which resorts to variational approximations and maximizes the evidence lower bound, iFlow directly maximizes the marginal likelihood conditioned on u: max Θ pX(x|u; Θ) = pZ(hφ(x)|u;θ) ∣∣∣∣det(∂hφ∂x )∣∣∣∣ , (10) where pZ(·|u) is modeled by a factorized exponential family distribution. Therefore, the log marginal likelihood is obtained: log pX(x|u; Θ) = n∑ i=1 ( logQi(zi)− logZi(u) + Ti(zi)Tλi(u) ) + log ∣∣∣∣det(∂hφ∂x )∣∣∣∣ , (11) where zi is the ith component of the source z = hφ(x), and T and λ are both n-by-k matrices. Here, hφ is a normalizing flow of any kind. For the sake of simplicity, we set Qi(zi) = 1 for all i’s and consider maximum order of sufficient statistics of zi’s up to 2, that is, k = 2. Hence, T and λ are given by T(z) = z21 z1 z22 z2 ... ... 
z2n zn and λ(u) = ξ1 η1 ξ2 η2 ... ... ξn ηn . (12) Therefore, the optimization objective is to minimize L(Θ) = E(x,u)∼pD [( n∑ i=1 logZi(u) ) − trace ( T(z)λ(u)T ) − log ∣∣∣∣det(∂hφ∂x )∣∣∣∣ ] , (13) where pD denotes the empirical distribution, and the first term in (13) is given by n∑ i=1 logZi(u) = log ∫ Rn ( n∏ i=1 Qi(zi) ) exp ( trace ( T(z)λ(u)T )) dz = log ∫ Rn exp ( n∑ i=1 ξiz 2 i + ηizi ) dz = log n∏ i=1 ∫ R exp (ξiz 2 i + ηizi)dzi = log n∏ i=1 (√ − π ξi ) exp ( − η 2 i 4ξi ) = n∑ i=1 ( log √ − π ξi − η 2 i 4ξi ) . (14) In practice, λ(u) can be parameterized by a multi-layer perceptron with learnable parameters θ, where λθ : Rm → R2n. Here,m is the dimension of the space in which u’s lies. Note that ξi should be strictly negative in order for the exponential family’s probability density function to be finite. Negative softplus nonlinearity can be exploited to force this constraint. Therefore, optimization proceeds by minimizing the following closed-form objective: min Θ L(Θ) = E(x,u)∼pD [ n∑ i=1 ( log √ − π ξi − η 2 i 4ξi ) − trace ( T(z)λθ(u) T ) − log ∣∣∣∣det(∂hφ∂x )∣∣∣∣ ] . (15) where Θ = {θ,φ}. 4.4 IDENTIFIABILITY OF IFLOW The identifiability of our proposed model, iFlow, is characterized by Theorem 4.2. Theorem 4.2. Minimizing LΘ with respect to Θ, in the limit of infinite data, learns a model that is ∼-identifiable. Proof. Minimizing LΘ with respect to Θ is equivalent to maximizing the log conditional likelihood, log pX(x|u; Θ). Given infinite amount of data, maximizing log pX(x|u; Θ) will give us the true marginal likelihood conditioned on u, that is, pX(x|u; Θ̂) = pX(x|u; Θ∗), where Θ̂ = arg maxΘ log pX(x|u; Θ) and Θ∗ is the true parameter. According to Theorem 4.1, we obtain that Θ̂ and Θ∗ are of the same equivalence class defined by ∼. Thus, according to Definition 4.1, the joint distribution parameterized by Θ is identifiable up to ∼. Consequently, Theorem 4.2 guarantees strong identifiability of our proposed generative model, iFlow. Note that unlike Theorem 3 in (Khemakhem et al., 2019), Theorem 4.2 makes no assumption that the family of approximate posterior distributions contains the true posterior. And we show in experiments that this assumption is unlikely to hold true empirically. 5 SIMULATIONS To evaluate our method, we run simulations on a synthetic dataset. This section will elaborate on the details of the generated dataset, implementation, evaluation metric and fair comparison with existing methods. 5.1 DATASET We generate a synthetic dataset where the sources are non-stationary Gaussian time-series, as described in (Khemakhem et al., 2019): the sources are divided into M segments of L samples each. The auxiliary variable u is set to be the segment index. For each segment, the conditional prior distribution is chosen from the exponential family (4), where k = 2, Qi(zi) = 1, and Ti,1(zi) = z2i , Ti,2(zi) = zi, and the true λi,j’s are randomly and independently generated across the segments and the components such that their variances obey a uniform distribution on [0.5, 3]. The sources to recover are mixed by an invertible multi-layer perceptron (MLP) whose weight matrices are ensured to be full rank. 5.2 IMPLEMENTATION DETAILS The mapping λθ that outputs the natural parameters of the conditional factorized exponential family distribution is parameterized by a multi-layer perceptron with the activation of the last layer being the softplus nonlinearity. 
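For reference, below is a minimal sketch (our own, under the k = 2, Qi(zi) = 1 setting above) of evaluating the per-sample objective in Equation (15); it assumes that the flow output z = hφ(x) and its log-Jacobian-determinant are computed separately, and all numbers in the usage are hypothetical.

```python
import numpy as np

def iflow_loss(z, log_det_jac, xi, eta):
    """Per-sample loss of Eq. (15) for k = 2, Q_i = 1 (a sketch).
    xi must be strictly negative (e.g. via a negative softplus); eta is unconstrained.
    """
    log_Z = np.log(np.sqrt(-np.pi / xi)) - eta ** 2 / (4.0 * xi)   # Eq. (14), per dimension
    trace_term = xi * z ** 2 + eta * z                             # trace(T(z) lambda(u)^T), per dimension
    return np.sum(log_Z - trace_term, axis=-1) - log_det_jac

# Toy usage with hypothetical values (in practice xi, eta come from lambda_theta(u)):
z = np.array([[0.3, -1.2]])
xi, eta = np.array([[-0.5, -1.0]]), np.array([[0.2, -0.3]])
print(iflow_loss(z, log_det_jac=np.array([0.7]), xi=xi, eta=eta))
```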
Additionally, a negative activation is taken on the second-order natural parameters in order to ensure its finiteness. The bijection hφ is modeled by RQ-NSF(AR) (Durkan et al., 2019b) with the flow length of 10 and the bin 8, which gives rise to sufficient flexibility and expressiveness. For each training iteration, we use a mini-batch of size 64, and an Adam optimizer with learning rate chosen in {0.01, 0.001} to optimize the learning objective (15). 5.3 EVALUATION METRIC As a standard measure used in ICA, the mean correlation coefficient (MCC) between the original sources and the corresponding predicted latents is chosen to be the evaluation metric. A high MCC indicates the strong correlation between the identified latents recovered and the true sources. In experiments, we found that such a metric can be sensitive to the synthetic data generated by different random seeds. We argue that unless one specifies the overall generating procedure including random seeds in particular any comparison remains debatable. This is crucially important since most of the existing works failed to do so. Therefore, we run each simulation of different methods through seed 1 to seed 100 and report averaged MCCs with standard deviations, which makes the comparison fair and meaningful. 5.4 COMPARISON AND RESULTS We compare our model, iFlow, with iVAE. These two models are trained on the same synthetic dataset aforementioned, with M = 40, L = 1000, n = d = 5. For visualization, we also apply another setting with M = 40, L = 1000, n = d = 2. To evaluate iVAE’s identifying performance, we use the original implementation that is officially released with exactly the same settings as described in (Khemakhem et al., 2019) (cf. Appendix A.2). First, we demonstrate a visualization of identifiablity of these two models in a 2-D case (n = d = 2) as illustrated in Figure 1, in which we plot the original sources (latent), observations and the identified sources recovered by iFlow and iVAE, respectively. Segments are marked with different colors. Clearly, iFlow outperforms iVAE in identifying the original sources while preserving the original geometry of source manifold. It is evident that the latents recovered by iFlow bears much higher resemblance to the true latent sources than those by iVAE in the presence of some trivial indeterminacies of scaling, global sign and permutation of the original sources, which are inevitable even in some cases of linear ICA. This exhibits consistency with the definition of identifiability up to equivalence class that allows for existence of an affine transformation between sufficient statistics, as described in Proposition 4.1. As shown in Figure 1(a), 1(c), and 1(d), iVAE achieves inferior identifying performance in the sense that its estimated latents tend to retain the manifold of the observations. Notably, we also find that despite the relatively high MCC performance of iVAE in Figure 1(d), iFlow is much more likely to recover the true geometric manifold in which the latent sources lie. In Figure 1(b), iVAE’s recovered latents collapses in face of a highly nonlinearly mixing case, while iFlow still works well in identifying the true sources. Note that these are not rare occurrences. More visualization examples can be found in Appendix A.3. 
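As an aside on how such scores can be reproduced, below is a minimal sketch of the MCC metric from Section 5.3; the exact matching protocol (an optimal assignment over absolute Pearson correlations) is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mean_corr_coef(sources, latents):
    """MCC between true sources and recovered latents, both of shape (N, n).
    Latents are matched to sources by an assignment over absolute correlations,
    allowing for permutation and sign indeterminacies.
    """
    n = sources.shape[1]
    corr = np.corrcoef(sources.T, latents.T)[:n, n:]   # n x n cross-correlation block
    row, col = linear_sum_assignment(-np.abs(corr))    # maximize total |correlation|
    return np.abs(corr[row, col]).mean()

# e.g. mean_corr_coef(z_true, z_recovered) close to 1.0 indicates near-perfect identification.
```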
https://github.com/ilkhem/iVAE/ Second, regarding quantitative results as shown in Figure 2(a), our model, iFlow, consistently outperforms iVAE in MCC by a considerable margin across different random seeds under consideration while experiencing less uncertainty (standard deviation as indicated in the brackets). Moreover, Figure 2(b) also showcases that the energy value of iFlow is much higher than that of iVAE, which serves as evidence that the optimization of the evidence lower bound, as in iVAE, would lead to suboptimal identifiability. As is borne out empirically, the gap between the evidence lower bound and the conditional marginal likelihood is inevitably far from being negligible in practice. For clearer analysis, we also report the correlation coefficients for each source-latent pair in each dimension. As shown in Figure 3, iFlow exhibits much stronger correlation than does iVAE in each single dimension of the latent space. Finally, we investigate the impact of different choices of activation for generating natural parameters of the exponential family distribution (see Appendix A.1 for details). Note that all of these choices are valid since theoretically the natural parameters form a convex space. However, iFlow(Softplus) achieves the highest identifying performance, suggesting that the range of softplus allows for greater flexibility, which makes itself a good choice for natural parameter nonlinearity. 6 CONCLUSIONS Among the most significant goals of unsupervised learning is to learn the disentangled representations of observed data, or to identify original latent codes that generate observations (i.e. identifiability). Bridging the theoretical and practical gap of rigorous identifiability, we have proposed iFlow, which directly maximizes the marginal likelihood conditioned on auxiliary variables, establishing a natural framework for recovering original independent sources. In theory, our contribution provides a rigorous way to achieve identifiability and hence the recovery of the joint distribution between observed and latent variables that leads to principled disentanglement. Extensive empirical studies confirm our theory and showcase its practical advantages over previous methods. ACKNOWLEDGMENTS We thank Xiaoyi Yin for helpful initial discussions. This work is in loving memory of Kobe Bryant (1978-2020) ... A APPENDIX A.1 ABLATION STUDY ON ACTIVATIONS FOR NATURAL PARAMETERS Figure 4 demonstrates the comparison of MCC of iFlows implemented with different nonlinear activations for natural parameters and that of iVAE, in which relu+eps denotes the ReLU activation added by a small value (e.g. 1e-5) and sigmoid×5 denotes the Sigmoid activation multiplied by 5. A.2 IMPLEMENTATION DETAILS OF IVAE As stated in Section 5.4, to evaluate iVAE’s identifying performance, we use the original implementation that is officially released with the same settings as described in (Khemakhem et al., 2019). Specifically, in terms of hyperparameters of iVAE, the functional parameters of the decoder and the inference model, as well as the conditional prior are parameterized by MLPs, where the dimension of the hidden layers is chosen from {50, 100, 200}, the activation function is a leaky RELU or a leaky hyperbolic tangent, and the number of layers is chosen from {3, 4, 5, 6}. Here we report all averaged MCC scores of different implementations for iVAE as shown in Table 1. 
Table 1 indicates that adding more layers or more hidden neurons does not improve MCC score, precluding the possibility that expressive capability is not the culprit of iVAE inferior performance. Instead, we argue that the assumption (i) of Theorem 3 in (Khemakhem et al., 2019) (i.e the family of approximate posterior distributions contains the true posterior) often fails or is hard to satisfy in practice, which is one of the major reasons for the inferior performance of iVAE. Additionally, Figure 2(b) demonstrates that the energy value of iFlow is much higher than that of iVAE, which provides evidence that optimizing the evidence lower bound, as in iVAE, leads to suboptimal identifiability. A.3 VISUALIZATION OF 2D CASES
1. What is the main contribution of the paper in the field of generative modeling?
2. What are the strengths of the proposed identifiable normalizing flows (iFlow) method compared to existing methods like iVAE?
3. How does the reviewer assess the paper's overall quality and organization?
4. Are there any concerns or suggestions regarding the paper's content, such as equation punctuation or hyperparameter reporting?
Review
Review

## Overview
The paper tackles the identifiability problem in generative modeling, i.e., recovering the true latent representations from which the observed data originate. The paper argues that the identifiable variational autoencoder (iVAE) suffers from an intractability issue which leads to suboptimal solutions. The paper instead proposes an identifiable normalizing flows (iFlow) method as an alternative. The proposed iFlow outperforms iVAE in experiments using synthetic data. The paper is very well motivated and well supports its claim.

## Summary of the contributions
1. The paper proposes iFlow, an identifiable normalizing flow method which allows recovery of the true latent space from observed data.
2. The paper shows iFlow outperforms iVAE in synthetic experiments.
3. The paper provides theoretical justification for the identifiability of the proposed iFlow method.

## Overall feedback
I find the paper well written and well organized, and easy to follow even though I am not an expert on this matter. The paper provides both theoretical justification and empirical validation showing the superior performance of the proposed iFlow method, so I am leaning towards accepting the paper. The theory seems correct, but I did not check all the equations and proofs in depth, so I am not very confident about my rating.

## Suggestions
1. Please make sure all equations are properly punctuated.
2. The comparison with iVAE seems tricky since the two methods use different types of architectures. Can you also report the hyperparameters used for iVAE?
ICLR
Title Orchestrated Value Mapping for Reinforcement Learning Abstract We present a general convergent class of reinforcement learning algorithms that is founded on two distinct principles: (1) mapping value estimates to a different space using arbitrary functions from a broad class, and (2) linearly decomposing the reward signal into multiple channels. The first principle enables incorporating specific properties into the value estimator that can enhance learning. The second principle, on the other hand, allows for the value function to be represented as a composition of multiple utility functions. This can be leveraged for various purposes, e.g. dealing with highly varying reward scales, incorporating a priori knowledge about the sources of reward, and ensemble learning. Combining the two principles yields a general blueprint for instantiating convergent algorithms by orchestrating diverse mapping functions over multiple reward channels. This blueprint generalizes and subsumes algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. In addition, our convergence proof for this general class relaxes certain required assumptions in some of these algorithms. Based on our theory, we discuss several interesting configurations as special cases. Finally, to illustrate the potential of the design space that our theory opens up, we instantiate a particular algorithm and evaluate its performance on the Atari suite. 1 INTRODUCTION The chief goal of reinforcement learning (RL) algorithms is to maximize the expected return (or the value function) from each state (Szepesvári, 2010; Sutton & Barto, 2018). For decades, many algorithms have been proposed to compute target value functions either as their main goal (criticonly algorithms) or as a means to help the policy search process (actor-critic algorithms). However, when the environment features certain characteristics, learning the underlying value function can become very challenging. Examples include environments where rewards are dense in some parts of the state space but very sparse in other parts, or where the scale of rewards varies drastically. In the Atari 2600 game of Ms. Pac-Man, for instance, the reward can vary from 10 (for small pellets) to as large as 5000 (for moving bananas). In other games such as Tennis, acting randomly leads to frequent negative rewards and losing the game. Then, once the agent learns to capture the ball, it can avoid incurring such penalties. However, it may still take a very long time before the agent scores a point and experiences a positive reward. Such learning scenarios, for one reason or another, have proved challenging for the conventional RL algorithms. One issue that can arise due to such environmental challenges is having highly non-uniform action gaps across the state space.1 In a recent study, van Seijen et al. (2019) showed promising results by simply mapping the value estimates to a logarithmic space and adding important algorithmic components to guarantee convergence under standard conditions. While this construction addresses the problem of non-uniform action gaps and enables using lower discount factors, it further opens a new direction for improving the learning performance: estimate the value function in a different space that admits better properties compared to the original space. 
This interesting view naturally raises theoretical questions about the required properties of the mapping functions, and whether the guarantees of convergence would carry over from the basis algorithm under this new construction. 1Action gap refers to the value difference between optimal and second best actions (Farahmand, 2011). One loosely related topic is that of nonlinear Bellman equations. In the canonical formulation of Bellman equations (Bellman, 1954; 1957), they are limited in their modeling power to cumulative rewards that are discounted exponentially. However, one may go beyond this basis and redefine the Bellman equations in a general nonlinear manner. In particular, van Hasselt et al. (2019) showed that many such Bellman operators are still contraction mappings and thus the resulting algorithms are reasonable and inherit many beneficial properties of their linear counterparts. Nevertheless, the application of such algorithms is still unclear since the fixed point does not have a direct connection to the concept of return. In this paper we do not consider nonlinear Bellman equations. Continuing with the first line of thought, a natural extension is to employ multiple mapping functions concurrently in an ensemble, allowing each to contribute their own benefits. This can be viewed as a form of separation of concerns (van Seijen et al., 2016). Ideally, we may want to dynamically modify the influence of different mappings as the learning advances. For example, the agent could start with mappings that facilitate learning on sparse rewards. Then, as it learns to collect more rewards, the mapping function can be gradually adapted to better support learning on denser rewards. Moreover, there may be several sources of reward with specific characteristics (e.g. sparse positive rewards but dense negative ones), in which case using a different mapping to deal with each reward channel could prove beneficial. Building upon these ideas, this paper presents a general class of algorithms based on the combination of two distinct principles: value mapping and linear reward decomposition. Specifically, we present a broad class of mapping functions that inherit the convergence properties of the basis algorithm. We further show that such mappings can be orchestrated through linear reward decomposition, proving convergence for the complete class of resulting algorithms. The outcome is a blueprint for building new convergent algorithms as instances. We conceptually discuss several interesting configurations, and experimentally validate one particular instance on the Atari 2600 suite. 2 VALUE MAPPING We consider the standard reinforcement learning problem which is commonly modeled as a Markov decision process (MDP; Puterman (1994))M = (S,A, P,R, P0, γ), where S andA are the discrete sets of states and actions, P (s′|s, a) .= P[st+1 =s′ | st=s, at=a] is the state-transition distribution, R(r|s, a, s′) .= P[rt = r | st = s, at = a, st+1 = s′] (where we assume r ∈ [rmin, rmax]) is the reward distribution, P0(s) . = P[s0 = s] is the initial-state distribution, and γ ∈ [0, 1] is the discount factor. A policy π(a|s) .= P[at = a | st = s] defines how an action is selected in a given state. Thus, selecting actions according to a stationary policy generally results in a stochastic trajectory. The discounted sum of rewards over the trajectory induces a random variable called the return. We assume that all returns are finite and bounded. 
The state-action value function Qπ(s, a) evaluates the expected return of taking action a at state s and following policy π thereafter. The optimal value function is defined as Q∗(s, a) .= maxπ Qπ(s, a), which gives the maximum expected return of all trajectories starting from the state-action pair (s, a). Similarly, an optimal policy is defined as π∗(a|s) ∈ arg maxπ Qπ(s, a). The optimal value function is unique (Bertsekas & Tsitsiklis, 1996) and can be found, e.g., as the fixed point of the Q-Learning algorithm (Watkins, 1989; Watkins & Dayan, 1992) which assumes the following update: Qt+1(st, at)← (1− αt)Qt(st, at) + αt ( rt + γmax a′ Qt(st+1, a ′) ) , (1) where αt is a positive learning rate at time t. Our goal is to map Q to a different space and perform the update in that space instead, so that the learning process can benefit from the properties of the mapping space. We define a function f that maps the value function to some new space. In particular, we consider the following assumptions: Assumption 1 The function f(x) is a bijection (either strictly increasing or strictly decreasing) for all x in the given domain D = [c1, c2] ⊆ R. Assumption 2 The function f(x) holds the following properties for all x in the given domain D = [c1, c2] ⊆ R: 1. f is continuous on [c1, c2] and differentiable on (c1, c2); 2. |f ′(x)| ∈ [δ1, δ2] for x ∈ (c1, c2), with 0 < δ1 < δ2 <∞; 3. f is either of semi-convex or semi-concave. We next use f to map the value function, Q(s, a), to its transformed version, namely Q̃(s, a) . = f ( Q(s, a) ) . (2) Assumption 1 implies that f is invertible and, as such, Q(s, a) is uniquely computable from Q̃(s, a) by means of the inverse function f−1. Of note, this assumption also implies that f preserves the ordering in x; however, it inverts the ordering direction if f is decreasing. Assumption 2 imposes further restrictions on f , but still leaves a broad class of mapping functions to consider. Throughout the paper, we use tilde to denote a “mapped” function or variable, while the mapping f is understandable from the context (otherwise it is explicitly said). 2.1 BASE ALGORITHM If mapped value estimates were naively placed in a Q-Learning style algorithm, the algorithm would fail to converge to the optimal values in stochastic environments. More formally, in the tabular case, an update of the form (cf. Equation 1) Q̃t+1(st, at)← (1− αt)Q̃t(st, at) + αtf ( rt + γmax a′ f−1 ( Q̃t(st+1, a ′) )) (3) converges2 to the fixed point Q̃ (s, a) that satisfies Q̃ (s, a) = Es′∼P (·|s,a), r∼R(·|s,a,s′) [ f ( r + γmax a′ f−1 ( Q̃ (s′, a′) ))] . (4) Let us define the notation Q (s, a) .= f−1 ( Q̃ (s, a) ) . If f is a semi-convex bijection, f−1 will be semi-concave and Equation 4 deduces Q (s, a) . = f−1 ( Q̃ (s, a) ) = f−1 ( Es′∼P (·|s,a), r∼R(·|s,a,s′) [ f ( r + γmax a′ Q (s′, a′) )]) ≥ Es′∼P (·|s,a), r∼R(·|s,a,s′) [ f−1 ( f ( r + γmax a′ Q (s′, a′) ))] = Es′∼P (·|s,a), r∼R(·|s,a,s′) [ r + γmax a′ Q (s′, a′) ] , (5) where the third line follows Jensen’s inequality. Comparing Equation 5 with the Bellman optimality equation in the regular space, i.e. Q∗(s, a) = Es′,r∼P,R [r + γmaxa′ Q∗(s′, a′)], we conclude that the value function to which the update rule (3) converges overestimates Bellman’s backup. Similarly, if f is a semi-concave function, then Q (s, a) underestimates Bellman’s backup. Either way, it follows that the learned value function deviates from the optimal one. 
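A small numerical illustration (ours, not from the paper) of this deviation: for a single state-action pair with a stochastic reward and no bootstrapping, the naive update (3) settles near f⁻¹(E[f(r)]) rather than E[r], so a concave mapping such as the logarithm underestimates the expected reward. The mapping, step size, and reward distribution below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
f, f_inv = np.log, np.exp            # a strictly concave mapping on positive values

q_tilde, alpha = f(1.0), 0.01        # mapped estimate, constant small step size
rewards = rng.choice([0.5, 4.5], size=200_000)   # stochastic reward, E[r] = 2.5

for r in rewards:                    # naive update (3) with gamma = 0 (no bootstrap)
    q_tilde += alpha * (f(r) - q_tilde)

print(f_inv(q_tilde))                # ~ exp(E[log r]) = 1.5, an underestimate of E[r] = 2.5
```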
Furthermore, the Jensen’s gap at a given state s — the difference between the left-hand and right-hand sides of Equation 5 — depends on the action a because the expectation operator depends on a. That is, at a given state s, the deviation of Q (s, a) from Q∗(s, a) is not a fixed-value shift and can vary for various actions. Hence, the greedy policy w.r.t. (with respect to) Q (s, ·) may not preserve ordering and it may not be an optimal policy either. In an effort to address this problem in the spacial case of f being a logarithmic function, van Seijen et al. (2019) observed that in the algorithm described by Equation 3, the learning rate αt generally conflates two forms of averaging: (i) averaging of stochastic update targets due to environment stochasticity (happens in the regular space), and (ii) averaging over different states and actions (happens in the f ’s mapping space). To this end, they proposed to algorithmically disentangle the two and showed that such a separation will lift the Jensen’s gap if the learning rate for averaging in the regular space decays to zero fast enough. Building from Log Q-Learning (van Seijen et al., 2019), we define the base algorithm as follows: at each time t, the algorithm receives Q̃t(s, a) and a transition quadruple (s, a, r, s′), and outputs Q̃t+1(s, a), which then yields Qt+1(s, a) . = f−1 ( Q̃t+1(s, a) ) . The steps are listed below: 2The convergence follows from stochastic approximation theory with the additional steps to show by induction that Q̃ remains bounded and then the corresponding operator is a contraction mapping. Qt(s, a) := f −1 ( Q̃t(s, a) ) (6) ã′ := arg max a′ ( Qt(s ′, a′) ) (7) Ut := r + γf −1 ( Q̃t(s ′, ã′) ) (8) Ût := f −1 ( Q̃t(s, a) ) + βreg,t ( Ut − f−1 ( Q̃t(s, a) )) (9) Q̃t+1(s, a) := Q̃t(s, a) + βf,t ( f(Ût)− Q̃t(s, a) ) (10) Here, the mapping f is any function that satisfies Assumptions 1 and 2. Remark that similarly to the Log Q-Learning algorithm, Equations 9 and 10 have decoupled averaging of stochastic update targets from that over different states and actions. 3 REWARD DECOMPOSITION Reward decomposition can be seen as a generic way to facilitate (i) systematic use of environmental inductive biases in terms of known reward sources, and (ii) action selection as well as value-function updates in terms of communication between an arbitrator and several subagents, thus assembling several subagents to collectively solve a task. Both directions provide broad avenues for research and have been visited in various contexts. Russell & Zimdars (2003) introduced an algorithm called Q-Decomposition with the goal of extending beyond the “monolithic” view of RL. They studied the case of additive reward channels, where the reward signal can be written as the sum of several reward channels. They observed, however, that using Q-Learning to learn the corresponding Q function of each channel will lead to a non-optimal policy (they showed it through a counterexample). Hence, they used a Sarsa-like update w.r.t. the action that maximizes the arbitrator’s value. Laroche et al. (2017) provided a formal analysis of the problem, called the attractor phenomenon, and studied a number of variations to Q-Decomposition. On a related topic but with a different goal, Sutton et al. (2011) introduced the Horde architecture, which consists of a large number of “demons” that learn in parallel via off-policy learning. 
Each demon estimates a separate value function based on its own target policy and (pseudo) reward function, which can be seen as a decomposition of the original reward in addition to auxiliary ones. van Seijen et al. (2017) built on these ideas and presented hybrid reward architecture (HRA) to decompose the reward and learn their corresponding value functions in parallel, under mean bootstrapping. They further illustrated significant results on domains with many independent sources of reward, such as the Atari 2600 game of Ms. Pac-Man. Besides utilizing distinct environmental reward sources, reward decomposition can also be used as a technically-sound algorithmic machinery. For example, reward decomposition can enable utilization of a specific mapping that has a limited domain. In the Log Q-Learning algorithm, for example, the log(·) function cannot be directly used on non-positive values. Thus, the reward is decomposed such that two utility functions Q̃+ and Q̃− are learned for when the reward is non-negative or negative, respectively. Then the value is given by Q(s, a) = exp ( Q̃+(s, a) ) − exp ( Q̃−(s, a) ) . The learning process of each of Q̃+ and Q̃− bootstraps towards their corresponding value estimate at the next state with an action that is the arg max of the actual Q, rather than that of Q̃+ and Q̃− individually. We generalize this idea to incorporate arbitrary decompositions, beyond only two channels. To be specific, we are interested in linear decompositions of the reward function into L separate channels r(j), for j = 1 . . . L, in the following way: r := L∑ j=1 λjr (j), (11) with λj ∈ R. The channel functions r(j) map the original reward into some new space in such a way that their weighted sum recovers the original reward. Clearly, the case of L = 1 and λ1 = 1 would retrieve the standard scenario with no decomposition. In order to provide the update, expanding from Log Q-Learning, we define Q̃(j) for j = 1 . . . L, corresponding to the above reward channels, and construct the actual value function Q using the following: Qt(s, a) := L∑ j=1 λjf −1 j ( Q̃ (j) t (s, a) ) . (12) We explicitly allow the mapping functions, fj , to be different for each channel. That is, each reward channel can have a different mapping and each Q̃(j) is learned separately under its own mapping. Before discussing how the algorithm is updated with the new channels, we present a number of interesting examples of how Equation 11 can be deployed. As the first example, we can recover the original Log Q-Learning reward decomposition by considering L = 2, λ1 = +1, λ2 = −1, and the following channels: r (1) t := { rt if rt ≥ 0 0 otherwise ; r (2) t := { |rt| if rt < 0 0 otherwise (13) Notice that the original reward is retrieved via rt = r (1) t −r (2) t . This decomposition allows for using a mapping with only positive domain, such as the logarithmic function. This is an example of using reward decomposition to ensure that values do not cross the domain of mapping function f . In the second example, we consider different magnifications for different sources of reward in the environment so as to make the channels scale similarly. The Atari 2600 game of Ms. Pac-Man is an example which includes rewards with three orders of magnitude difference in size. We may therefore use distinct channels according to the size-range of rewards. To be concrete, let r ∈ [0, 100] and consider the following two configurations for decomposition (can also be extended to other ranges). 
Configuration 1: λ1 = 1, λ2 = 10, λ3 = 100 r (1) t := { rt if rt ∈ [0, 1] 0 rt > 1 r (2) t := 0 if rt ≤ 1 0.1rt if rt ∈ (1, 10] 0 rt > 10 r (3) t := { 0 if rt ≤ 10 0.01rt if rt ∈ (10, 100] Configuration 2: λ1 = 1, λ2 = 9, λ3 = 90 r (1) t := { rt if rt ∈ [0, 1] 1 rt > 1 r (2) t := 0 if rt ≤ 1 (rt − 1)/9 if rt ∈ (1, 10] 1 rt > 10 r (3) t := { 0 if rt ≤ 10 (rt − 10)/90 if rt ∈ (10, 100] Each of the above configurations presents certain characteristics. Configuration 1 gives a scheme where, at each time step, at most one channel is non-zero. Remark, however, that each channel will be non-zero with less frequency compared to the original reward signal, since rewards get assigned to different channels depending on their size. On the other hands, Configuration 2 keeps each channel to act as if there is a reward clipping at its upper bound, while each channel does not see rewards below its lower bound. As a result, Configuration 2 fully preserves the reward density at the first channel and presents a better density for higher channels compared to Configuration 1. However, the number of active channels depends on the reward size and can be larger than one. Importantly, the magnitude of reward for all channels always remains in [0, 1] in both configurations, which could be a desirable property. The final point to be careful about in using these configurations is that the large weight of higher channels significantly amplifies their corresponding value estimates. Hence, even a small estimation error at higher channels can overshadow the lower ones. Over and above the cases we have presented so far, reward decomposition enables an algorithmic machinery in order to utilize various mappings concurrently in an ensemble. In the simplest case, we note that in Equation 11 by construction we can always write r(j) := r and L∑ j=1 λj := 1. (14) That is, the channels are merely the original reward with arbitrary weights that should sum to one. We can then use arbitrary functions fj for different channels and build the value function as presented in Equation 12. This construction directly induces an ensemble of arbitrary mappings with different weights, all learning on the same reward signal. More broadly, this can potentially be combined with any other decomposition scheme, such as the ones we discussed above. For example, in the case of separating negative and positive rewards, one may also deploy two (or more) different mappings for each of the negative and positive reward channels. This certain case results in four channels, two negative and two positive, with proper weights that sum to one. 4 ORCHESTRATION OF VALUE-MAPPINGS USING DECOMPOSED REWARDS 4.1 ALGORITHM To have a full orchestration, we next combine value mapping and reward decomposition. We follow the previous steps in Equations 6 –10, but now also accounting for the reward decomposition. The core idea here is to replace Equation 6 with 12 and then compute Q̃(j) for each reward channel in parallel. In practice, these can be implemented as separate Q tables, separate Q networks, or different network heads with a shared torso. At each time t, the algorithm receives all channel outputs Q̃(j)t , for j = 1 . . . L, and updates them in accordance with the observed transition. The complete steps are presented in Algorithm 1. A few points are apropos to remark. Firstly, the steps in the for-loop can be computed in parallel for all L channels. 
Secondly, as mentioned previously, the mapping function fj may be different for each channel; however, the discount factor γ and both learning rates βf,t and βreg,t are shared among all the channels and must be the same. Finally, note also that the action ãt+1, from which all the channels bootstrap, comes from arg maxa′ Qt(st+1, a ′) and not the local value of each channel. This directly implies that each channel-level value Q(j) = f−1(Q̃j) does not solve a channel-level Bellman equation by itself. In other words, Q(j) does not represent any specific semantics such as expected return corresponding to the rewards of that channel. They only become meaningful when they compose back together and rebuild the original value function. 4.2 CONVERGENCE We establish convergence of Algorithm 1 by the following theorem. Algorithm 1: Orchestrated Value Mapping. Input: (at time t) Q̃ (j) t for j = 1 . . . L st, at, rt, and st+1 Output: Q̃(j)t+1 for j = 1 . . . L Compute rt(j) for j = 1 . . . L begin 1 Qt(st, at) := ∑L j=1 λjf −1 j ( Q̃ (j) t (st, at) ) 2 ãt+1 := arg maxa′ ( Qt(st+1, a ′) ) for j = 1 to L do 3 U (j) t := r (j) t + γf −1 j ( Q̃ (j) t (st+1, ãt+1) ) 4 Û (j) t := f −1 j ( Q̃ (j) t (st, at) ) + βreg,t ( U (j) t − f−1j ( Q̃ (j) t (st, at) )) 5 Q̃ (j) t+1(st, at) := Q̃ (j) t (st, at) + βf,t ( fj ( Û (j) t ) − Q̃(j)t (st, at) ) end end Theorem 1 Let the reward admit a decomposition as defined by Equation 11, Qt(st, at) be defined by Equation 12, and all Q̃(j)t (st, at) updated according to the steps of Algorithm 1. Assume further that the following hold: 1. All fj’s satisfy Assumptions 1 and 2; 2. TD error in the regular space (second term in line 4 of Algorithm 1) is bounded for all j; 3. ∑∞ t=0 βf,t · βreg,t =∞; 4. ∑∞ t=0(βf,t · βreg,t)2 <∞; 5. βf,t · βreg,t → 0 as t→∞. Then, Qt(s, a) converges to Q∗t (s, a) with probability one for all state-action pairs (s, a). The proof follows basic results from stochastic approximation theory (Jaakkola et al., 1994; Singh et al., 2000) with important additional steps to show that those results hold under the assumptions of Theorem 1. The full proof is fairly technical and is presented in Appendix A. We further remark that Theorem 1 only requires the product βf,t ·βreg,t to go to zero. As this product resembles the conventional learning rate in Q-Learning, this assumption is no particular limitation compared to traditional algorithms. We contrast this assumption with the one in the previous proof of Log Q-Learning which separately requires βreg to go to zero fast enough. We note that in the case of using function approximation, as in a DQN-like algorithm (Mnih et al., 2015), the update in line 5 of Algorithm 1 should naturally be managed by the used optimizer, while line 4 may be handled manually. This has proved challenging as the convergence properties can be significantly sensitive to learning rates. To get around this problem, van Seijen et al. (2019) decided to keep βreg,t at a fixed value in their deep RL experiments, contrary to the theory. Our new condition, however, formally allows βreg,t to be set to a constant value as long as βf,t properly decays to zero. A somewhat hidden step in the original proof of Log Q-Learning is that the TD error in the regular space (second term in Equation 9) must always remain bounded. We will make this condition explicit. In practice, with bounds of the reward being known, one can easily find bounds of return in regular as well as fj spaces, and ensure boundness of U (j) t − f−1j (Q̃ (j) t ) by proper clipping. 
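To make the update concrete, here is a minimal tabular sketch of one step of Algorithm 1 (our own illustration; the channel split, mapping choices, clipping bound, and learning rates below are assumptions for the example, not prescribed values).

```python
import numpy as np

def orchestrated_update(Q_tilde, s, a, r, s_next, gamma, beta_f, beta_reg,
                        f_list, f_inv_list, lambdas, split_reward, clip=50.0):
    """One step of Algorithm 1 on per-channel tables Q_tilde[j] of shape (S, A)."""
    L = len(lambdas)
    r_ch = split_reward(r)                               # channel rewards, Eq. (11)
    # Lines 1-2: rebuild Q in the regular space and pick the shared bootstrap action.
    Q_next = sum(lambdas[j] * f_inv_list[j](Q_tilde[j][s_next]) for j in range(L))
    a_tilde = int(np.argmax(Q_next))
    for j in range(L):                                   # lines 3-5, per channel
        q_j = f_inv_list[j](Q_tilde[j][s, a])
        U = r_ch[j] + gamma * f_inv_list[j](Q_tilde[j][s_next, a_tilde])
        td = np.clip(U - q_j, -clip, clip)               # keep the regular-space TD error bounded
        U_hat = q_j + beta_reg * td
        Q_tilde[j][s, a] += beta_f * (f_list[j](U_hat) - Q_tilde[j][s, a])

# Channel setup mirroring Eq. (13): non-negative vs. negative rewards with lambda = (+1, -1);
# log1p / expm1 are illustrative mappings satisfying Assumptions 1 and 2 on bounded domains.
split = lambda r: (max(r, 0.0), abs(min(r, 0.0)))
S, A = 3, 2
Q_tilde = [np.zeros((S, A)), np.zeros((S, A))]
orchestrated_update(Q_tilde, s=0, a=1, r=-0.5, s_next=2, gamma=0.99,
                    beta_f=0.1, beta_reg=1.0, f_list=(np.log1p, np.log1p),
                    f_inv_list=(np.expm1, np.expm1), lambdas=(1.0, -1.0), split_reward=split)
```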
Notably, clipping of Bellman target is also used in the literature to mitigate the value overflow issue (Fatemi et al., 2019). The scenarios covered by Assumption 2, with the new convergence proof due to Theorem 1, may be favorable in many practical cases. Moreover, several prior algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition can be derived by appropriate construction from Algorithm 1. 4.3 REMARKS TIME-DEPENDENT CHANNELS The proof of Theorem 1 does not directly involve the channel weights λj . However, changing them will impact Qt, which changes the action ãt+1 in line 2 of Algorithm 1. In the case that λj’s vary with time, if they all converge to their final fixed value soon enough before the learning rates become too small, and if additionally all state-action pairs are still visited frequently enough after λj’s are settled to their final values, then the algorithm should still converge to optimality. Of course, this analysis is far from formal; nevertheless, we can still strongly conjecture that an adaptive case where the channel weights vary with time should be possible to design. SLOPE OF MAPPINGS Assumption 2 asserts that the derivative of fj must be bounded from both below and above. While this condition is sufficient for the proof of Theorem 1, we can probe its impact further. The proof basically demonstrates a bounded error term, which ultimately converges to zero under the conditions of Theorem 1. However, the bound on this error term (see Lemma 2 in Appendix A) is scaled by δmax = maxj δ(j), with δ(j) being defined as δ(j) = δ (j) 2 / δ (j) 1 − 1, (15) where δ(j)1 and δ (j) 2 are defined according to Assumption 2 (0 < δ (j) 1 ≤ |f ′j(x)| ≤ δ (j) 2 ). In the case of fj being a straight line δ(j) = 0, thus no error is incurred and the algorithm shrinks to Q-Learning. An important extreme case is when δ(j)1 is too small while δ (j) 2 is not close to δ (j) 1 . It then follows from Equation 15 that the error can be significantly large and the algorithm may need a long time to converge. This can also be examined by observing that if the return is near the areas where f ′j is very small, the return may be too compressed when mapped. Consequently, the agent becomes insensitive to the change of return in such areas. This problem can be even more significant in deep RL due to more complex optimization processes and nonlinear approximations. The bottom-line is that the mapping functions should be carefully selected in light of Equation 15 to avoid extremely large errors while still having desired slopes to magnify or suppress the returns when needed. This analysis also explains why logarithmic mappings of the form f(x) = c · log(x + d) (as investigated in the context of Log Q-Learning by van Seijen et al. (2019)) present unfavorable results in dense reward scenarios; e.g. in the Atari 2600 game of Skiing where there is a reward at every step. In this expression c is a mapping hyperparameter that scales values in the logarithmic space and d is a small positive scalar to ensure bounded derivatives, where the functional form of the derivative is given by f ′(x) = cx+d . Hence, δ2 = c d , whereas δ1 can be very close to zero depending on the maximum return. As a result, when learning on a task which often faces large returns, Log Q-Learning operates mostly on areas of f where the slope is small and, as such, it can incur significant error compared to standard Q-Learning. 
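A quick numeric check (ours) of this error scale for f(x) = c · log(x + d) on a channel whose returns lie in [0, Gmax]: since f′(x) = c/(x + d), the constant of Equation (15) is δ = Gmax/d, which grows linearly with the largest return encountered. The return ranges below are illustrative assumptions.

```python
import numpy as np

c, d = 0.5, 0.02                      # LogDQN-style mapping hyperparameters
for G_max in (1.0, 20.0, 200.0):      # assumed maximum return on the channel
    delta_1 = c / (G_max + d)         # smallest slope of f(x) = c*log(x + d) on [0, G_max]
    delta_2 = c / d                   # largest slope (at x = 0)
    print(G_max, delta_2 / delta_1 - 1)   # error scale delta of Eq. (15): equals G_max / d
```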
See Appendix B for a detailed illustration of this issue, and Appendix D for the full list of reward density variations across a suite of 55 Atari 2600 games. 5 EXPERIMENTAL RESULTS In this section, we illustrate the simplicity and utility of instantiating new learning methods based on our theory. Since our framework provides a very broad algorithm class with numerous possibilities, deep and meaningful investigations of specific instances go far beyond the scope of this paper (or any single conference paper). Nevertheless, as an authentic illustration, we consider the LogDQN algorithm (van Seijen et al., 2019) and propose an altered mapping function. As discussed above, the logarithmic mapping in LogDQN suffers from a too-small slope when encountering large returns. We lift this undesirable property while keeping the desired magnification property around zero. Specifically, we substitute the logarithmic mapping with a piecewise function that at the break-point x = 1 − d switches from a logarithmic mapping to a straight line with slope c (i.e. the same slope as c · log(x+ d) at x = 1− d): f(x) := { c · log(x+ d) if x ≤ 1− d c · (x− 1 + d) if x > 1− d We call the resulting method LogLinDQN, or Logarithmic-Linear DQN. Remark that choosing x = 1−d as the break-point has the benefit of using only a single hyperparameter c to determine both the scaling of the logarithmic function and the slope of the linear function, which otherwise would require an additional hyperparameter. Also note that the new mapping satisfies Assumptions 1 and 2. We then use two reward channels for non-negative and negative rewards, as discussed in the first example of Section 3 (see Equation 13), and use the same mapping function for both channels. Our implementation of LogLinDQN is based on Dopamine (Castro et al., 2018) and closely matches that of LogDQN, with the only difference being in the mapping function specification. Notably, our LogLin mapping hyperparameters are realized using the same values as those of LogDQN; i.e. c = 0.5 and d ≈ 0.02. We test this method in the Atari 2600 games of the Arcade Learning Environment (ALE) (Bellemare et al., 2013) and compare its performance primarily against LogDQN and DQN (Mnih et al., 2015), denoted by “Lin” or “(Lin)DQN” to highlight that it corresponds to a linear mapping function with slope one. We also include two other major baselines for reference: C51 (Bellemare et al., 2017) and Rainbow (Hessel et al., 2018). Our tests are conducted on a stochastic version of Atari 2600 using sticky actions (Machado et al., 2018) and follow a unified evaluation protocol and codebase via the Dopamine framework (Castro et al., 2018). Figure 1 shows the relative human-normalized score of LogLinDQN w.r.t. the worst and best of LogDQN and DQN for each game. These results suggest that LogLinDQN reasonably unifies the good properties of linear and logarithmic mappings (i.e. handling dense or sparse reward distributions respectively), thereby enabling it to improve upon the per-game worst of LogDQN and DQN (top panel) and perform competitively against the per-game best of the two (bottom panel) across a large set of games. Figure 2 shows median and mean human-normalized scores across a suite of 55 Atari 2600 games. Our LogLinDQN agent demonstrates a significant improvement over most baselines and is competitive with Rainbow in terms of mean performance. This is somewhat remarkable provided the relative simplicity of LogLinDQN, especially, w.r.t. 
Rainbow which combines several other advances including distributional learning, prioritized experience replay, and n-step learning. 6 CONCLUSION In this paper we introduced a convergent class of algorithms based on the composition of two distinct foundations: (1) mapping value estimates to a different space using arbitrary functions from a broad class, and (2) linearly decomposing the reward signal into multiple channels. Together, this new family of algorithms enables learning the value function in a collection of different spaces where the learning process can potentially be easier or more efficient than the original return space. Additionally, the introduced methodology incorporates various versions of ensemble learning in terms of linear decomposition of the reward. We presented a generic proof, which also relaxes certain limitations in previous proofs. We also remark that several known algorithms in classic and recent literature can be seen as special cases of the present algorithm class. Finally, we contemplate research on numerous special instances as future work, following our theoretical foundation. Also, we believe that studying the combination of our general value mapping ideas with value decomposition (Tavakoli et al., 2021), instead of the the reward decomposition paradigm studied in this paper, could prove to be a fruitful direction for future research. REPRODUCIBILITY STATEMENT We release a generic codebase, built upon the Dopamine framework (Castro et al., 2018), with the option of using arbitrary compositions of mapping functions and reward decomposition schemes as easy-to-code modules. This enables the community to easily explore the design space that our theory opens up and investigate new convergent families of algorithms. This also allows to reproduce the results of this paper through an accompanying configuration script. The source code can be accessed at: https://github.com/microsoft/orchestrated-value-mapping. A PROOF OF THEOREM 1 We use a basic convergence result from stochastic approximation theory. In particular, we invoke the following lemma, which has appeared and proved in various classic texts; see, e.g., Theorem 1 in Jaakkola et al. (1994) or Lemma 1 in Singh et al. (2000). Lemma 1 Consider an algorithm of the following form: ∆t+1(x) := (1− αt)∆t(x) + αtψt(x), (16) with x being the state variable (or vector of variables), and αt and ψt denoting respectively the learning rate and the update at time t. Then, ∆t converges to zero w.p. (with probability) one as t→∞ under the following assumptions: 1. The state space is finite; 2. ∑ t αt =∞ and ∑ t α 2 t <∞; 3. ||E{ψt(x) | Ft}||W ≤ ξ ||∆t(x)||W , with ξ ∈ (0, 1) and || · ||W denoting a weighted max norm; 4. Var{ψt(x) | Ft} ≤ C (1 + ||∆t(x)||W ) 2, for some constant C; where Ft is the history of the algorithm until time t. Remark that in applying Lemma 1, the ∆t process generally represents the difference between a stochastic process of interest and its optimal value (that isQt andQ∗t ), and x represents a proper concatenation of states and actions. In particular, it has been shown that Lemma 1 applies to Q-Learning as the TD update of Q-Learning satisfies the lemma’s assumptions 3 and 4 (Jaakkola et al., 1994). We define Q(j)t (s, a) := f −1 ( Q̃ (j) t (s, a) ) , for j = 1 . . . L. Hence, Qt(s, a) = L∑ j=1 λjf −1 j ( Q̃ (j) t (s, a) ) = L∑ j=1 λjQ (j) t (s, a). (17) We next establish the following key result, which is core to the proof of Theorem 1. The proof is given in the next section. 
Lemma 2 Following Algorithm 1, for each channel j ∈ {1, . . . , L} we have Q (j) t+1(st, at) = Q (j) t (st, at) + βreg,t · βf,t ( U (j) t −Q (j) t (st, at) + e (j) t ) , (18) with the error term satisfying the followings: 1. Bounded by TD error in the regular space with decaying coefficient |e(j)t | ≤ βreg,t · βf,t · δ(j) ∣∣∣U (j)t −Q(j)t (st, at)∣∣∣ , (19) where δ(j) = δ(j)2 / δ (j) 1 − 1 is a positive constant; 2. For a given fj , e (j) t does not change sign for all t (it is either always non-positive or always non-negative); 3. e(j)t is fully measurable given the variables defined at time t. From Lemma 2, it follows that for each channel: Q (j) t+1(st, at) = Q (j) t (st, at) + βreg,t · βf,t ( U (j) t −Q (j) t (st, at) + e (j) t ) , (20) with e(j)t converging to zero w.p. one under condition 4 of the theorem, and U (j) t defined as: U (j) t := r (j) t + γ Q (j) t (st+1, ãt+1). Multiplying both sides of Equation 20 by λj and taking the summation, we write: L∑ j=1 λjQ (j) t+1(st, at) = L∑ j=1 λjQ (j) t (st, at) + βreg,t · βf,t L∑ j=1 λj ( U (j) t −Q (j) t (st, at) + e (j) t ) . Hence, using Equation 17 we have: Qt+1(st, at) = Qt(st, at) + βreg,t · βf,t L∑ j=1 λj ( U (j) t −Q (j) t (st, at) + e (j) t ) = Qt(st, at) + βreg,t · βf,t L∑ j=1 λj ( r (j) t + γ Q (j) t (st+1, ãt+1)−Q (j) t (st, at) + e (j) t ) = Qt(st, at) + βreg,t · βf,t rt + γ Qt(st+1, ãt+1)−Qt(st, at) + L∑ j=1 λje (j) t . (21) Definition of ãt+1 deduces that Qt(st+1, ãt+1) = Qt ( st+1, arg max a′ Qt(st+1, a ′) ) = max a′ Qt(st+1, a ′). By defining et := ∑L j=1 λje (j) t , we rewire Equation 21 as the following: Qt+1(st, at) = Qt(st, at) + βreg,t · βf,t ( rt + γ max a′ Qt(st+1, a ′)−Qt(st, at) + et ) . (22) This is a noisy Q-Learning algorithm with the noise term decaying to zero at a quadratic rate w.r.t. the learning rate’s decay; more precisely, in the form of (βreg,t · βf,t)2. Lemma 1 requires the entire update to be properly bounded (as stated in its assumptions 3 and 4). It has been known from the proof of Q-Learning (Jaakkola et al., 1994) that TD error satisfies these conditions, i.e. rt + γ maxa′ Qt(st+1, a′)−Qt(st, at) satisfies assumptions 3 and 4 of Lemma 1. To prove convergence of mapped Q-Learning, we therefore require to show that |et| also satisfies a similar property; namely, not only it disappears in the limit, but also it does not interfere intractably with the learning process during training. To this end, we next show that as the learning continues, |et| is indeed bounded by a value that can be arbitrarily smaller than the TD error. Consequently, as TD error satisfies assumptions 3 and 4 of Lemma 1, so does |et|, and so does their sum. Let δmax = maxj δ(j), with δ(j) defined in Lemma 2. Multiplying both sides of Equation 19 by λj and taking the summation over j, it yields: |et| = ∣∣∣∣∣∣ L∑ j=1 λje (j) t ∣∣∣∣∣∣ ≤ L∑ j=1 ∣∣∣λje(j)t ∣∣∣ ≤ L∑ j=1 |λj | · βf,t · βreg,t · δ(j) ∣∣∣U (j)t −Q(j)t (st, at)∣∣∣ ≤ βf,t · βreg,t · δmax L∑ j=1 |λj | · ∣∣∣U (j)t −Q(j)t (st, at)∣∣∣ = βf,t · βreg,t · δmax L∑ j=1 |λj | · ∣∣∣r(j)t + γ Q(j)t (st+1, ãt+1)−Q(j)t (st, at)∣∣∣ . (23) The second line follows from Lemma 2. If TD error in the regular space is bounded, then ∣∣∣r(j)t + γ Q(j)t (st+1, ãt+1)−Q(j)t (st, at)∣∣∣ ≤ K(j) for some K(j) ≥ 0. Hence, Equation 23 induces: |et| ≤ βf,t · βreg,t · δmax L∑ j=1 |λj | ·K(j) = βf,t · βreg,t · δmax ·K, (24) with K = ∑L j=1 |λj | ·K(j) ≥ 0; thus, |et| is also bounded for all t. 
As (by assumption) $\beta_{reg,t} \cdot \beta_{f,t}$ converges to zero, we conclude that there exists $T \ge 0$ such that for all $t \ge T$ we have
$$|e_t| \le \xi \left| r_t + \gamma \max_{a'} Q_t(s_{t+1}, a') - Q_t(s_t, a_t) \right|, \quad (25)$$
for any given $\xi \in (0, 1]$. Hence, not only does $|e_t|$ go to zero w.p. one as $t \to \infty$, but its magnitude also remains upper-bounded below the size of the TD update by any arbitrary margin $\xi$. Since the TD update already satisfies assumptions 3 and 4 of Lemma 1, we conclude that in the presence of $e_t$ those assumptions remain satisfied, at least after reaching some time $T$ where Equation 25 holds. Finally, Lemma 2 also asserts that $e_t$ is measurable given the information at time $t$, as required by Lemma 1. Invoking Lemma 1, we can now conclude that the iterative process defined by Algorithm 1 converges to $Q^*_t$ w.p. one.

PROOF OF LEMMA 2

PART 1

Our proof partially builds upon the proof presented by van Seijen et al. (2019). To simplify the notation, we drop $j$ in $f_j$, while we keep $j$ in other places for clarity. By definition we have $\tilde{Q}^{(j)}_t(s, a) = f\big(Q^{(j)}_t(s, a)\big)$. Hence, we rewrite lines 3, 4, and 5 of Algorithm 1 in terms of $Q^{(j)}_t$:
$$U^{(j)}_t = r^{(j)}_t + \gamma\, Q^{(j)}_t(s_{t+1}, \tilde{a}_{t+1}), \quad (26)$$
$$\hat{U}^{(j)}_t = Q^{(j)}_t(s_t, a_t) + \beta_{reg,t} \big( U^{(j)}_t - Q^{(j)}_t(s_t, a_t) \big), \quad (27)$$
$$f\big(Q^{(j)}_{t+1}(s_t, a_t)\big) = f\big(Q^{(j)}_t(s_t, a_t)\big) + \beta_{f,t} \Big( f\big(\hat{U}^{(j)}_t\big) - f\big(Q^{(j)}_t(s_t, a_t)\big) \Big). \quad (28)$$
The first two equations yield:
$$\hat{U}^{(j)}_t = Q^{(j)}_t(s_t, a_t) + \beta_{reg,t} \big( r^{(j)}_t + \gamma\, Q^{(j)}_t(s_{t+1}, \tilde{a}_{t+1}) - Q^{(j)}_t(s_t, a_t) \big). \quad (29)$$
By applying $f^{-1}$ to both sides of Equation 28, we get:
$$Q^{(j)}_{t+1}(s_t, a_t) = f^{-1} \Big( f\big(Q^{(j)}_t(s_t, a_t)\big) + \beta_{f,t} \Big( f\big(\hat{U}^{(j)}_t\big) - f\big(Q^{(j)}_t(s_t, a_t)\big) \Big) \Big), \quad (30)$$
which can be rewritten as:
$$Q^{(j)}_{t+1}(s_t, a_t) = Q^{(j)}_t(s_t, a_t) + \beta_{f,t} \big( \hat{U}^{(j)}_t - Q^{(j)}_t(s_t, a_t) \big) + e^{(j)}_t, \quad (31)$$
where $e^{(j)}_t$ is the error due to averaging in the mapping space instead of in the regular space:
$$e^{(j)}_t := f^{-1} \Big( f\big(Q^{(j)}_t(s_t, a_t)\big) + \beta_{f,t} \Big( f\big(\hat{U}^{(j)}_t\big) - f\big(Q^{(j)}_t(s_t, a_t)\big) \Big) \Big) - Q^{(j)}_t(s_t, a_t) - \beta_{f,t} \big( \hat{U}^{(j)}_t - Q^{(j)}_t(s_t, a_t) \big). \quad (32)$$
We next analyze the behavior of $e^{(j)}_t$ under Theorem 1's assumptions. To simplify, let us introduce the following substitutions:
$$a \to Q^{(j)}_t(s_t, a_t), \qquad b \to \hat{U}^{(j)}_t, \qquad v \to (1 - \beta_{f,t})\, a + \beta_{f,t}\, b, \qquad \tilde{w} \to (1 - \beta_{f,t}) f(a) + \beta_{f,t} f(b), \qquad w \to f^{-1}(\tilde{w}).$$
The error $e^{(j)}_t$ can then be written as
$$e^{(j)}_t = f^{-1}\big( (1 - \beta_{f,t}) f(a) + \beta_{f,t} f(b) \big) - \big( (1 - \beta_{f,t})\, a + \beta_{f,t}\, b \big) = f^{-1}(\tilde{w}) - v = w - v.$$
We remark that both $v$ and $w$ lie between $a$ and $b$. Notably, $e^{(j)}_t$ has a particular structure which we can use to bound $w - v$. See Table 1 for the ordering of $v$ and $w$ for different possibilities of $f$. We define three lines $g_0(x)$, $g_1(x)$, and $g_2(x)$ such that they all pass through the point $(a, f(a))$. As for their slopes, $g_0(x)$ has derivative $f'(a)$ and $g_2(x)$ has derivative $f'(b)$. The line $g_1(x)$ passes through the point $(b, f(b))$ as well, giving it derivative $(f(a) - f(b))/(a - b)$. See Figure 3 for all the possible cases. We can see that no matter whether $f$ is semi-convex or semi-concave, and whether it is increasing or decreasing, these three lines sandwich $f$ over the interval $[a, b]$ if $b \ge a$, or similarly over $[b, a]$ if $a \ge b$. Additionally, it is easy to prove that for all $x$ in the interval between $a$ and $b$, either of the following holds:
$$g_0(x) \ge f(x) \ge g_1(x) \ge g_2(x) \quad (33)$$
or
$$g_0(x) \le f(x) \le g_1(x) \le g_2(x). \quad (34)$$
The first one is equivalent to
$$g_0^{-1}(y) \le f^{-1}(y) \le g_1^{-1}(y) \le g_2^{-1}(y), \quad (35)$$
while the second one is equivalent to
$$g_0^{-1}(y) \ge f^{-1}(y) \ge g_1^{-1}(y) \ge g_2^{-1}(y). \quad (36)$$
From the definition of $g_1$ it follows that in all the mentioned possibilities of $f$, combined with either $a \ge b$ or $b \ge a$, we always have $g_1(v) = \tilde{w}$ and $g_1^{-1}(\tilde{w}) = v$. Hence, plugging $\tilde{w}$ into Equation 35 and Equation 36 (and noting that $f^{-1}(\tilde{w}) = w$) deduces
$$g_0^{-1}(\tilde{w}) \le w \le v \le g_2^{-1}(\tilde{w}) \quad (37)$$
or
$$g_0^{-1}(\tilde{w}) \ge w \ge v \ge g_2^{-1}(\tilde{w}). \quad (38)$$
Either way, regardless of the various possibilities for $f$ as well as $a$ and $b$, we conclude that
$$\big|e^{(j)}_t\big| = |v - w| \le \big| g_2^{-1}(\tilde{w}) - g_0^{-1}(\tilde{w}) \big|. \quad (39)$$
From the definition of the lines $g_0$ and $g_2$, we write the line equations as $g_0(x) - f(a) = f'(a)(x - a)$ and $g_2(x) - f(a) = f'(b)(x - a)$. Applying these equations to the points $(g_0^{-1}(\tilde{w}), \tilde{w})$ and $(g_2^{-1}(\tilde{w}), \tilde{w})$, respectively, yields $\tilde{w} - f(a) = f'(a)\big(g_0^{-1}(\tilde{w}) - a\big)$ and $\tilde{w} - f(a) = f'(b)\big(g_2^{-1}(\tilde{w}) - a\big)$, which deduce
$$g_0^{-1}(\tilde{w}) = \frac{\tilde{w} - f(a)}{f'(a)} + a; \qquad g_2^{-1}(\tilde{w}) = \frac{\tilde{w} - f(a)}{f'(b)} + a. \quad (40)$$
Plugging the above into Equation 39, it follows:
$$\begin{aligned} \big|e^{(j)}_t\big| = |v - w| &\le \left| \frac{\tilde{w} - f(a)}{f'(b)} - \frac{\tilde{w} - f(a)}{f'(a)} \right| = \left| \Big( \frac{1}{f'(b)} - \frac{1}{f'(a)} \Big) \big( \tilde{w} - f(a) \big) \right| \\ &= \left| \Big( \frac{1}{f'(b)} - \frac{1}{f'(a)} \Big) \big( (1 - \beta_{f,t}) f(a) + \beta_{f,t} f(b) - f(a) \big) \right| = \left| \beta_{f,t} \Big( \frac{1}{f'(b)} - \frac{1}{f'(a)} \Big) \big( f(b) - f(a) \big) \right|. \end{aligned} \quad (41)$$
We next invoke the mean value theorem, which states that if $f$ is continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$, then there exists a point $c \in (a, b)$ such that $f(b) - f(a) = f'(c)(b - a)$. Remark that, based on Assumption 2, $c$ satisfies $f'_j(c) \le \delta^{(j)}_2$, and also that $\frac{1}{f'(b)} - \frac{1}{f'(a)} \le \frac{1}{\delta^{(j)}_1} - \frac{1}{\delta^{(j)}_2}$. Hence,
$$\begin{aligned} \big|e^{(j)}_t\big| &\le \left| \beta_{f,t} \Big( \frac{1}{f'(b)} - \frac{1}{f'(a)} \Big) \big( f(b) - f(a) \big) \right| = \left| \beta_{f,t} \Big( \frac{1}{f'(b)} - \frac{1}{f'(a)} \Big) f'(c) (b - a) \right| \\ &\le \left| \beta_{f,t} \Big( \frac{1}{\delta^{(j)}_1} - \frac{1}{\delta^{(j)}_2} \Big) \cdot \delta^{(j)}_2 \cdot (b - a) \right| = \left| \beta_{f,t} \Big( \frac{1}{\delta^{(j)}_1} - \frac{1}{\delta^{(j)}_2} \Big) \cdot \delta^{(j)}_2 \cdot \big( \hat{U}^{(j)}_t - Q^{(j)}_t(s_t, a_t) \big) \right|. \end{aligned} \quad (42)$$
From Equation 27, it follows that $\hat{U}^{(j)}_t - Q^{(j)}_t(s_t, a_t) = \beta_{reg,t} \big( U^{(j)}_t - Q^{(j)}_t(s_t, a_t) \big)$. We can therefore write
$$\big|e^{(j)}_t\big| \le \left| \beta_{f,t} \Big( \frac{1}{\delta^{(j)}_1} - \frac{1}{\delta^{(j)}_2} \Big) \delta^{(j)}_2 \cdot \beta_{reg,t} \big( U^{(j)}_t - Q^{(j)}_t(s_t, a_t) \big) \right| = \beta_{f,t} \cdot \beta_{reg,t} \cdot \delta^{(j)} \big| U^{(j)}_t - Q^{(j)}_t(s_t, a_t) \big|,$$
where $\delta^{(j)} = \delta^{(j)}_2 \big( \frac{1}{\delta^{(j)}_1} - \frac{1}{\delta^{(j)}_2} \big)$ is a positive constant. This completes the proof of the first part of the lemma.

PART 2

For this part, it can be seen directly from Figure 3 that, for a given $f_j$, the order of $w'$, $w$, $v$, and $v'$ is fixed, regardless of whether $a \ge b$ or $b \ge a$ (in Figure 3, compare each plot A, B, C, and D with its counterpart at the bottom). Hence, the sign of $e^{(j)}_t = w - v$ does not change for a fixed mapping.

PART 3

Finally, we note that by its definition, $e^{(j)}_t$ comprises quantities that are all defined at time $t$. Hence, it is fully measurable at time $t$, and this completes the proof.

B DISCUSSION ON LOG VERSUS LOGLIN

As discussed in Section 4.3 (Slope of Mappings), the logarithmic (Log) function suffers from a too-low slope when the return is even moderately large. Figure 4 visualizes the impact more vividly. The logarithmic-linear (LogLin) function lifts this disadvantage by switching to a linear (Lin) function for such returns. For example, if the return changes by a unit of reward from 19 to 20, then the change will be seen as 0.05 in the Log space (i.e. $\log(20) - \log(19)$) versus 1.0 in the LogLin space; that is, Log compresses the change by 95% for a return of around 20. As, in general, learning subtle changes is more difficult and requires more training iterations, in such scenarios normal DQN (i.e. the Lin function) is expected to outperform LogDQN.
On the other hand, when the return is small (such as in sparse-reward tasks), LogDQN is expected to outperform DQN. Since LogLin combines the best of both the logarithmic and linear spaces (when the return lies in the respective regions), we should expect it to work best if it is to be used as a generic mapping for various games.

C EXPERIMENTAL DETAILS

The human-normalized scores reported in our Atari 2600 experiments are given by the formula (similarly to van Hasselt et al. (2016)):
$$\frac{\text{score}_{\text{agent}} - \text{score}_{\text{random}}}{\text{score}_{\text{human}} - \text{score}_{\text{random}}},$$
where $\text{score}_{\text{agent}}$, $\text{score}_{\text{human}}$, and $\text{score}_{\text{random}}$ are the per-game scores (undiscounted returns) for the given agent, a reference human player, and a random-agent baseline. We use Table 2 from Wang et al. (2016) to retrieve the human player and random agent scores. The relative human-normalized score of LogLinDQN versus a baseline in each game is given by (similarly to Wang et al. (2016)):
$$\frac{\text{score}_{\text{LogLinDQN}} - \text{score}_{\text{baseline}}}{\max(\text{score}_{\text{baseline}}, \text{score}_{\text{human}}) - \text{score}_{\text{random}}},$$
where $\text{score}_{\text{LogLinDQN}}$ and $\text{score}_{\text{baseline}}$ are computed by averaging over the last 10% of each learning curve (i.e. the last 20 iterations). The reported results are based on three independent trials for LogLinDQN and LogDQN, and five independent trials for DQN.

D ADDITIONAL RESULTS

Figure 5 shows the relative human-normalized score of LogLinDQN versus LogDQN (top panel) and versus (Lin)DQN (bottom panel) for each game, across a suite of 55 Atari 2600 games. LogLinDQN significantly outperforms both LogDQN and (Lin)DQN on several games, and is otherwise on par with them (i.e. when LogLinDQN is outperformed by either LogDQN or (Lin)DQN, the difference is not by a large margin). Figure 6 shows the raw (i.e. without human normalization) learning curves across a suite of 55 Atari 2600 games. Figures 7, 8, and 9 illustrate the change of reward density (measured for positive and negative rewards separately) at three different training points (before training begins, after iteration 5, and after iteration 49) across a suite of 55 Atari 2600 games.
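As a concrete companion to the Log-versus-LogLin comparison in Appendix B above, the following sketch implements a logarithmic-linear mapping of the kind described in Section 5 and reproduces the 19-to-20 example. The choice c = 1 (so that the Log branch matches the plain log used in the example) and d = 0.02 are illustrative; the paper's experiments use c = 0.5, and the inverse function is my own derivation rather than code from the paper.

```python
# A sketch of the Log and LogLin mappings compared in Appendix B, reproducing
# the 19 -> 20 example. Here c = 1 and d = 0.02 are illustrative choices; the
# inverse of the LogLin map is derived from its piecewise definition.
import math

c, d = 1.0, 0.02

def f_log(x):
    return c * math.log(x + d)

def f_loglin(x):
    # logarithmic below the break-point x = 1 - d, linear with slope c above it
    return c * math.log(x + d) if x <= 1 - d else c * (x - 1 + d)

def f_loglin_inv(y):
    # the log branch covers y <= 0, the linear branch covers y > 0
    return math.exp(y / c) - d if y <= 0.0 else y / c + 1 - d

print("Log    change 19 -> 20:", f_log(20) - f_log(19))        # ~0.05
print("LogLin change 19 -> 20:", f_loglin(20) - f_loglin(19))  # 1.0
for x in (0.0, 0.5, 1 - d, 7.0, 42.0):                          # round-trip sanity check
    assert abs(f_loglin_inv(f_loglin(x)) - x) < 1e-9
```

With these values, the change from 19 to 20 is compressed to roughly 0.05 in the Log space but remains 1.0 in the LogLin space, matching the figures quoted in Appendix B.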
1. What is the focus and contribution of the paper on reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of value function mapping and reward decomposition?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the significance and limitation of the proposed method?
5. Are there any suggestions for improving the evaluation and ablation studies?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a new RL algorithm that combines two principles: value function mapping and reward decomposition. The proposal generalizes many existing RL frameworks such as classical Q-learning, Logarithmic Q-learning, and Q-Decomposition. The paper also provides generic theoretical results to back up the theory, and demonstrates the idea on the suite of Atari 2600 games.

Review
Strengths:
- A new general value mapping that generalizes previous work, e.g. Log Q-learning.
- The orchestration of value mappings and decomposed rewards that enables the above general value mapping.
- Theoretical and experimental results to back up the proposed idea.

Weaknesses:
- The contribution on value mapping and reward decomposition can be incremental.
- Experimental results on average are still worse than Rainbow.

Overall, the paper is well written and pursues an interesting research problem. Though the proposed idea of value mapping and reward decomposition is incremental given existing work, it is worth trying and has shown its benefit. The reward decomposition being based on a fixed, hand-designed configuration limits the novelty and applicability. It is too technical compared to the contributions in Sections 2 and 4. It would be more useful if there were more choices of f. The current evaluation looks limited: though there is improvement over LogDQN, it is unclear how and where the proposed ideas contribute to the score improvement. More ablation studies would also be helpful.
ICLR
Title Orchestrated Value Mapping for Reinforcement Learning Abstract We present a general convergent class of reinforcement learning algorithms that is founded on two distinct principles: (1) mapping value estimates to a different space using arbitrary functions from a broad class, and (2) linearly decomposing the reward signal into multiple channels. The first principle enables incorporating specific properties into the value estimator that can enhance learning. The second principle, on the other hand, allows for the value function to be represented as a composition of multiple utility functions. This can be leveraged for various purposes, e.g. dealing with highly varying reward scales, incorporating a priori knowledge about the sources of reward, and ensemble learning. Combining the two principles yields a general blueprint for instantiating convergent algorithms by orchestrating diverse mapping functions over multiple reward channels. This blueprint generalizes and subsumes algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. In addition, our convergence proof for this general class relaxes certain required assumptions in some of these algorithms. Based on our theory, we discuss several interesting configurations as special cases. Finally, to illustrate the potential of the design space that our theory opens up, we instantiate a particular algorithm and evaluate its performance on the Atari suite. 1 INTRODUCTION The chief goal of reinforcement learning (RL) algorithms is to maximize the expected return (or the value function) from each state (Szepesvári, 2010; Sutton & Barto, 2018). For decades, many algorithms have been proposed to compute target value functions either as their main goal (criticonly algorithms) or as a means to help the policy search process (actor-critic algorithms). However, when the environment features certain characteristics, learning the underlying value function can become very challenging. Examples include environments where rewards are dense in some parts of the state space but very sparse in other parts, or where the scale of rewards varies drastically. In the Atari 2600 game of Ms. Pac-Man, for instance, the reward can vary from 10 (for small pellets) to as large as 5000 (for moving bananas). In other games such as Tennis, acting randomly leads to frequent negative rewards and losing the game. Then, once the agent learns to capture the ball, it can avoid incurring such penalties. However, it may still take a very long time before the agent scores a point and experiences a positive reward. Such learning scenarios, for one reason or another, have proved challenging for the conventional RL algorithms. One issue that can arise due to such environmental challenges is having highly non-uniform action gaps across the state space.1 In a recent study, van Seijen et al. (2019) showed promising results by simply mapping the value estimates to a logarithmic space and adding important algorithmic components to guarantee convergence under standard conditions. While this construction addresses the problem of non-uniform action gaps and enables using lower discount factors, it further opens a new direction for improving the learning performance: estimate the value function in a different space that admits better properties compared to the original space. 
This interesting view naturally raises theoretical questions about the required properties of the mapping functions, and whether the guarantees of convergence would carry over from the basis algorithm under this new construction. 1Action gap refers to the value difference between optimal and second best actions (Farahmand, 2011). One loosely related topic is that of nonlinear Bellman equations. In the canonical formulation of Bellman equations (Bellman, 1954; 1957), they are limited in their modeling power to cumulative rewards that are discounted exponentially. However, one may go beyond this basis and redefine the Bellman equations in a general nonlinear manner. In particular, van Hasselt et al. (2019) showed that many such Bellman operators are still contraction mappings and thus the resulting algorithms are reasonable and inherit many beneficial properties of their linear counterparts. Nevertheless, the application of such algorithms is still unclear since the fixed point does not have a direct connection to the concept of return. In this paper we do not consider nonlinear Bellman equations. Continuing with the first line of thought, a natural extension is to employ multiple mapping functions concurrently in an ensemble, allowing each to contribute their own benefits. This can be viewed as a form of separation of concerns (van Seijen et al., 2016). Ideally, we may want to dynamically modify the influence of different mappings as the learning advances. For example, the agent could start with mappings that facilitate learning on sparse rewards. Then, as it learns to collect more rewards, the mapping function can be gradually adapted to better support learning on denser rewards. Moreover, there may be several sources of reward with specific characteristics (e.g. sparse positive rewards but dense negative ones), in which case using a different mapping to deal with each reward channel could prove beneficial. Building upon these ideas, this paper presents a general class of algorithms based on the combination of two distinct principles: value mapping and linear reward decomposition. Specifically, we present a broad class of mapping functions that inherit the convergence properties of the basis algorithm. We further show that such mappings can be orchestrated through linear reward decomposition, proving convergence for the complete class of resulting algorithms. The outcome is a blueprint for building new convergent algorithms as instances. We conceptually discuss several interesting configurations, and experimentally validate one particular instance on the Atari 2600 suite. 2 VALUE MAPPING We consider the standard reinforcement learning problem which is commonly modeled as a Markov decision process (MDP; Puterman (1994))M = (S,A, P,R, P0, γ), where S andA are the discrete sets of states and actions, P (s′|s, a) .= P[st+1 =s′ | st=s, at=a] is the state-transition distribution, R(r|s, a, s′) .= P[rt = r | st = s, at = a, st+1 = s′] (where we assume r ∈ [rmin, rmax]) is the reward distribution, P0(s) . = P[s0 = s] is the initial-state distribution, and γ ∈ [0, 1] is the discount factor. A policy π(a|s) .= P[at = a | st = s] defines how an action is selected in a given state. Thus, selecting actions according to a stationary policy generally results in a stochastic trajectory. The discounted sum of rewards over the trajectory induces a random variable called the return. We assume that all returns are finite and bounded. 
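As a tiny concrete illustration of the return just defined, the following snippet evaluates the discounted sum of rewards along one hypothetical trajectory; the reward sequence and discount factor are made up, not taken from the paper.

```python
# The return as the discounted sum of rewards along one hypothetical trajectory.
gamma = 0.96
rewards = [0.0, 1.0, 0.0, 0.0, 10.0]                     # r_0, r_1, ..., r_4
ret = sum(gamma ** t * r for t, r in enumerate(rewards))
print(ret)                                               # 0.96 + 0.96**4 * 10 ~= 9.45
```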
The state-action value function Qπ(s, a) evaluates the expected return of taking action a at state s and following policy π thereafter. The optimal value function is defined as Q∗(s, a) .= maxπ Qπ(s, a), which gives the maximum expected return of all trajectories starting from the state-action pair (s, a). Similarly, an optimal policy is defined as π∗(a|s) ∈ arg maxπ Qπ(s, a). The optimal value function is unique (Bertsekas & Tsitsiklis, 1996) and can be found, e.g., as the fixed point of the Q-Learning algorithm (Watkins, 1989; Watkins & Dayan, 1992) which assumes the following update: Qt+1(st, at)← (1− αt)Qt(st, at) + αt ( rt + γmax a′ Qt(st+1, a ′) ) , (1) where αt is a positive learning rate at time t. Our goal is to map Q to a different space and perform the update in that space instead, so that the learning process can benefit from the properties of the mapping space. We define a function f that maps the value function to some new space. In particular, we consider the following assumptions: Assumption 1 The function f(x) is a bijection (either strictly increasing or strictly decreasing) for all x in the given domain D = [c1, c2] ⊆ R. Assumption 2 The function f(x) holds the following properties for all x in the given domain D = [c1, c2] ⊆ R: 1. f is continuous on [c1, c2] and differentiable on (c1, c2); 2. |f ′(x)| ∈ [δ1, δ2] for x ∈ (c1, c2), with 0 < δ1 < δ2 <∞; 3. f is either of semi-convex or semi-concave. We next use f to map the value function, Q(s, a), to its transformed version, namely Q̃(s, a) . = f ( Q(s, a) ) . (2) Assumption 1 implies that f is invertible and, as such, Q(s, a) is uniquely computable from Q̃(s, a) by means of the inverse function f−1. Of note, this assumption also implies that f preserves the ordering in x; however, it inverts the ordering direction if f is decreasing. Assumption 2 imposes further restrictions on f , but still leaves a broad class of mapping functions to consider. Throughout the paper, we use tilde to denote a “mapped” function or variable, while the mapping f is understandable from the context (otherwise it is explicitly said). 2.1 BASE ALGORITHM If mapped value estimates were naively placed in a Q-Learning style algorithm, the algorithm would fail to converge to the optimal values in stochastic environments. More formally, in the tabular case, an update of the form (cf. Equation 1) Q̃t+1(st, at)← (1− αt)Q̃t(st, at) + αtf ( rt + γmax a′ f−1 ( Q̃t(st+1, a ′) )) (3) converges2 to the fixed point Q̃ (s, a) that satisfies Q̃ (s, a) = Es′∼P (·|s,a), r∼R(·|s,a,s′) [ f ( r + γmax a′ f−1 ( Q̃ (s′, a′) ))] . (4) Let us define the notation Q (s, a) .= f−1 ( Q̃ (s, a) ) . If f is a semi-convex bijection, f−1 will be semi-concave and Equation 4 deduces Q (s, a) . = f−1 ( Q̃ (s, a) ) = f−1 ( Es′∼P (·|s,a), r∼R(·|s,a,s′) [ f ( r + γmax a′ Q (s′, a′) )]) ≥ Es′∼P (·|s,a), r∼R(·|s,a,s′) [ f−1 ( f ( r + γmax a′ Q (s′, a′) ))] = Es′∼P (·|s,a), r∼R(·|s,a,s′) [ r + γmax a′ Q (s′, a′) ] , (5) where the third line follows Jensen’s inequality. Comparing Equation 5 with the Bellman optimality equation in the regular space, i.e. Q∗(s, a) = Es′,r∼P,R [r + γmaxa′ Q∗(s′, a′)], we conclude that the value function to which the update rule (3) converges overestimates Bellman’s backup. Similarly, if f is a semi-concave function, then Q (s, a) underestimates Bellman’s backup. Either way, it follows that the learned value function deviates from the optimal one. 
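The deviation above can be made concrete with a toy example. The sketch below assumes a single state-action pair, $\gamma = 0$, a stochastic reward taking values 0 or 10 with equal probability, and a (semi-concave) logarithmic mapping; it shows that the naive update of Equation 3 settles near $f^{-1}(\mathbb{E}[f(r)])$ rather than the Bellman optimum $\mathbb{E}[r]$. All constants are illustrative assumptions.

```python
# A toy check of the bias described above: with a stochastic reward and a
# (semi-)concave mapping, the naive mapped update settles near f^{-1}(E[f(r)])
# instead of the Bellman optimum E[r]. Single state-action pair, gamma = 0.
import math
import random

d = 0.02
def f(x):
    return math.log(x + d)
def f_inv(y):
    return math.exp(y) - d

random.seed(0)
q_tilde, alpha = f(5.0), 0.001
for _ in range(200_000):
    r = random.choice([0.0, 10.0])                    # reward is 0 or 10, p = 0.5 each
    q_tilde = (1 - alpha) * q_tilde + alpha * f(r)    # naive update in the mapped space

print("naive fixed point:", f_inv(q_tilde))                        # ~0.43 (underestimate)
print("analytic value   :", f_inv(0.5 * f(0.0) + 0.5 * f(10.0)))   # ~0.43
print("Bellman optimum  :", 0.5 * (0.0 + 10.0))                    # 5.0
```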
Furthermore, the Jensen’s gap at a given state s — the difference between the left-hand and right-hand sides of Equation 5 — depends on the action a because the expectation operator depends on a. That is, at a given state s, the deviation of Q (s, a) from Q∗(s, a) is not a fixed-value shift and can vary for various actions. Hence, the greedy policy w.r.t. (with respect to) Q (s, ·) may not preserve ordering and it may not be an optimal policy either. In an effort to address this problem in the spacial case of f being a logarithmic function, van Seijen et al. (2019) observed that in the algorithm described by Equation 3, the learning rate αt generally conflates two forms of averaging: (i) averaging of stochastic update targets due to environment stochasticity (happens in the regular space), and (ii) averaging over different states and actions (happens in the f ’s mapping space). To this end, they proposed to algorithmically disentangle the two and showed that such a separation will lift the Jensen’s gap if the learning rate for averaging in the regular space decays to zero fast enough. Building from Log Q-Learning (van Seijen et al., 2019), we define the base algorithm as follows: at each time t, the algorithm receives Q̃t(s, a) and a transition quadruple (s, a, r, s′), and outputs Q̃t+1(s, a), which then yields Qt+1(s, a) . = f−1 ( Q̃t+1(s, a) ) . The steps are listed below: 2The convergence follows from stochastic approximation theory with the additional steps to show by induction that Q̃ remains bounded and then the corresponding operator is a contraction mapping. Qt(s, a) := f −1 ( Q̃t(s, a) ) (6) ã′ := arg max a′ ( Qt(s ′, a′) ) (7) Ut := r + γf −1 ( Q̃t(s ′, ã′) ) (8) Ût := f −1 ( Q̃t(s, a) ) + βreg,t ( Ut − f−1 ( Q̃t(s, a) )) (9) Q̃t+1(s, a) := Q̃t(s, a) + βf,t ( f(Ût)− Q̃t(s, a) ) (10) Here, the mapping f is any function that satisfies Assumptions 1 and 2. Remark that similarly to the Log Q-Learning algorithm, Equations 9 and 10 have decoupled averaging of stochastic update targets from that over different states and actions. 3 REWARD DECOMPOSITION Reward decomposition can be seen as a generic way to facilitate (i) systematic use of environmental inductive biases in terms of known reward sources, and (ii) action selection as well as value-function updates in terms of communication between an arbitrator and several subagents, thus assembling several subagents to collectively solve a task. Both directions provide broad avenues for research and have been visited in various contexts. Russell & Zimdars (2003) introduced an algorithm called Q-Decomposition with the goal of extending beyond the “monolithic” view of RL. They studied the case of additive reward channels, where the reward signal can be written as the sum of several reward channels. They observed, however, that using Q-Learning to learn the corresponding Q function of each channel will lead to a non-optimal policy (they showed it through a counterexample). Hence, they used a Sarsa-like update w.r.t. the action that maximizes the arbitrator’s value. Laroche et al. (2017) provided a formal analysis of the problem, called the attractor phenomenon, and studied a number of variations to Q-Decomposition. On a related topic but with a different goal, Sutton et al. (2011) introduced the Horde architecture, which consists of a large number of “demons” that learn in parallel via off-policy learning. 
Each demon estimates a separate value function based on its own target policy and (pseudo) reward function, which can be seen as a decomposition of the original reward in addition to auxiliary ones. van Seijen et al. (2017) built on these ideas and presented hybrid reward architecture (HRA) to decompose the reward and learn their corresponding value functions in parallel, under mean bootstrapping. They further illustrated significant results on domains with many independent sources of reward, such as the Atari 2600 game of Ms. Pac-Man. Besides utilizing distinct environmental reward sources, reward decomposition can also be used as a technically-sound algorithmic machinery. For example, reward decomposition can enable utilization of a specific mapping that has a limited domain. In the Log Q-Learning algorithm, for example, the log(·) function cannot be directly used on non-positive values. Thus, the reward is decomposed such that two utility functions Q̃+ and Q̃− are learned for when the reward is non-negative or negative, respectively. Then the value is given by Q(s, a) = exp ( Q̃+(s, a) ) − exp ( Q̃−(s, a) ) . The learning process of each of Q̃+ and Q̃− bootstraps towards their corresponding value estimate at the next state with an action that is the arg max of the actual Q, rather than that of Q̃+ and Q̃− individually. We generalize this idea to incorporate arbitrary decompositions, beyond only two channels. To be specific, we are interested in linear decompositions of the reward function into L separate channels r(j), for j = 1 . . . L, in the following way: r := L∑ j=1 λjr (j), (11) with λj ∈ R. The channel functions r(j) map the original reward into some new space in such a way that their weighted sum recovers the original reward. Clearly, the case of L = 1 and λ1 = 1 would retrieve the standard scenario with no decomposition. In order to provide the update, expanding from Log Q-Learning, we define Q̃(j) for j = 1 . . . L, corresponding to the above reward channels, and construct the actual value function Q using the following: Qt(s, a) := L∑ j=1 λjf −1 j ( Q̃ (j) t (s, a) ) . (12) We explicitly allow the mapping functions, fj , to be different for each channel. That is, each reward channel can have a different mapping and each Q̃(j) is learned separately under its own mapping. Before discussing how the algorithm is updated with the new channels, we present a number of interesting examples of how Equation 11 can be deployed. As the first example, we can recover the original Log Q-Learning reward decomposition by considering L = 2, λ1 = +1, λ2 = −1, and the following channels: r (1) t := { rt if rt ≥ 0 0 otherwise ; r (2) t := { |rt| if rt < 0 0 otherwise (13) Notice that the original reward is retrieved via rt = r (1) t −r (2) t . This decomposition allows for using a mapping with only positive domain, such as the logarithmic function. This is an example of using reward decomposition to ensure that values do not cross the domain of mapping function f . In the second example, we consider different magnifications for different sources of reward in the environment so as to make the channels scale similarly. The Atari 2600 game of Ms. Pac-Man is an example which includes rewards with three orders of magnitude difference in size. We may therefore use distinct channels according to the size-range of rewards. To be concrete, let r ∈ [0, 100] and consider the following two configurations for decomposition (can also be extended to other ranges). 
Configuration 1: λ1 = 1, λ2 = 10, λ3 = 100 r (1) t := { rt if rt ∈ [0, 1] 0 rt > 1 r (2) t := 0 if rt ≤ 1 0.1rt if rt ∈ (1, 10] 0 rt > 10 r (3) t := { 0 if rt ≤ 10 0.01rt if rt ∈ (10, 100] Configuration 2: λ1 = 1, λ2 = 9, λ3 = 90 r (1) t := { rt if rt ∈ [0, 1] 1 rt > 1 r (2) t := 0 if rt ≤ 1 (rt − 1)/9 if rt ∈ (1, 10] 1 rt > 10 r (3) t := { 0 if rt ≤ 10 (rt − 10)/90 if rt ∈ (10, 100] Each of the above configurations presents certain characteristics. Configuration 1 gives a scheme where, at each time step, at most one channel is non-zero. Remark, however, that each channel will be non-zero with less frequency compared to the original reward signal, since rewards get assigned to different channels depending on their size. On the other hands, Configuration 2 keeps each channel to act as if there is a reward clipping at its upper bound, while each channel does not see rewards below its lower bound. As a result, Configuration 2 fully preserves the reward density at the first channel and presents a better density for higher channels compared to Configuration 1. However, the number of active channels depends on the reward size and can be larger than one. Importantly, the magnitude of reward for all channels always remains in [0, 1] in both configurations, which could be a desirable property. The final point to be careful about in using these configurations is that the large weight of higher channels significantly amplifies their corresponding value estimates. Hence, even a small estimation error at higher channels can overshadow the lower ones. Over and above the cases we have presented so far, reward decomposition enables an algorithmic machinery in order to utilize various mappings concurrently in an ensemble. In the simplest case, we note that in Equation 11 by construction we can always write r(j) := r and L∑ j=1 λj := 1. (14) That is, the channels are merely the original reward with arbitrary weights that should sum to one. We can then use arbitrary functions fj for different channels and build the value function as presented in Equation 12. This construction directly induces an ensemble of arbitrary mappings with different weights, all learning on the same reward signal. More broadly, this can potentially be combined with any other decomposition scheme, such as the ones we discussed above. For example, in the case of separating negative and positive rewards, one may also deploy two (or more) different mappings for each of the negative and positive reward channels. This certain case results in four channels, two negative and two positive, with proper weights that sum to one. 4 ORCHESTRATION OF VALUE-MAPPINGS USING DECOMPOSED REWARDS 4.1 ALGORITHM To have a full orchestration, we next combine value mapping and reward decomposition. We follow the previous steps in Equations 6 –10, but now also accounting for the reward decomposition. The core idea here is to replace Equation 6 with 12 and then compute Q̃(j) for each reward channel in parallel. In practice, these can be implemented as separate Q tables, separate Q networks, or different network heads with a shared torso. At each time t, the algorithm receives all channel outputs Q̃(j)t , for j = 1 . . . L, and updates them in accordance with the observed transition. The complete steps are presented in Algorithm 1. A few points are apropos to remark. Firstly, the steps in the for-loop can be computed in parallel for all L channels. 
Secondly, as mentioned previously, the mapping function fj may be different for each channel; however, the discount factor γ and both learning rates βf,t and βreg,t are shared among all the channels and must be the same. Finally, note also that the action ãt+1, from which all the channels bootstrap, comes from arg maxa′ Qt(st+1, a ′) and not the local value of each channel. This directly implies that each channel-level value Q(j) = f−1(Q̃j) does not solve a channel-level Bellman equation by itself. In other words, Q(j) does not represent any specific semantics such as expected return corresponding to the rewards of that channel. They only become meaningful when they compose back together and rebuild the original value function. 4.2 CONVERGENCE We establish convergence of Algorithm 1 by the following theorem. Algorithm 1: Orchestrated Value Mapping. Input: (at time t) Q̃ (j) t for j = 1 . . . L st, at, rt, and st+1 Output: Q̃(j)t+1 for j = 1 . . . L Compute rt(j) for j = 1 . . . L begin 1 Qt(st, at) := ∑L j=1 λjf −1 j ( Q̃ (j) t (st, at) ) 2 ãt+1 := arg maxa′ ( Qt(st+1, a ′) ) for j = 1 to L do 3 U (j) t := r (j) t + γf −1 j ( Q̃ (j) t (st+1, ãt+1) ) 4 Û (j) t := f −1 j ( Q̃ (j) t (st, at) ) + βreg,t ( U (j) t − f−1j ( Q̃ (j) t (st, at) )) 5 Q̃ (j) t+1(st, at) := Q̃ (j) t (st, at) + βf,t ( fj ( Û (j) t ) − Q̃(j)t (st, at) ) end end Theorem 1 Let the reward admit a decomposition as defined by Equation 11, Qt(st, at) be defined by Equation 12, and all Q̃(j)t (st, at) updated according to the steps of Algorithm 1. Assume further that the following hold: 1. All fj’s satisfy Assumptions 1 and 2; 2. TD error in the regular space (second term in line 4 of Algorithm 1) is bounded for all j; 3. ∑∞ t=0 βf,t · βreg,t =∞; 4. ∑∞ t=0(βf,t · βreg,t)2 <∞; 5. βf,t · βreg,t → 0 as t→∞. Then, Qt(s, a) converges to Q∗t (s, a) with probability one for all state-action pairs (s, a). The proof follows basic results from stochastic approximation theory (Jaakkola et al., 1994; Singh et al., 2000) with important additional steps to show that those results hold under the assumptions of Theorem 1. The full proof is fairly technical and is presented in Appendix A. We further remark that Theorem 1 only requires the product βf,t ·βreg,t to go to zero. As this product resembles the conventional learning rate in Q-Learning, this assumption is no particular limitation compared to traditional algorithms. We contrast this assumption with the one in the previous proof of Log Q-Learning which separately requires βreg to go to zero fast enough. We note that in the case of using function approximation, as in a DQN-like algorithm (Mnih et al., 2015), the update in line 5 of Algorithm 1 should naturally be managed by the used optimizer, while line 4 may be handled manually. This has proved challenging as the convergence properties can be significantly sensitive to learning rates. To get around this problem, van Seijen et al. (2019) decided to keep βreg,t at a fixed value in their deep RL experiments, contrary to the theory. Our new condition, however, formally allows βreg,t to be set to a constant value as long as βf,t properly decays to zero. A somewhat hidden step in the original proof of Log Q-Learning is that the TD error in the regular space (second term in Equation 9) must always remain bounded. We will make this condition explicit. In practice, with bounds of the reward being known, one can easily find bounds of return in regular as well as fj spaces, and ensure boundness of U (j) t − f−1j (Q̃ (j) t ) by proper clipping. 
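To make the structure of Algorithm 1 concrete, here is a minimal tabular sketch of one orchestrated update, wired to the two-channel sign split of Equation 13. It is an illustrative sketch only, with identity mappings (which trivially satisfy Assumptions 1 and 2), dictionary-backed tables, and made-up states, actions, and hyperparameters; it is not the paper's implementation.

```python
# A minimal tabular sketch of one update of Algorithm 1 with the two-channel
# sign split of Equation 13 (lambda = (+1, -1)). All names and values here are
# illustrative.
from collections import defaultdict

def orchestrated_update(q_tildes, lambdas, fs, f_invs, s, a, rs, s_next,
                        actions, gamma, beta_reg, beta_f):
    # Lines 1-2: compose Q at s' across channels, pick the greedy action there.
    def q(state, action):
        return sum(lam * f_inv(qt[(state, action)])
                   for lam, f_inv, qt in zip(lambdas, f_invs, q_tildes))
    a_tilde = max(actions, key=lambda ap: q(s_next, ap))
    # Lines 3-5: every channel bootstraps from the same arbitrated action a_tilde.
    for qt, f, f_inv, r_j in zip(q_tildes, fs, f_invs, rs):
        u = r_j + gamma * f_inv(qt[(s_next, a_tilde)])               # line 3
        q_sa = f_inv(qt[(s, a)])
        u_hat = q_sa + beta_reg * (u - q_sa)                         # line 4
        qt[(s, a)] += beta_f * (f(u_hat) - qt[(s, a)])               # line 5

identity = lambda x: x
q_tildes = [defaultdict(float), defaultdict(float)]
lambdas, fs, f_invs = (1.0, -1.0), (identity, identity), (identity, identity)
r = -2.0
rs = (r if r >= 0 else 0.0, abs(r) if r < 0 else 0.0)                # Equation 13
orchestrated_update(q_tildes, lambdas, fs, f_invs, "s0", "a0", rs, "s1",
                    actions=("a0", "a1"), gamma=0.96, beta_reg=0.1, beta_f=0.5)
```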
Notably, clipping of Bellman target is also used in the literature to mitigate the value overflow issue (Fatemi et al., 2019). The scenarios covered by Assumption 2, with the new convergence proof due to Theorem 1, may be favorable in many practical cases. Moreover, several prior algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition can be derived by appropriate construction from Algorithm 1. 4.3 REMARKS TIME-DEPENDENT CHANNELS The proof of Theorem 1 does not directly involve the channel weights λj . However, changing them will impact Qt, which changes the action ãt+1 in line 2 of Algorithm 1. In the case that λj’s vary with time, if they all converge to their final fixed value soon enough before the learning rates become too small, and if additionally all state-action pairs are still visited frequently enough after λj’s are settled to their final values, then the algorithm should still converge to optimality. Of course, this analysis is far from formal; nevertheless, we can still strongly conjecture that an adaptive case where the channel weights vary with time should be possible to design. SLOPE OF MAPPINGS Assumption 2 asserts that the derivative of fj must be bounded from both below and above. While this condition is sufficient for the proof of Theorem 1, we can probe its impact further. The proof basically demonstrates a bounded error term, which ultimately converges to zero under the conditions of Theorem 1. However, the bound on this error term (see Lemma 2 in Appendix A) is scaled by δmax = maxj δ(j), with δ(j) being defined as δ(j) = δ (j) 2 / δ (j) 1 − 1, (15) where δ(j)1 and δ (j) 2 are defined according to Assumption 2 (0 < δ (j) 1 ≤ |f ′j(x)| ≤ δ (j) 2 ). In the case of fj being a straight line δ(j) = 0, thus no error is incurred and the algorithm shrinks to Q-Learning. An important extreme case is when δ(j)1 is too small while δ (j) 2 is not close to δ (j) 1 . It then follows from Equation 15 that the error can be significantly large and the algorithm may need a long time to converge. This can also be examined by observing that if the return is near the areas where f ′j is very small, the return may be too compressed when mapped. Consequently, the agent becomes insensitive to the change of return in such areas. This problem can be even more significant in deep RL due to more complex optimization processes and nonlinear approximations. The bottom-line is that the mapping functions should be carefully selected in light of Equation 15 to avoid extremely large errors while still having desired slopes to magnify or suppress the returns when needed. This analysis also explains why logarithmic mappings of the form f(x) = c · log(x + d) (as investigated in the context of Log Q-Learning by van Seijen et al. (2019)) present unfavorable results in dense reward scenarios; e.g. in the Atari 2600 game of Skiing where there is a reward at every step. In this expression c is a mapping hyperparameter that scales values in the logarithmic space and d is a small positive scalar to ensure bounded derivatives, where the functional form of the derivative is given by f ′(x) = cx+d . Hence, δ2 = c d , whereas δ1 can be very close to zero depending on the maximum return. As a result, when learning on a task which often faces large returns, Log Q-Learning operates mostly on areas of f where the slope is small and, as such, it can incur significant error compared to standard Q-Learning. 
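To quantify the effect just described, the snippet below evaluates Equation 15 for $f(x) = c \cdot \log(x + d)$ over an assumed value range $[0, R_{max}]$; c and d mirror the values quoted in Section 5, while the $R_{max}$ values are arbitrary illustrations.

```python
# Evaluating Equation 15 for a logarithmic mapping f(x) = c*log(x + d) on an
# assumed value range [0, R_max]: delta_1 = c/(R_max + d), delta_2 = c/d, so
# delta = delta_2/delta_1 - 1 = R_max/d, which grows with the largest return.
c, d = 0.5, 0.02
for r_max in (1.0, 10.0, 100.0, 1000.0):
    delta_1 = c / (r_max + d)     # smallest slope on the range (at x = R_max)
    delta_2 = c / d               # largest slope on the range (at x = 0)
    print(f"R_max = {r_max:6.0f}  ->  delta = {delta_2 / delta_1 - 1:.0f}")
```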
See Appendix B for a detailed illustration of this issue, and Appendix D for the full list of reward density variations across a suite of 55 Atari 2600 games. 5 EXPERIMENTAL RESULTS In this section, we illustrate the simplicity and utility of instantiating new learning methods based on our theory. Since our framework provides a very broad algorithm class with numerous possibilities, deep and meaningful investigations of specific instances go far beyond the scope of this paper (or any single conference paper). Nevertheless, as an authentic illustration, we consider the LogDQN algorithm (van Seijen et al., 2019) and propose an altered mapping function. As discussed above, the logarithmic mapping in LogDQN suffers from a too-small slope when encountering large returns. We lift this undesirable property while keeping the desired magnification property around zero. Specifically, we substitute the logarithmic mapping with a piecewise function that at the break-point x = 1 − d switches from a logarithmic mapping to a straight line with slope c (i.e. the same slope as c · log(x+ d) at x = 1− d): f(x) := { c · log(x+ d) if x ≤ 1− d c · (x− 1 + d) if x > 1− d We call the resulting method LogLinDQN, or Logarithmic-Linear DQN. Remark that choosing x = 1−d as the break-point has the benefit of using only a single hyperparameter c to determine both the scaling of the logarithmic function and the slope of the linear function, which otherwise would require an additional hyperparameter. Also note that the new mapping satisfies Assumptions 1 and 2. We then use two reward channels for non-negative and negative rewards, as discussed in the first example of Section 3 (see Equation 13), and use the same mapping function for both channels. Our implementation of LogLinDQN is based on Dopamine (Castro et al., 2018) and closely matches that of LogDQN, with the only difference being in the mapping function specification. Notably, our LogLin mapping hyperparameters are realized using the same values as those of LogDQN; i.e. c = 0.5 and d ≈ 0.02. We test this method in the Atari 2600 games of the Arcade Learning Environment (ALE) (Bellemare et al., 2013) and compare its performance primarily against LogDQN and DQN (Mnih et al., 2015), denoted by “Lin” or “(Lin)DQN” to highlight that it corresponds to a linear mapping function with slope one. We also include two other major baselines for reference: C51 (Bellemare et al., 2017) and Rainbow (Hessel et al., 2018). Our tests are conducted on a stochastic version of Atari 2600 using sticky actions (Machado et al., 2018) and follow a unified evaluation protocol and codebase via the Dopamine framework (Castro et al., 2018). Figure 1 shows the relative human-normalized score of LogLinDQN w.r.t. the worst and best of LogDQN and DQN for each game. These results suggest that LogLinDQN reasonably unifies the good properties of linear and logarithmic mappings (i.e. handling dense or sparse reward distributions respectively), thereby enabling it to improve upon the per-game worst of LogDQN and DQN (top panel) and perform competitively against the per-game best of the two (bottom panel) across a large set of games. Figure 2 shows median and mean human-normalized scores across a suite of 55 Atari 2600 games. Our LogLinDQN agent demonstrates a significant improvement over most baselines and is competitive with Rainbow in terms of mean performance. This is somewhat remarkable provided the relative simplicity of LogLinDQN, especially, w.r.t. 
Rainbow which combines several other advances including distributional learning, prioritized experience replay, and n-step learning. 6 CONCLUSION In this paper we introduced a convergent class of algorithms based on the composition of two distinct foundations: (1) mapping value estimates to a different space using arbitrary functions from a broad class, and (2) linearly decomposing the reward signal into multiple channels. Together, this new family of algorithms enables learning the value function in a collection of different spaces where the learning process can potentially be easier or more efficient than the original return space. Additionally, the introduced methodology incorporates various versions of ensemble learning in terms of linear decomposition of the reward. We presented a generic proof, which also relaxes certain limitations in previous proofs. We also remark that several known algorithms in classic and recent literature can be seen as special cases of the present algorithm class. Finally, we contemplate research on numerous special instances as future work, following our theoretical foundation. Also, we believe that studying the combination of our general value mapping ideas with value decomposition (Tavakoli et al., 2021), instead of the the reward decomposition paradigm studied in this paper, could prove to be a fruitful direction for future research. REPRODUCIBILITY STATEMENT We release a generic codebase, built upon the Dopamine framework (Castro et al., 2018), with the option of using arbitrary compositions of mapping functions and reward decomposition schemes as easy-to-code modules. This enables the community to easily explore the design space that our theory opens up and investigate new convergent families of algorithms. This also allows to reproduce the results of this paper through an accompanying configuration script. The source code can be accessed at: https://github.com/microsoft/orchestrated-value-mapping. A PROOF OF THEOREM 1 We use a basic convergence result from stochastic approximation theory. In particular, we invoke the following lemma, which has appeared and proved in various classic texts; see, e.g., Theorem 1 in Jaakkola et al. (1994) or Lemma 1 in Singh et al. (2000). Lemma 1 Consider an algorithm of the following form: ∆t+1(x) := (1− αt)∆t(x) + αtψt(x), (16) with x being the state variable (or vector of variables), and αt and ψt denoting respectively the learning rate and the update at time t. Then, ∆t converges to zero w.p. (with probability) one as t→∞ under the following assumptions: 1. The state space is finite; 2. ∑ t αt =∞ and ∑ t α 2 t <∞; 3. ||E{ψt(x) | Ft}||W ≤ ξ ||∆t(x)||W , with ξ ∈ (0, 1) and || · ||W denoting a weighted max norm; 4. Var{ψt(x) | Ft} ≤ C (1 + ||∆t(x)||W ) 2, for some constant C; where Ft is the history of the algorithm until time t. Remark that in applying Lemma 1, the ∆t process generally represents the difference between a stochastic process of interest and its optimal value (that isQt andQ∗t ), and x represents a proper concatenation of states and actions. In particular, it has been shown that Lemma 1 applies to Q-Learning as the TD update of Q-Learning satisfies the lemma’s assumptions 3 and 4 (Jaakkola et al., 1994). We define Q(j)t (s, a) := f −1 ( Q̃ (j) t (s, a) ) , for j = 1 . . . L. Hence, Qt(s, a) = L∑ j=1 λjf −1 j ( Q̃ (j) t (s, a) ) = L∑ j=1 λjQ (j) t (s, a). (17) We next establish the following key result, which is core to the proof of Theorem 1. The proof is given in the next section. 
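Returning briefly to the channel configurations of Section 3, the following sketch checks that the weighted channels of Configuration 1 reconstruct the original reward over its stated range; the test rewards are arbitrary, and the helper name is mine.

```python
# A check that the weighted channels of Configuration 1 from Section 3
# (lambda = (1, 10, 100)) reconstruct the original reward for r in [0, 100].
def config1_channels(r):
    c1 = r if 0.0 <= r <= 1.0 else 0.0
    c2 = 0.1 * r if 1.0 < r <= 10.0 else 0.0
    c3 = 0.01 * r if 10.0 < r <= 100.0 else 0.0
    return c1, c2, c3

lambdas = (1.0, 10.0, 100.0)
for r in (0.0, 0.7, 1.0, 5.0, 10.0, 42.0, 100.0):
    channels = config1_channels(r)
    assert abs(sum(l * c for l, c in zip(lambdas, channels)) - r) < 1e-9
```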
1. What is the focus of the paper regarding Q-value function estimation?
2. What are the strengths of the proposed framework, particularly in its novelty and generalizability?
3. What are the weaknesses of the paper, especially regarding the experimental results and their connection to the theoretical discussions?
4. How does the reviewer assess the clarity, quality, and impact of the paper's content?
5. Are there any minor comments or suggestions for improvement that the reviewer has regarding the paper's presentation or content?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a framework for estimating the Q-value function by decomposing the reward into a linear combination of reward signals or channels. These individual channels are then mapped into a new space, similarly to the log Q-learning approach of van Seijen et al (2019). However, here the framework is extended to a more general class of function (convex, etc). Additionally, some of the theoretical assumptions in the original log Q-learning paper are softened. The method (with a particular instantiation of reward mapping) is tested on the Atari benchmark, and demonstrates improved performance compared to DQN and log Q-learning. Review The main contribution of the paper is a framework for mapping multiple reward channels to a "nicer" space in parallel. I find this general concept and framework to be both interesting and novel. I also thought the paper was well presented: the ideas are clearly described and easy to follow. The framework does build upon the work of van Seijen et al (2019), which somewhat hinders the novelty rating, but overall I believe this to be a useful contribution to the literature. In particular, I could see future work that seeks to discover good mappings or pick from a library of potential functions, and this paper provides the platform for that. Other positives include the softening of theoretical assumptions, which means we can get away with using a fixed learning rate (and have the other handled by Adam etc). This removes the need for yet another hyperparameter and will make implementing this framework (or even the original log Q learning work) easier. While I did not spend too much time on the appendix, I also liked the inclusion of Figure 3 in lemma 2, which made it slightly easier to follow than if it were just writing or mathematical notation. The only downside for me to this paper is the misalignment between the theory and the experimental results. The theory and discussion talk at length about the advantages of various mappings. This is done with the Pac-Man example, as well as in the section "Slope of Mappings". While these discussions are interesting and highly relevant, the experiment itself only serves to show that the chosen mapping outperforms DQN and log Q learning across the Atari tasks. And while a reason for that performance is provided, this behaviour is never demonstrated experimentally. Put another way, the experiments could conceivably have come from any paper that improves upon DQN --- they don't really speak to the power or specifics of the framework here. I realise that space is an issue, but perhaps the Pacman example can be removed to make more room for an additional experiment? In particular, I could imagine something like a toy domain setup that has a top down view of the reward function and the effect that various choices of mappings have on it, as well as the resulting behaviour when the value functions are learned in the mapped space compared to the original reward function. Obviously, this is a very rough idea, but the main thing I'm looking for here would be to provide empirical support for some of the claims, such as "As a result, when learning on a game which often has a large return, LogDQN operates mostly on areas of f where the slope is small, and it can incur significant error compared to normal DQN." Minor comments: The second last paragraph in Section 4.2 that talks about boundedness. 
Do you mean here that if we know r_min and r_max, then we know that the max return is r_max / (1 − γ) and so we can use this to clip the values if necessary? How are c and d selected in Section 5? Bottom of page 3: "bellow"
ICLR
Title Orchestrated Value Mapping for Reinforcement Learning Abstract We present a general convergent class of reinforcement learning algorithms that is founded on two distinct principles: (1) mapping value estimates to a different space using arbitrary functions from a broad class, and (2) linearly decomposing the reward signal into multiple channels. The first principle enables incorporating specific properties into the value estimator that can enhance learning. The second principle, on the other hand, allows for the value function to be represented as a composition of multiple utility functions. This can be leveraged for various purposes, e.g. dealing with highly varying reward scales, incorporating a priori knowledge about the sources of reward, and ensemble learning. Combining the two principles yields a general blueprint for instantiating convergent algorithms by orchestrating diverse mapping functions over multiple reward channels. This blueprint generalizes and subsumes algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. In addition, our convergence proof for this general class relaxes certain required assumptions in some of these algorithms. Based on our theory, we discuss several interesting configurations as special cases. Finally, to illustrate the potential of the design space that our theory opens up, we instantiate a particular algorithm and evaluate its performance on the Atari suite. 1 INTRODUCTION The chief goal of reinforcement learning (RL) algorithms is to maximize the expected return (or the value function) from each state (Szepesvári, 2010; Sutton & Barto, 2018). For decades, many algorithms have been proposed to compute target value functions either as their main goal (criticonly algorithms) or as a means to help the policy search process (actor-critic algorithms). However, when the environment features certain characteristics, learning the underlying value function can become very challenging. Examples include environments where rewards are dense in some parts of the state space but very sparse in other parts, or where the scale of rewards varies drastically. In the Atari 2600 game of Ms. Pac-Man, for instance, the reward can vary from 10 (for small pellets) to as large as 5000 (for moving bananas). In other games such as Tennis, acting randomly leads to frequent negative rewards and losing the game. Then, once the agent learns to capture the ball, it can avoid incurring such penalties. However, it may still take a very long time before the agent scores a point and experiences a positive reward. Such learning scenarios, for one reason or another, have proved challenging for the conventional RL algorithms. One issue that can arise due to such environmental challenges is having highly non-uniform action gaps across the state space.1 In a recent study, van Seijen et al. (2019) showed promising results by simply mapping the value estimates to a logarithmic space and adding important algorithmic components to guarantee convergence under standard conditions. While this construction addresses the problem of non-uniform action gaps and enables using lower discount factors, it further opens a new direction for improving the learning performance: estimate the value function in a different space that admits better properties compared to the original space. 
This interesting view naturally raises theoretical questions about the required properties of the mapping functions, and whether the guarantees of convergence would carry over from the basis algorithm under this new construction. 1Action gap refers to the value difference between optimal and second best actions (Farahmand, 2011). One loosely related topic is that of nonlinear Bellman equations. In the canonical formulation of Bellman equations (Bellman, 1954; 1957), they are limited in their modeling power to cumulative rewards that are discounted exponentially. However, one may go beyond this basis and redefine the Bellman equations in a general nonlinear manner. In particular, van Hasselt et al. (2019) showed that many such Bellman operators are still contraction mappings and thus the resulting algorithms are reasonable and inherit many beneficial properties of their linear counterparts. Nevertheless, the application of such algorithms is still unclear since the fixed point does not have a direct connection to the concept of return. In this paper we do not consider nonlinear Bellman equations. Continuing with the first line of thought, a natural extension is to employ multiple mapping functions concurrently in an ensemble, allowing each to contribute their own benefits. This can be viewed as a form of separation of concerns (van Seijen et al., 2016). Ideally, we may want to dynamically modify the influence of different mappings as the learning advances. For example, the agent could start with mappings that facilitate learning on sparse rewards. Then, as it learns to collect more rewards, the mapping function can be gradually adapted to better support learning on denser rewards. Moreover, there may be several sources of reward with specific characteristics (e.g. sparse positive rewards but dense negative ones), in which case using a different mapping to deal with each reward channel could prove beneficial. Building upon these ideas, this paper presents a general class of algorithms based on the combination of two distinct principles: value mapping and linear reward decomposition. Specifically, we present a broad class of mapping functions that inherit the convergence properties of the basis algorithm. We further show that such mappings can be orchestrated through linear reward decomposition, proving convergence for the complete class of resulting algorithms. The outcome is a blueprint for building new convergent algorithms as instances. We conceptually discuss several interesting configurations, and experimentally validate one particular instance on the Atari 2600 suite. 2 VALUE MAPPING We consider the standard reinforcement learning problem which is commonly modeled as a Markov decision process (MDP; Puterman (1994))M = (S,A, P,R, P0, γ), where S andA are the discrete sets of states and actions, P (s′|s, a) .= P[st+1 =s′ | st=s, at=a] is the state-transition distribution, R(r|s, a, s′) .= P[rt = r | st = s, at = a, st+1 = s′] (where we assume r ∈ [rmin, rmax]) is the reward distribution, P0(s) . = P[s0 = s] is the initial-state distribution, and γ ∈ [0, 1] is the discount factor. A policy π(a|s) .= P[at = a | st = s] defines how an action is selected in a given state. Thus, selecting actions according to a stationary policy generally results in a stochastic trajectory. The discounted sum of rewards over the trajectory induces a random variable called the return. We assume that all returns are finite and bounded. 
The state-action value function Qπ(s, a) evaluates the expected return of taking action a at state s and following policy π thereafter. The optimal value function is defined as Q∗(s, a) .= maxπ Qπ(s, a), which gives the maximum expected return of all trajectories starting from the state-action pair (s, a). Similarly, an optimal policy is defined as π∗(a|s) ∈ arg maxπ Qπ(s, a). The optimal value function is unique (Bertsekas & Tsitsiklis, 1996) and can be found, e.g., as the fixed point of the Q-Learning algorithm (Watkins, 1989; Watkins & Dayan, 1992) which assumes the following update: Qt+1(st, at)← (1− αt)Qt(st, at) + αt ( rt + γmax a′ Qt(st+1, a ′) ) , (1) where αt is a positive learning rate at time t. Our goal is to map Q to a different space and perform the update in that space instead, so that the learning process can benefit from the properties of the mapping space. We define a function f that maps the value function to some new space. In particular, we consider the following assumptions: Assumption 1 The function f(x) is a bijection (either strictly increasing or strictly decreasing) for all x in the given domain D = [c1, c2] ⊆ R. Assumption 2 The function f(x) holds the following properties for all x in the given domain D = [c1, c2] ⊆ R: 1. f is continuous on [c1, c2] and differentiable on (c1, c2); 2. |f ′(x)| ∈ [δ1, δ2] for x ∈ (c1, c2), with 0 < δ1 < δ2 <∞; 3. f is either of semi-convex or semi-concave. We next use f to map the value function, Q(s, a), to its transformed version, namely Q̃(s, a) . = f ( Q(s, a) ) . (2) Assumption 1 implies that f is invertible and, as such, Q(s, a) is uniquely computable from Q̃(s, a) by means of the inverse function f−1. Of note, this assumption also implies that f preserves the ordering in x; however, it inverts the ordering direction if f is decreasing. Assumption 2 imposes further restrictions on f , but still leaves a broad class of mapping functions to consider. Throughout the paper, we use tilde to denote a “mapped” function or variable, while the mapping f is understandable from the context (otherwise it is explicitly said). 2.1 BASE ALGORITHM If mapped value estimates were naively placed in a Q-Learning style algorithm, the algorithm would fail to converge to the optimal values in stochastic environments. More formally, in the tabular case, an update of the form (cf. Equation 1) Q̃t+1(st, at)← (1− αt)Q̃t(st, at) + αtf ( rt + γmax a′ f−1 ( Q̃t(st+1, a ′) )) (3) converges2 to the fixed point Q̃ (s, a) that satisfies Q̃ (s, a) = Es′∼P (·|s,a), r∼R(·|s,a,s′) [ f ( r + γmax a′ f−1 ( Q̃ (s′, a′) ))] . (4) Let us define the notation Q (s, a) .= f−1 ( Q̃ (s, a) ) . If f is a semi-convex bijection, f−1 will be semi-concave and Equation 4 deduces Q (s, a) . = f−1 ( Q̃ (s, a) ) = f−1 ( Es′∼P (·|s,a), r∼R(·|s,a,s′) [ f ( r + γmax a′ Q (s′, a′) )]) ≥ Es′∼P (·|s,a), r∼R(·|s,a,s′) [ f−1 ( f ( r + γmax a′ Q (s′, a′) ))] = Es′∼P (·|s,a), r∼R(·|s,a,s′) [ r + γmax a′ Q (s′, a′) ] , (5) where the third line follows Jensen’s inequality. Comparing Equation 5 with the Bellman optimality equation in the regular space, i.e. Q∗(s, a) = Es′,r∼P,R [r + γmaxa′ Q∗(s′, a′)], we conclude that the value function to which the update rule (3) converges overestimates Bellman’s backup. Similarly, if f is a semi-concave function, then Q (s, a) underestimates Bellman’s backup. Either way, it follows that the learned value function deviates from the optimal one. 
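As a minimal numerical illustration of this deviation (a toy example, not from the paper): with a single state-action pair, γ = 0, a reward that is 0 or 2 with equal probability (true expected return 1), and the convex bijection f = exp, the naive averaging of Equation 3 settles on a value above 1.

```python
import math
import random

# Toy check of the Jensen-gap issue with the naive mapped update (Equation 3):
# one state-action pair, gamma = 0, and a stochastic reward in {0, 2} whose
# true expected value is 1. f = exp is convex and bijective on this domain.
random.seed(0)
f, f_inv = math.exp, math.log
alpha = 0.01
q_tilde = f(0.0)                      # initial estimate, stored in f-space
for _ in range(200_000):
    r = random.choice([0.0, 2.0])
    q_tilde = (1.0 - alpha) * q_tilde + alpha * f(r)   # naive averaging in f-space
print(f_inv(q_tilde))   # roughly 1.43 > 1.0: the fixed point overestimates E[r]
```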
Furthermore, the Jensen’s gap at a given state s — the difference between the left-hand and right-hand sides of Equation 5 — depends on the action a because the expectation operator depends on a. That is, at a given state s, the deviation of Q (s, a) from Q∗(s, a) is not a fixed-value shift and can vary for various actions. Hence, the greedy policy w.r.t. (with respect to) Q (s, ·) may not preserve ordering and it may not be an optimal policy either. In an effort to address this problem in the spacial case of f being a logarithmic function, van Seijen et al. (2019) observed that in the algorithm described by Equation 3, the learning rate αt generally conflates two forms of averaging: (i) averaging of stochastic update targets due to environment stochasticity (happens in the regular space), and (ii) averaging over different states and actions (happens in the f ’s mapping space). To this end, they proposed to algorithmically disentangle the two and showed that such a separation will lift the Jensen’s gap if the learning rate for averaging in the regular space decays to zero fast enough. Building from Log Q-Learning (van Seijen et al., 2019), we define the base algorithm as follows: at each time t, the algorithm receives Q̃t(s, a) and a transition quadruple (s, a, r, s′), and outputs Q̃t+1(s, a), which then yields Qt+1(s, a) . = f−1 ( Q̃t+1(s, a) ) . The steps are listed below: 2The convergence follows from stochastic approximation theory with the additional steps to show by induction that Q̃ remains bounded and then the corresponding operator is a contraction mapping. Qt(s, a) := f −1 ( Q̃t(s, a) ) (6) ã′ := arg max a′ ( Qt(s ′, a′) ) (7) Ut := r + γf −1 ( Q̃t(s ′, ã′) ) (8) Ût := f −1 ( Q̃t(s, a) ) + βreg,t ( Ut − f−1 ( Q̃t(s, a) )) (9) Q̃t+1(s, a) := Q̃t(s, a) + βf,t ( f(Ût)− Q̃t(s, a) ) (10) Here, the mapping f is any function that satisfies Assumptions 1 and 2. Remark that similarly to the Log Q-Learning algorithm, Equations 9 and 10 have decoupled averaging of stochastic update targets from that over different states and actions. 3 REWARD DECOMPOSITION Reward decomposition can be seen as a generic way to facilitate (i) systematic use of environmental inductive biases in terms of known reward sources, and (ii) action selection as well as value-function updates in terms of communication between an arbitrator and several subagents, thus assembling several subagents to collectively solve a task. Both directions provide broad avenues for research and have been visited in various contexts. Russell & Zimdars (2003) introduced an algorithm called Q-Decomposition with the goal of extending beyond the “monolithic” view of RL. They studied the case of additive reward channels, where the reward signal can be written as the sum of several reward channels. They observed, however, that using Q-Learning to learn the corresponding Q function of each channel will lead to a non-optimal policy (they showed it through a counterexample). Hence, they used a Sarsa-like update w.r.t. the action that maximizes the arbitrator’s value. Laroche et al. (2017) provided a formal analysis of the problem, called the attractor phenomenon, and studied a number of variations to Q-Decomposition. On a related topic but with a different goal, Sutton et al. (2011) introduced the Horde architecture, which consists of a large number of “demons” that learn in parallel via off-policy learning. 
Each demon estimates a separate value function based on its own target policy and (pseudo) reward function, which can be seen as a decomposition of the original reward in addition to auxiliary ones. van Seijen et al. (2017) built on these ideas and presented hybrid reward architecture (HRA) to decompose the reward and learn their corresponding value functions in parallel, under mean bootstrapping. They further illustrated significant results on domains with many independent sources of reward, such as the Atari 2600 game of Ms. Pac-Man. Besides utilizing distinct environmental reward sources, reward decomposition can also be used as a technically-sound algorithmic machinery. For example, reward decomposition can enable utilization of a specific mapping that has a limited domain. In the Log Q-Learning algorithm, for example, the log(·) function cannot be directly used on non-positive values. Thus, the reward is decomposed such that two utility functions Q̃+ and Q̃− are learned for when the reward is non-negative or negative, respectively. Then the value is given by Q(s, a) = exp ( Q̃+(s, a) ) − exp ( Q̃−(s, a) ) . The learning process of each of Q̃+ and Q̃− bootstraps towards their corresponding value estimate at the next state with an action that is the arg max of the actual Q, rather than that of Q̃+ and Q̃− individually. We generalize this idea to incorporate arbitrary decompositions, beyond only two channels. To be specific, we are interested in linear decompositions of the reward function into L separate channels r(j), for j = 1 . . . L, in the following way: r := L∑ j=1 λjr (j), (11) with λj ∈ R. The channel functions r(j) map the original reward into some new space in such a way that their weighted sum recovers the original reward. Clearly, the case of L = 1 and λ1 = 1 would retrieve the standard scenario with no decomposition. In order to provide the update, expanding from Log Q-Learning, we define Q̃(j) for j = 1 . . . L, corresponding to the above reward channels, and construct the actual value function Q using the following: Qt(s, a) := L∑ j=1 λjf −1 j ( Q̃ (j) t (s, a) ) . (12) We explicitly allow the mapping functions, fj , to be different for each channel. That is, each reward channel can have a different mapping and each Q̃(j) is learned separately under its own mapping. Before discussing how the algorithm is updated with the new channels, we present a number of interesting examples of how Equation 11 can be deployed. As the first example, we can recover the original Log Q-Learning reward decomposition by considering L = 2, λ1 = +1, λ2 = −1, and the following channels: r (1) t := { rt if rt ≥ 0 0 otherwise ; r (2) t := { |rt| if rt < 0 0 otherwise (13) Notice that the original reward is retrieved via rt = r (1) t −r (2) t . This decomposition allows for using a mapping with only positive domain, such as the logarithmic function. This is an example of using reward decomposition to ensure that values do not cross the domain of mapping function f . In the second example, we consider different magnifications for different sources of reward in the environment so as to make the channels scale similarly. The Atari 2600 game of Ms. Pac-Man is an example which includes rewards with three orders of magnitude difference in size. We may therefore use distinct channels according to the size-range of rewards. To be concrete, let r ∈ [0, 100] and consider the following two configurations for decomposition (can also be extended to other ranges). 
Configuration 1: λ1 = 1, λ2 = 10, λ3 = 100, with r_t^(1) := r_t if r_t ∈ [0, 1], and 0 if r_t > 1; r_t^(2) := 0 if r_t ≤ 1, 0.1·r_t if r_t ∈ (1, 10], and 0 if r_t > 10; r_t^(3) := 0 if r_t ≤ 10, and 0.01·r_t if r_t ∈ (10, 100]. Configuration 2: λ1 = 1, λ2 = 9, λ3 = 90, with r_t^(1) := r_t if r_t ∈ [0, 1], and 1 if r_t > 1; r_t^(2) := 0 if r_t ≤ 1, (r_t − 1)/9 if r_t ∈ (1, 10], and 1 if r_t > 10; r_t^(3) := 0 if r_t ≤ 10, and (r_t − 10)/90 if r_t ∈ (10, 100]. Each of the above configurations presents certain characteristics. Configuration 1 gives a scheme where, at each time step, at most one channel is non-zero. Remark, however, that each channel will be non-zero less frequently than the original reward signal, since rewards get assigned to different channels depending on their size. On the other hand, Configuration 2 makes each channel act as if there is a reward clipping at its upper bound, while each channel does not see rewards below its lower bound. As a result, Configuration 2 fully preserves the reward density at the first channel and presents a better density for higher channels compared to Configuration 1. However, the number of active channels depends on the reward size and can be larger than one. Importantly, the magnitude of reward for all channels always remains in [0, 1] in both configurations, which could be a desirable property. The final point to be careful about in using these configurations is that the large weight of higher channels significantly amplifies their corresponding value estimates. Hence, even a small estimation error at higher channels can overshadow the lower ones. Over and above the cases we have presented so far, reward decomposition enables an algorithmic machinery to utilize various mappings concurrently in an ensemble. In the simplest case, we note that in Equation 11 we can by construction always write r^(j) := r and ∑_{j=1}^{L} λ_j := 1. (14) That is, the channels are merely the original reward with arbitrary weights that should sum to one. We can then use arbitrary functions fj for different channels and build the value function as presented in Equation 12. This construction directly induces an ensemble of arbitrary mappings with different weights, all learning on the same reward signal. More broadly, this can potentially be combined with any other decomposition scheme, such as the ones we discussed above. For example, in the case of separating negative and positive rewards, one may also deploy two (or more) different mappings for each of the negative and positive reward channels. This particular case results in four channels, two negative and two positive, with proper weights that sum to one. 4 ORCHESTRATION OF VALUE-MAPPINGS USING DECOMPOSED REWARDS 4.1 ALGORITHM To have a full orchestration, we next combine value mapping and reward decomposition. We follow the previous steps in Equations 6–10, but now also account for the reward decomposition. The core idea here is to replace Equation 6 with Equation 12 and then compute Q̃^(j) for each reward channel in parallel. In practice, these can be implemented as separate Q tables, separate Q networks, or different network heads with a shared torso. At each time t, the algorithm receives all channel outputs Q̃^(j)_t, for j = 1 . . . L, and updates them in accordance with the observed transition. The complete steps are presented in Algorithm 1. A few points are apropos to remark. Firstly, the steps in the for-loop can be computed in parallel for all L channels.
Secondly, as mentioned previously, the mapping function fj may be different for each channel; however, the discount factor γ and both learning rates βf,t and βreg,t are shared among all the channels and must be the same. Finally, note also that the action ãt+1, from which all the channels bootstrap, comes from arg maxa′ Qt(st+1, a ′) and not the local value of each channel. This directly implies that each channel-level value Q(j) = f−1(Q̃j) does not solve a channel-level Bellman equation by itself. In other words, Q(j) does not represent any specific semantics such as expected return corresponding to the rewards of that channel. They only become meaningful when they compose back together and rebuild the original value function. 4.2 CONVERGENCE We establish convergence of Algorithm 1 by the following theorem. Algorithm 1: Orchestrated Value Mapping. Input: (at time t) Q̃ (j) t for j = 1 . . . L st, at, rt, and st+1 Output: Q̃(j)t+1 for j = 1 . . . L Compute rt(j) for j = 1 . . . L begin 1 Qt(st, at) := ∑L j=1 λjf −1 j ( Q̃ (j) t (st, at) ) 2 ãt+1 := arg maxa′ ( Qt(st+1, a ′) ) for j = 1 to L do 3 U (j) t := r (j) t + γf −1 j ( Q̃ (j) t (st+1, ãt+1) ) 4 Û (j) t := f −1 j ( Q̃ (j) t (st, at) ) + βreg,t ( U (j) t − f−1j ( Q̃ (j) t (st, at) )) 5 Q̃ (j) t+1(st, at) := Q̃ (j) t (st, at) + βf,t ( fj ( Û (j) t ) − Q̃(j)t (st, at) ) end end Theorem 1 Let the reward admit a decomposition as defined by Equation 11, Qt(st, at) be defined by Equation 12, and all Q̃(j)t (st, at) updated according to the steps of Algorithm 1. Assume further that the following hold: 1. All fj’s satisfy Assumptions 1 and 2; 2. TD error in the regular space (second term in line 4 of Algorithm 1) is bounded for all j; 3. ∑∞ t=0 βf,t · βreg,t =∞; 4. ∑∞ t=0(βf,t · βreg,t)2 <∞; 5. βf,t · βreg,t → 0 as t→∞. Then, Qt(s, a) converges to Q∗t (s, a) with probability one for all state-action pairs (s, a). The proof follows basic results from stochastic approximation theory (Jaakkola et al., 1994; Singh et al., 2000) with important additional steps to show that those results hold under the assumptions of Theorem 1. The full proof is fairly technical and is presented in Appendix A. We further remark that Theorem 1 only requires the product βf,t ·βreg,t to go to zero. As this product resembles the conventional learning rate in Q-Learning, this assumption is no particular limitation compared to traditional algorithms. We contrast this assumption with the one in the previous proof of Log Q-Learning which separately requires βreg to go to zero fast enough. We note that in the case of using function approximation, as in a DQN-like algorithm (Mnih et al., 2015), the update in line 5 of Algorithm 1 should naturally be managed by the used optimizer, while line 4 may be handled manually. This has proved challenging as the convergence properties can be significantly sensitive to learning rates. To get around this problem, van Seijen et al. (2019) decided to keep βreg,t at a fixed value in their deep RL experiments, contrary to the theory. Our new condition, however, formally allows βreg,t to be set to a constant value as long as βf,t properly decays to zero. A somewhat hidden step in the original proof of Log Q-Learning is that the TD error in the regular space (second term in Equation 9) must always remain bounded. We will make this condition explicit. In practice, with bounds of the reward being known, one can easily find bounds of return in regular as well as fj spaces, and ensure boundness of U (j) t − f−1j (Q̃ (j) t ) by proper clipping. 
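As a minimal tabular sketch of Algorithm 1 (an illustration under simplifying assumptions, not the released implementation; names are ours), using the positive/negative split of Equation 13 as the reward decomposition and composing the value via Equation 12; the per-channel value tables are assumed to be dictionaries keyed by (state, action):

```python
def split_pos_neg(r):
    """Eq. 13: channels (r1, r2) with weights (+1, -1), so that r = r1 - r2."""
    return (r, 0.0) if r >= 0.0 else (0.0, -r)

def compose_q(q_tildes, lambdas, f_invs, s, a):
    """Eq. 12: Q(s, a) = sum_j lambda_j * f_j^{-1}( Q~^(j)(s, a) )."""
    return sum(lam * f_inv(qt[(s, a)])
               for lam, f_inv, qt in zip(lambdas, f_invs, q_tildes))

def orchestrated_update(q_tildes, lambdas, fs, f_invs,
                        s, a, reward_channels, s_next, actions,
                        gamma, beta_reg, beta_f):
    """One step of Algorithm 1 (tabular): q_tildes[j] stores channel j's values
    in its own mapped space f_j, and reward_channels[j] is r_t^(j)."""
    # Lines 1-2: the greedy action is shared across channels and comes from
    # the composed value Q, not from any individual channel.
    a_tilde = max(actions,
                  key=lambda a2: compose_q(q_tildes, lambdas, f_invs, s_next, a2))
    for j, (qt, f, f_inv) in enumerate(zip(q_tildes, fs, f_invs)):
        # Line 3: per-channel bootstrap target in the regular space.
        u = reward_channels[j] + gamma * f_inv(qt[(s_next, a_tilde)])
        # Line 4: averaging of stochastic targets (regular space).
        q_reg = f_inv(qt[(s, a)])
        u_hat = q_reg + beta_reg * (u - q_reg)
        # Line 5: averaging over states and actions (mapped space).
        qt[(s, a)] += beta_f * (f(u_hat) - qt[(s, a)])
    return q_tildes

# Usage with the two-channel split: lambdas = (1.0, -1.0) and
# reward_channels = split_pos_neg(r_t) for the observed reward r_t.
```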
Notably, clipping of Bellman target is also used in the literature to mitigate the value overflow issue (Fatemi et al., 2019). The scenarios covered by Assumption 2, with the new convergence proof due to Theorem 1, may be favorable in many practical cases. Moreover, several prior algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition can be derived by appropriate construction from Algorithm 1. 4.3 REMARKS TIME-DEPENDENT CHANNELS The proof of Theorem 1 does not directly involve the channel weights λj . However, changing them will impact Qt, which changes the action ãt+1 in line 2 of Algorithm 1. In the case that λj’s vary with time, if they all converge to their final fixed value soon enough before the learning rates become too small, and if additionally all state-action pairs are still visited frequently enough after λj’s are settled to their final values, then the algorithm should still converge to optimality. Of course, this analysis is far from formal; nevertheless, we can still strongly conjecture that an adaptive case where the channel weights vary with time should be possible to design. SLOPE OF MAPPINGS Assumption 2 asserts that the derivative of fj must be bounded from both below and above. While this condition is sufficient for the proof of Theorem 1, we can probe its impact further. The proof basically demonstrates a bounded error term, which ultimately converges to zero under the conditions of Theorem 1. However, the bound on this error term (see Lemma 2 in Appendix A) is scaled by δmax = maxj δ(j), with δ(j) being defined as δ(j) = δ (j) 2 / δ (j) 1 − 1, (15) where δ(j)1 and δ (j) 2 are defined according to Assumption 2 (0 < δ (j) 1 ≤ |f ′j(x)| ≤ δ (j) 2 ). In the case of fj being a straight line δ(j) = 0, thus no error is incurred and the algorithm shrinks to Q-Learning. An important extreme case is when δ(j)1 is too small while δ (j) 2 is not close to δ (j) 1 . It then follows from Equation 15 that the error can be significantly large and the algorithm may need a long time to converge. This can also be examined by observing that if the return is near the areas where f ′j is very small, the return may be too compressed when mapped. Consequently, the agent becomes insensitive to the change of return in such areas. This problem can be even more significant in deep RL due to more complex optimization processes and nonlinear approximations. The bottom-line is that the mapping functions should be carefully selected in light of Equation 15 to avoid extremely large errors while still having desired slopes to magnify or suppress the returns when needed. This analysis also explains why logarithmic mappings of the form f(x) = c · log(x + d) (as investigated in the context of Log Q-Learning by van Seijen et al. (2019)) present unfavorable results in dense reward scenarios; e.g. in the Atari 2600 game of Skiing where there is a reward at every step. In this expression c is a mapping hyperparameter that scales values in the logarithmic space and d is a small positive scalar to ensure bounded derivatives, where the functional form of the derivative is given by f ′(x) = cx+d . Hence, δ2 = c d , whereas δ1 can be very close to zero depending on the maximum return. As a result, when learning on a task which often faces large returns, Log Q-Learning operates mostly on areas of f where the slope is small and, as such, it can incur significant error compared to standard Q-Learning. 
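As a small illustrative calculation (our own, assuming returns lie in [0, G_max]; c = 0.5 and d ≈ 0.02 are the LogDQN hyperparameter values quoted in Section 5), the error scale δ = δ2/δ1 − 1 of Equation 15 for the logarithmic mapping f(x) = c · log(x + d) grows linearly with the maximum return:

```python
def delta_for_log_mapping(c, d, g_max):
    """Error scale delta = delta_2 / delta_1 - 1 (Eq. 15) for f(x) = c*log(x + d),
    assuming returns lie in [0, g_max]. Since f'(x) = c / (x + d), the slope is
    largest at x = 0 and smallest at x = g_max."""
    delta_2 = c / d            # maximum |f'| on the domain
    delta_1 = c / (g_max + d)  # minimum |f'| on the domain
    return delta_2 / delta_1 - 1.0

# With c = 0.5 and d = 0.02, the bound grows linearly with the maximum
# return: delta = g_max / d.
for g_max in (1.0, 20.0, 500.0):
    print(g_max, delta_for_log_mapping(0.5, 0.02, g_max))
# -> 50.0, 1000.0, 25000.0: games with large returns incur a far looser bound.
```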
See Appendix B for a detailed illustration of this issue, and Appendix D for the full list of reward density variations across a suite of 55 Atari 2600 games. 5 EXPERIMENTAL RESULTS In this section, we illustrate the simplicity and utility of instantiating new learning methods based on our theory. Since our framework provides a very broad algorithm class with numerous possibilities, deep and meaningful investigations of specific instances go far beyond the scope of this paper (or any single conference paper). Nevertheless, as an authentic illustration, we consider the LogDQN algorithm (van Seijen et al., 2019) and propose an altered mapping function. As discussed above, the logarithmic mapping in LogDQN suffers from a too-small slope when encountering large returns. We lift this undesirable property while keeping the desired magnification property around zero. Specifically, we substitute the logarithmic mapping with a piecewise function that at the break-point x = 1 − d switches from a logarithmic mapping to a straight line with slope c (i.e. the same slope as c · log(x+ d) at x = 1− d): f(x) := { c · log(x+ d) if x ≤ 1− d c · (x− 1 + d) if x > 1− d We call the resulting method LogLinDQN, or Logarithmic-Linear DQN. Remark that choosing x = 1−d as the break-point has the benefit of using only a single hyperparameter c to determine both the scaling of the logarithmic function and the slope of the linear function, which otherwise would require an additional hyperparameter. Also note that the new mapping satisfies Assumptions 1 and 2. We then use two reward channels for non-negative and negative rewards, as discussed in the first example of Section 3 (see Equation 13), and use the same mapping function for both channels. Our implementation of LogLinDQN is based on Dopamine (Castro et al., 2018) and closely matches that of LogDQN, with the only difference being in the mapping function specification. Notably, our LogLin mapping hyperparameters are realized using the same values as those of LogDQN; i.e. c = 0.5 and d ≈ 0.02. We test this method in the Atari 2600 games of the Arcade Learning Environment (ALE) (Bellemare et al., 2013) and compare its performance primarily against LogDQN and DQN (Mnih et al., 2015), denoted by “Lin” or “(Lin)DQN” to highlight that it corresponds to a linear mapping function with slope one. We also include two other major baselines for reference: C51 (Bellemare et al., 2017) and Rainbow (Hessel et al., 2018). Our tests are conducted on a stochastic version of Atari 2600 using sticky actions (Machado et al., 2018) and follow a unified evaluation protocol and codebase via the Dopamine framework (Castro et al., 2018). Figure 1 shows the relative human-normalized score of LogLinDQN w.r.t. the worst and best of LogDQN and DQN for each game. These results suggest that LogLinDQN reasonably unifies the good properties of linear and logarithmic mappings (i.e. handling dense or sparse reward distributions respectively), thereby enabling it to improve upon the per-game worst of LogDQN and DQN (top panel) and perform competitively against the per-game best of the two (bottom panel) across a large set of games. Figure 2 shows median and mean human-normalized scores across a suite of 55 Atari 2600 games. Our LogLinDQN agent demonstrates a significant improvement over most baselines and is competitive with Rainbow in terms of mean performance. This is somewhat remarkable provided the relative simplicity of LogLinDQN, especially, w.r.t. 
Rainbow which combines several other advances including distributional learning, prioritized experience replay, and n-step learning. 6 CONCLUSION In this paper we introduced a convergent class of algorithms based on the composition of two distinct foundations: (1) mapping value estimates to a different space using arbitrary functions from a broad class, and (2) linearly decomposing the reward signal into multiple channels. Together, this new family of algorithms enables learning the value function in a collection of different spaces where the learning process can potentially be easier or more efficient than the original return space. Additionally, the introduced methodology incorporates various versions of ensemble learning in terms of linear decomposition of the reward. We presented a generic proof, which also relaxes certain limitations in previous proofs. We also remark that several known algorithms in classic and recent literature can be seen as special cases of the present algorithm class. Finally, we contemplate research on numerous special instances as future work, following our theoretical foundation. Also, we believe that studying the combination of our general value mapping ideas with value decomposition (Tavakoli et al., 2021), instead of the the reward decomposition paradigm studied in this paper, could prove to be a fruitful direction for future research. REPRODUCIBILITY STATEMENT We release a generic codebase, built upon the Dopamine framework (Castro et al., 2018), with the option of using arbitrary compositions of mapping functions and reward decomposition schemes as easy-to-code modules. This enables the community to easily explore the design space that our theory opens up and investigate new convergent families of algorithms. This also allows to reproduce the results of this paper through an accompanying configuration script. The source code can be accessed at: https://github.com/microsoft/orchestrated-value-mapping. A PROOF OF THEOREM 1 We use a basic convergence result from stochastic approximation theory. In particular, we invoke the following lemma, which has appeared and proved in various classic texts; see, e.g., Theorem 1 in Jaakkola et al. (1994) or Lemma 1 in Singh et al. (2000). Lemma 1 Consider an algorithm of the following form: ∆t+1(x) := (1− αt)∆t(x) + αtψt(x), (16) with x being the state variable (or vector of variables), and αt and ψt denoting respectively the learning rate and the update at time t. Then, ∆t converges to zero w.p. (with probability) one as t→∞ under the following assumptions: 1. The state space is finite; 2. ∑ t αt =∞ and ∑ t α 2 t <∞; 3. ||E{ψt(x) | Ft}||W ≤ ξ ||∆t(x)||W , with ξ ∈ (0, 1) and || · ||W denoting a weighted max norm; 4. Var{ψt(x) | Ft} ≤ C (1 + ||∆t(x)||W ) 2, for some constant C; where Ft is the history of the algorithm until time t. Remark that in applying Lemma 1, the ∆t process generally represents the difference between a stochastic process of interest and its optimal value (that isQt andQ∗t ), and x represents a proper concatenation of states and actions. In particular, it has been shown that Lemma 1 applies to Q-Learning as the TD update of Q-Learning satisfies the lemma’s assumptions 3 and 4 (Jaakkola et al., 1994). We define Q(j)t (s, a) := f −1 ( Q̃ (j) t (s, a) ) , for j = 1 . . . L. Hence, Qt(s, a) = L∑ j=1 λjf −1 j ( Q̃ (j) t (s, a) ) = L∑ j=1 λjQ (j) t (s, a). (17) We next establish the following key result, which is core to the proof of Theorem 1. The proof is given in the next section. 
Lemma 2 Following Algorithm 1, for each channel j ∈ {1, . . . , L} we have Q (j) t+1(st, at) = Q (j) t (st, at) + βreg,t · βf,t ( U (j) t −Q (j) t (st, at) + e (j) t ) , (18) with the error term satisfying the followings: 1. Bounded by TD error in the regular space with decaying coefficient |e(j)t | ≤ βreg,t · βf,t · δ(j) ∣∣∣U (j)t −Q(j)t (st, at)∣∣∣ , (19) where δ(j) = δ(j)2 / δ (j) 1 − 1 is a positive constant; 2. For a given fj , e (j) t does not change sign for all t (it is either always non-positive or always non-negative); 3. e(j)t is fully measurable given the variables defined at time t. From Lemma 2, it follows that for each channel: Q (j) t+1(st, at) = Q (j) t (st, at) + βreg,t · βf,t ( U (j) t −Q (j) t (st, at) + e (j) t ) , (20) with e(j)t converging to zero w.p. one under condition 4 of the theorem, and U (j) t defined as: U (j) t := r (j) t + γ Q (j) t (st+1, ãt+1). Multiplying both sides of Equation 20 by λj and taking the summation, we write: L∑ j=1 λjQ (j) t+1(st, at) = L∑ j=1 λjQ (j) t (st, at) + βreg,t · βf,t L∑ j=1 λj ( U (j) t −Q (j) t (st, at) + e (j) t ) . Hence, using Equation 17 we have: Qt+1(st, at) = Qt(st, at) + βreg,t · βf,t L∑ j=1 λj ( U (j) t −Q (j) t (st, at) + e (j) t ) = Qt(st, at) + βreg,t · βf,t L∑ j=1 λj ( r (j) t + γ Q (j) t (st+1, ãt+1)−Q (j) t (st, at) + e (j) t ) = Qt(st, at) + βreg,t · βf,t rt + γ Qt(st+1, ãt+1)−Qt(st, at) + L∑ j=1 λje (j) t . (21) Definition of ãt+1 deduces that Qt(st+1, ãt+1) = Qt ( st+1, arg max a′ Qt(st+1, a ′) ) = max a′ Qt(st+1, a ′). By defining et := ∑L j=1 λje (j) t , we rewire Equation 21 as the following: Qt+1(st, at) = Qt(st, at) + βreg,t · βf,t ( rt + γ max a′ Qt(st+1, a ′)−Qt(st, at) + et ) . (22) This is a noisy Q-Learning algorithm with the noise term decaying to zero at a quadratic rate w.r.t. the learning rate’s decay; more precisely, in the form of (βreg,t · βf,t)2. Lemma 1 requires the entire update to be properly bounded (as stated in its assumptions 3 and 4). It has been known from the proof of Q-Learning (Jaakkola et al., 1994) that TD error satisfies these conditions, i.e. rt + γ maxa′ Qt(st+1, a′)−Qt(st, at) satisfies assumptions 3 and 4 of Lemma 1. To prove convergence of mapped Q-Learning, we therefore require to show that |et| also satisfies a similar property; namely, not only it disappears in the limit, but also it does not interfere intractably with the learning process during training. To this end, we next show that as the learning continues, |et| is indeed bounded by a value that can be arbitrarily smaller than the TD error. Consequently, as TD error satisfies assumptions 3 and 4 of Lemma 1, so does |et|, and so does their sum. Let δmax = maxj δ(j), with δ(j) defined in Lemma 2. Multiplying both sides of Equation 19 by λj and taking the summation over j, it yields: |et| = ∣∣∣∣∣∣ L∑ j=1 λje (j) t ∣∣∣∣∣∣ ≤ L∑ j=1 ∣∣∣λje(j)t ∣∣∣ ≤ L∑ j=1 |λj | · βf,t · βreg,t · δ(j) ∣∣∣U (j)t −Q(j)t (st, at)∣∣∣ ≤ βf,t · βreg,t · δmax L∑ j=1 |λj | · ∣∣∣U (j)t −Q(j)t (st, at)∣∣∣ = βf,t · βreg,t · δmax L∑ j=1 |λj | · ∣∣∣r(j)t + γ Q(j)t (st+1, ãt+1)−Q(j)t (st, at)∣∣∣ . (23) The second line follows from Lemma 2. If TD error in the regular space is bounded, then ∣∣∣r(j)t + γ Q(j)t (st+1, ãt+1)−Q(j)t (st, at)∣∣∣ ≤ K(j) for some K(j) ≥ 0. Hence, Equation 23 induces: |et| ≤ βf,t · βreg,t · δmax L∑ j=1 |λj | ·K(j) = βf,t · βreg,t · δmax ·K, (24) with K = ∑L j=1 |λj | ·K(j) ≥ 0; thus, |et| is also bounded for all t. 
As (by assumption) βreg,t ·βf,t converges to zero, we conclude that there exists T ≥ 0 such that for all t ≥ T we have |et| ≤ ξ ∣∣∣rt + γ max a′ Qt(st+1, a ′)−Qt(st, at) ∣∣∣ , (25) for any given ξ ∈ (0, 1]. Hence, not only |et| goes to zero w.p. one as t→∞, but also its magnitude always remains upperbounded below the size of TD update with any arbitrary margin ξ. Since TD update already satisfies assumptions 3 and 4 of Lemma 1, we conclude that with the presence of et those assumptions remain satisfied, at least after reaching some time T where Equation 25 holds. Finally, Lemma 2 also asserts that et is measurable given information at time t, as required by Lemma 1. Invoking Lemma 1, we can now conclude that the iterative process defined by Algorithm 1 converges to Q∗t w.p. one. PROOF OF LEMMA 2 PART 1 Our proof partially builds upon the proof presented by van Seijen et al. (2019). To simplify the notation, we drop j in fj , while we keep j in other places for clarity. By definition we have Q̃(j)t (s, a) = f ( Q (j) t (s, a) ) . Hence, we rewrite Equations 3, 4, and 5 of Algorithm 1 in terms of Q(j)t : U (j) t = r (j) t + γQ (j) t (st+1, ãt+1) , (26) Û (j) t = Q (j) t (st, at) + βreg,t ( U (j) t −Q (j) t (st, at) ) , (27) f ( Q (j) t+1(st, at) ) = f ( Q (j) t (st, at) ) + βf,t ( f ( Û (j) t ) − f ( Q (j) t (st, at) )) . (28) The first two equations yield: Û (j) t = Q (j) t (st, at) + βreg,t ( r (j) t + γ Q (j) t (st+1, ãt+1)−Q (j) t (st, at) ) . (29) By applying f−1 to both sides of Equation 28, we get: Q (j) t+1(st, at) = f −1 ( f ( Q (j) t (st, at) ) + βf,t ( f ( Û (j) t ) − f ( Q (j) t (st, at) ))) , (30) which can be rewritten as: Q (j) t+1(st, at) = Q (j) t (st, at) + βf,t ( Û (j) t −Q (j) t (st, at) ) + e (j) t , (31) where e(j)t is the error due to averaging in the mapping space instead of in the regular space: e (j) t := f −1 ( f ( Q (j) t (st, at) ) + βf,t ( f ( Û (j) t ) − f ( Q (j) t (st, at) ))) −Q(j)t (st, at)− βf,t ( Û (j) t −Q (j) t (st, at) ) . (32) We next analyze the behavior of e(j)t under the Theorem 1’s assumptions. To simplify, let us introduce the following substitutions: a → Q(j)t (st, at) b → Û (j)t v → (1− βf,t) a+ βf,t b w̃ → (1− βf,t)f(a) + βf,tf(b) w → f−1(w̃) The error e(j)t can be written as e (j) t = f −1((1− βf,t)f(a) + βf,tf(b))− ((1− βf,t)a+ βf,tb ) = f−1(w̃)− v = w − v. We remark that both v and w lie between a and b. Notably, e(j)t has a particular structure which we can use to bound w − v. See Table 1 for the ordering of v and w for different possibilities of f . We define three lines g0(x), g1(x), and g2(x) such that they all pass through the point (a, f(a)). As for their slopes, g0(x) has the derivative f ′(a), and g2(x) has the derivative f ′(b). The function g1(x) passes through point (b, f(b)) as well, giving it derivative (f(a)−f(b))/(a−b). See Figure 3 for all the possible cases. We can see that no matter if f is semi-convex or semi-concave and if it is increasing or decreasing these three lines will sandwich f over the interval [a, b] if b ≥ a, or similarly over [b, a] if a ≥ b. Additionally, it is easy to prove that for all x in the interval of a and b, either of the following holds: g0(x) ≥ f(x) ≥ g1(x) ≥ g2(x) (33) or g0(x) ≤ f(x) ≤ g1(x) ≤ g2(x). (34) The first one is equivalent to g−10 (y) ≤ f−1(y) ≤ g −1 1 (y) ≤ g −1 2 (y), (35) while the second one is equivalent to g−10 (y) ≥ f−1(y) ≥ g −1 1 (y) ≥ g −1 2 (y). 
(36) From the definition of g1 it follow that in all the mentioned possibilities of f combined with either of a ≥ b or b ≥ a, we always have g1(v) = w̃ and g−11 (w̃) = v. Hence, plugging w̃ in Equation 35 and Equation 36 (and noting that f−1(w̃) = w) deduces g−10 (w̃) ≤ w ≤ v ≤ g −1 2 (w̃) (37) or g−10 (w̃) ≥ w ≥ v ≥ g −1 2 (w̃). (38) Either way, regardless of various possibilities for f as well as a and b, we conclude that |e(j)t | = |v − w| ≤ |g−12 (w̃)− g −1 0 (w̃)|. (39) From definition of the lines g0 and g2, we write the line equations as follows: g0(x)− f(a) = f ′(a)(x− a), g2(x)− f(a) = f ′(b)(x− a). Applying these equations on the points (g−10 (w̃), w̃) and (g −1 2 (w̃), w̃), respectively, it yields: w̃ − f(a) = f ′(a)(g−10 (w̃)− a), w̃ − f(a) = f ′(b)(g−12 (w̃)− a), which deduce g−10 (w̃) = w̃ − f(a) f ′(a) + a ; g−12 (w̃) = w̃ − f(a) f ′(b) + a . (40) Plugging the above in Equation 39, it follows: |e(j)t | = |v − w| ≤ ∣∣∣∣ w̃ − f(a)f ′(b) − w̃ − f(a)f ′(a) ∣∣∣∣ = ∣∣∣∣( 1f ′(b) − 1f ′(a) ) (w̃ − f(a)) ∣∣∣∣ = ∣∣∣∣( 1f ′(b) − 1f ′(a) )( (1− βf,t)f(a) + βf,tf(b)− f(a) )∣∣∣∣ = ∣∣∣∣βf,t( 1f ′(b) − 1f ′(a) )( f(b)− f(a) )∣∣∣∣ . (41) We next invoke the mean value theorem, which states that if f is a continuous function on the closed interval [a, b] and differentiable on the open interval (a, b), then there exists a point c ∈ (a, b) such that f(b)− f(a) = f ′(c)(b− a). Remark that based on Assumption 2, c would satisfy f ′j(c) ≤ δ (j) 2 , also that 1f ′(b) − 1 f ′(a) ≤ 1 δ (j) 1 − 1 δ (j) 2 . Hence, |e(j)t | ≤ ∣∣∣∣βf,t( 1f ′(b) − 1f ′(a) )( f(b)− f(a) )∣∣∣∣ = ∣∣∣∣βf,t( 1f ′(b) − 1f ′(a) ) f ′(c)(b− a) ∣∣∣∣ ≤ ∣∣∣∣∣βf,t ( 1 δ (j) 1 − 1 δ (j) 2 ) · δ(j)2 · (b− a) ∣∣∣∣∣ = ∣∣∣∣∣βf,t ( 1 δ (j) 1 − 1 δ (j) 2 ) · δ(j)2 · ( Û (j) t −Q (j) t (st, at) )∣∣∣∣∣ . (42) From Equation 27, it follows that Û (j) t −Q (j) t (st, at) = βreg,t ( U (j) t −Q (j) t (st, at) ) . We therefore can write |e(j)t | ≤ ∣∣∣∣∣βf,t ( 1 δ (j) 1 − 1 δ (j) 2 ) δ (j) 2 ( Û (j) t −Q (j) t (st, at) )∣∣∣∣∣ = ∣∣∣∣∣βf,t ( 1 δ (j) 1 − 1 δ (j) 2 ) δ (j) 2 · βreg,t ( U (j) t −Q (j) t (st, at) )∣∣∣∣∣ = βf,t · βreg,t · δ(j) ∣∣∣U (j)t −Q(j)t (st, at)∣∣∣ , where δ(j) = δ(j)2 ( 1 δ (j) 1 − 1 δ (j) 2 ) is a positive constant. This completes the proof for the first part of the lemma. PART 2 For this part, it can be directly seen from Figure 3 that for a given fj , the order of w′, w, v, and v′ is fixed, regardless of whether a ≥ b or b ≥ a (in Figure 3, compare each plot A, B, C, and D with their counterparts at the bottom). Hence, the sign of e(j)t = w − v will not change for a fixed mapping. PART 3 Finally, we note that by its definition, e(j)t comprises quantities that are all defined at time t. Hence, it is fully measurable at time t, and this completes the proof. B DISCUSSION ON LOG VERSUS LOGLIN As discussed in Section 4.3 (Slope of Mappings), the logarithmic (Log) function suffers from a toolow slope when the return is even moderately large. Figure 4 visualizes the impact more vividly. The logarithmic-linear (LogLin) function lifts this disadvantage by switching to a linear (Lin) function for such returns. For example, if the return changes by a unit of reward from 19 to 20, then the change will be seen as 0.05 in the Log space (i.e. log(20) − log(19)) versus 1.0 in the LogLin space; that is, Log compresses the change by 95% for a return of around 20. As, in general, learning subtle changes is more difficult and requires more training iterations, in such scenarios normal DQN (i.e. Lin function) is expected to outperform LogDQN. 
On the other hand, when the return is small (such as in sparse reward tasks), LogDQN is expected to outperform DQN. Since LogLin exposes the best of the two worlds of logarithmic and linear spaces (when the return lies in the respective regions), we should expect it to work best if it is to be used as a generic mapping for various games. C EXPERIMENTAL DETAILS The human-normalized scores reported in our Atari 2600 experiments are given by the formula (similarly to van Hasselt et al. (2016)): (score_agent − score_random) / (score_human − score_random), where score_agent, score_human, and score_random are the per-game scores (undiscounted returns) for the given agent, a reference human player, and a random-agent baseline. We use Table 2 from Wang et al. (2016) to retrieve the human player and random agent scores. The relative human-normalized score of LogLinDQN versus a baseline in each game is given by (similarly to Wang et al. (2016)): (score_LogLinDQN − score_baseline) / (max(score_baseline, score_human) − score_random), where score_LogLinDQN and score_baseline are computed by averaging over the last 10% of each learning curve (i.e. the last 20 iterations). The reported results are based on three independent trials for LogLinDQN and LogDQN, and five independent trials for DQN. D ADDITIONAL RESULTS Figure 5 shows the relative human-normalized score of LogLinDQN versus LogDQN (top panel) and versus (Lin)DQN (bottom panel) for each game, across a suite of 55 Atari 2600 games. LogLinDQN significantly outperforms both LogDQN and (Lin)DQN on several games, and is otherwise on par with them (i.e. when LogLinDQN is outperformed by either of LogDQN or (Lin)DQN, the difference is not by a large margin). Figure 6 shows the raw (i.e. without human-normalization) learning curves across a suite of 55 Atari 2600 games. Figures 7, 8, and 9 illustrate the change of reward density (measured for positive and negative rewards separately) at three different training points (before training begins, after iteration 5, and after iteration 49) across a suite of 55 Atari 2600 games.
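For reference, a minimal sketch of the piecewise LogLin mapping from Section 5 and its inverse (an illustration under the default hyperparameters, not the released code); the final lines check the Appendix B point that a pure logarithmic mapping compresses a unit change of return near 19–20 to roughly 5% of what the linear branch reports.

```python
import math

def loglin(x, c=0.5, d=0.02):
    """Piecewise LogLin mapping: logarithmic up to the break-point x = 1 - d,
    linear with slope c beyond it (both pieces meet with the same slope)."""
    if x <= 1.0 - d:
        return c * math.log(x + d)
    return c * (x - 1.0 + d)

def loglin_inv(y, c=0.5, d=0.02):
    """Inverse mapping; the break-point x = 1 - d corresponds to y = 0."""
    if y <= 0.0:
        return math.exp(y / c) - d
    return y / c + 1.0 - d

# Round-trip checks on both branches.
assert abs(loglin_inv(loglin(0.3)) - 0.3) < 1e-9
assert abs(loglin_inv(loglin(19.0)) - 19.0) < 1e-9

# Appendix B example: a unit change of return from 19 to 20 is seen at its
# full size by the linear branch, whereas a logarithmic mapping compresses
# it to about 5% of that (i.e. roughly 95% compression).
log_change = math.log(20.0 + 0.02) - math.log(19.0 + 0.02)
lin_change = 1.0
print(log_change / lin_change)   # ~0.051
```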
1. What is the main contribution of the paper regarding Q-learning updates? 2. What are the strengths of the proposed approach, particularly in its ability to decompose the reward signal? 3. Do you have any concerns about the paper's structure and organization? 4. How does the reviewer assess the clarity and conciseness of the writing style? 5. What are the limitations and assumptions made by the proposed method, especially regarding the accessibility of minimum and maximum returns? 6. Can the reviewer suggest any improvements or additions to the experimental section?
Summary Of The Paper Review
Summary Of The Paper The paper describes a generic class of algorithms that decompose the reward signal into multiple channels and also map the value function into another generic space via arbitrary functions. They argue such a class of functions is useful for specifying certain properties of the learned value on a specific reward signal, which is also decomposed into channels. They show that known algorithms from the literature are instances of this convergent class of methods. Review This paper applies the observations from [1] to describe a general algorithm that separates the Q-learning update into two steps: (i) averaging due to environment stochasticity, and (ii) averaging over different states and actions, moving the latter into the other space. The paper presents these in a clear and simple way, which is very nice. It further formalizes these principles and incorporates them in a generic class of algorithms. It is interesting to see instances of this class in known existing algorithms. The paper is motivated from the view of a small agent in a big world needing to learn about many things at the same time, which is a promising avenue for autonomous agents. The paper is fairly clear, though it could benefit from some improvements in terms of structure. For instance, the section presenting the first idea describes a proof for a theorem that has not yet been stated, so it is a bit confusing. The authors could maybe move this discussion further down, after the theorem is presented. The assumption of having access to the minimum and maximum return is a bit alarming: how would one have access to those in an \emph{unknown} environment? This seems to imply that the algorithm is not applicable in the same way to all instances of the same problem. The paper feels a bit disconnected, as one idea is presented after the next in a disconnected manner. The ideas are only at the end connected into an algorithm. In the reward decomposition section, there is a mention of a "SARSA-like update" as common knowledge, without introducing it. The reward decomposition is linear. Why? What are the limitations or assumptions that this is making, and why do we think that channels with separate properties will emerge in the reward space, such that they can be mapped to value functions with different properties, beyond the game-like artificial environments? Where do the values of the weightings applied to the reward channels come from (λ_i)? The section discussing the "slope of mappings" makes insightful observations, but is a bit hard to follow, and would greatly benefit from an illustration. It is only here that the reader is told why "Assumption 2" was introduced all the way at the beginning, along with a lay interpretation. The experimental section does not explain the results. What is d, and where does it come from?
ICLR
Title Cross-lingual Transfer Learning for Pre-trained Contextualized Language Models Abstract Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant. In this work, building upon the recent works connecting cross-lingual transfer learning and neural machine translation, we thus propose a novel crosslingual transfer learning framework for PrLMs: TRELM. To handle the symbol order and sequence length differences between languages, we propose an intermediate “TRILayer” structure that learns from these differences and creates a better transfer in our primary translation direction, as well as a new cross-lingual language modeling objective for transfer training. Additionally, we showcase an embedding aligning that adversarially adapts a PrLM’s non-contextualized embedding space and the TRILayer structure to learn a text transformation network across languages, which addresses the vocabulary difference between languages. Experiments on both language understanding and structure parsing tasks show the proposed framework significantly outperforms language models trained from scratch with limited data in both performance and efficiency. Moreover, despite an insignificant performance loss compared to pre-training from scratch in resourcerich scenarios, our transfer learning framework is significantly more economical. 1 INTRODUCTION Recently, the pre-trained contextualized language model has greatly improved performance in natural language processing tasks and allowed the development of natural language processing to extend beyond the ivory tower of research to more practical scenarios. Despite their convenience of use, PrLMs currently consume and require increasingly more resources and time. In addition, most of these PrLMs are concentrated in English, which prevents the users of different languages from enjoying the fruits of large PrLMs. Thus, the task of transferring the knowledge of language models from one language to another is an important task for two reasons. First, many languages do not have the data resources that English uses to train such massive and data-dependent models. This causes a disparity in the quality of models available to English users and users of other languages. Second, languages share many commonalities - for efficiency’s sake, transferring knowledge between models rather than wasting resources training new ones is preferable. Multilingual PrLMs (mPrLMs) also aim to leverage languages’ shared commonalities and lessen the amount of language models needed, but they accomplish this by jointly pre-training on multiple languages, which means when they encounter new languages, they need to be pre-trained from scratch again, which causes a waste of resources. This is distinct from using TreLM to adapt models to new languages because TreLM foregoes redoing massive pre-training and instead presents a much more lightweight approach for transferring a PrLM. mPrLMs can risk their multilingualism and finetune on a specific target language, but we will demonstrate that using TreLM to transfer an mPrLM actually leads to better performance than solely finetuning. 
Therefore, in order to allow more people to benefit from the PrLM, we aim to transfer the knowledge stored in English PrLMs to models for other languages. The differences in training for new languages with mPrLMs and TRELM are shown in Figure 1. Machine translation, perhaps the most common cross-lingual task, is the task of automatically converting source text in one language to text in another language; that is, the machine translation model converts the input consisting of a sequence of symbols in some language into a sequence of symbols in another language; i.e., it follows a sequence-to-sequence paradigm. Language has been defined as “a sequence that is an enumerated collection of symbols in which repetitions are allowed and order does matter” (Chomsky, 2002). From this definition, we can derive three important differences in the sequences of different languages: symbol sets, symbol order, and sequence length, which can also be seen as three challenges for machine translation and three critical issues that we need to address in migrating a PrLM across languages. In this work, to resolve these critical differences in language sequences, we propose a novel framework that enables rapid cross-lingual transfer learning for PrLMs and reduces loss when only limited monolingual and bilingual data are available. To address the first aforementioned issue, symbol sets, we employ a new shared vocabulary and adversarially align our target embedding space with the raw embedding of the original PrLMs. For the symbol order and sequence length issues, our approach draws inspiration from neural machine translation methods that overcome the differences between languages (Bahdanau et al., 2014), and we thus propose a new cross-lingual language modeling objective, CdLM, which tasks our model with predicting the tokens for a from its parallel sentence in the target language. To facilitate this, we also propose a new “TRILayer” structure, which acts as an intermediary layer that evenly splits our models’ encoder layers set into two halves and serves to convert the source representations to the length and order of the target language. Using parallel corpora for a given language pair, we train two models (one in each translation direction) initialized with the desired pre-trained language model’s parameters. Combining the first half of our target-tosource model’s encoder layer set and the second half of our source-to-target model’s encoder layer set, we are thus able to create a full target-to-target language model. During training, we use three separate phases for the proposed framework, where combinations of Masked Language Modeling (MLM), the proposed CdLM, and other secondary language modeling objectives are used. We conduct extensive experiments on Chinese and Indonesian, as well as German and Japanese (shown in Appendix 10), in challenging situations with limited data and transfer knowledge from English PrLMs. On several natural language understanding and structure parsing tasks, BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) PrLM models that we migrate using our proposed framework improve the performance of downstream tasks compared to monolingual models trained from scratch and models pre-trained in a multilingual setting. Moreover, statistics show that our framework also has advantages in terms of training costs. 
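As a minimal sketch of the recombination step described above (names and the layer-count example are illustrative, not from the released code), the target-to-target model keeps the first half of the target-to-source model's encoder stack and the second half of the source-to-target model's encoder stack:

```python
def combine_encoder_halves(tgt2src_layers, src2tgt_layers):
    """Assemble the target-to-target encoder: first half taken from the
    target->source model, second half from the source->target model."""
    assert len(tgt2src_layers) == len(src2tgt_layers)
    half = len(tgt2src_layers) // 2
    return list(tgt2src_layers[:half]) + list(src2tgt_layers[half:])

# e.g. for a 12-layer BERT-base encoder, layers 1-6 would come from the
# target->source model and layers 7-12 from the source->target model.
```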
2 RELATED WORK Because of neural networks’ reliance on heavy amounts of data, transfer learning has been an increasingly popular method of exploiting otherwise irrelevant data in recent years. It has seen many applications and has been used particularly often in Machine Translation (Zoph et al., 2016; Dabre et al., 2017; Qi et al., 2018; Nguyen & Chiang, 2017; Gu et al., 2018; Kocmi & Bojar, 2018; Neubig & Hu, 2018; Kim et al., 2019; Aji et al., 2020), in which transfer learning is generally used to improve translation performance in a low resource scenario using the knowledge of a model trained in a high resource scenario. In addition to cross-lingual situations, transfer learning has also been applied to adapt across domains in the POS tagging (Schnabel & Schütze, 2013) and syntactic parsing (McClosky et al., 2010; Rush et al., 2012) tasks, for example, as well as specifically for adapting language models to downstream tasks (Chronopoulou et al., 2019; Houlsby et al., 2019). One particular difference between our method and many transfer learning methods is that we do not exactly use the popular “Teacher-Student” framework of transfer learning, which is particularly often used in knowledge distillation (Hinton et al., 2015; Sanh et al., 2020) - transferring knowledge from a larger model to a smaller model. We instead use two “student” models, and unlike traditional methods, these student models do not share a target space with their teacher (the language is different), and their parameters are initialized with the teacher’s parameters rather than being probabilistically guided by the teacher during training. When using transfer learning for cross-lingual training, there have been various solutions for the vocabulary mismatch. Zoph et al. (2016) did not find vocabulary alignment to be necessary, while Nguyen & Chiang (2017) and Kocmi & Bojar (2018) used joint vocabularies, and Kim et al. (2019) made use of cross-lingual word embeddings. One particular work that inspired us is that of Lample et al. (2018), who also used an adversarial approach to align word embeddings without any supervision while achieving competitive performance for the first time. This succeeded the work of Zhang et al. (2017), who also used an adversarial method but did not achieve the same performance. Also like our aligning method, Xu et al. (2018) took advantage of the similarities in embedding distributions and cross-lingually transferred monolingual word embeddings by simultaneously optimizing based on distributional similarity in the embedding space and the back-translation loss. Several works have also explored adapting the knowledge of large contextualized pre-trained language models to more languages, which poses a much more complicated problem than transferring non-contextualized word embeddings. The previous mainstream approach for accommodating more languages is using mPrLMs. Implicitly joint multilingual models, such as m-BERT (Devlin et al., 2019), XLM (Conneau & Lample, 2019), XLM-R (Conneau et al., 2019), and mBART (Liu et al., 2020), are usually evaluated on multi-lingual benchmarks such as XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020), while some works use bilingual dictionaries or sentences for explicit cross-lingual modeling with mPrLMs (Schuster et al., 2019; Mulcaire et al., 2019; Liu et al., 2019a; Cao et al., 2020). Transferring monolingual PrLMs, another research branch, is relatively new. Artetxe et al.
(2020) presented a monolingual transformer-based masked language model that was competitive with multilingual BERT when transferred to a second language. To facilitate this, they did not rely on a shared vocabulary or joint training (to which multilingual models’ performance is often attributed) and instead simply learned a new embedding matrix through MLM in the new language while freezing the parameters of all other layers. Tran (2020) used a similar approach, though instead of randomly initialized embeddings, he used a sparse word translation matrix on English embeddings to create word embeddings in the target language, reducing the training cost of the model. 3 TRELM Cross-lingual Transfer Learning for Language Modeling (TRELM) is a framework that rapidly migrates existing PrLMs. In this framework, the embedding space of a source language is linearly aligned with that of a target language using an adversarial embedding alignment, which we experimentally verified is effective due to shared spatial structure similarities (refer to Appendix A.1 for details). Leveraging joint learning, we propose a novel pre-training objective, CdLM, and unify it with MLM into one format. In regards to model structure, we propose TRILayer, an intermediary transfer layer, to support language conversion during the CdLM training process. 3.1 TRILAYER AND CdLM For the disparities in symbol sets of different languages and different pre-trained models, we employ embedding space alignment, while for the issues of symbol order and sequence length, unlike previous work, we do not assume that the model can implicitly learn these differences; instead, we leverage language embeddings and explicit alignment information and propose a novel Cross-Lingual Language Modeling (CdLM) training objective and a Transfer Learning Intermediate Layer (TRILayer) structure as a pivot layer in the model to bridge the differences of the two languages. To clearly explain our training approach, we take the popular PrLM BERT as a basis for introduction. In the original BERT (as shown in Figure 5(a)), the Transformer (Vaswani et al., 2017) is taken as the backbone of the model, which takes tokens and their positions in a sequence as input before encoding this sequence into a contextualized representation using multiple stacked multi-head self-attention layers. During the pre-training process, BERT predominantly adopts an MLM training objective, in which a [MASK] (also written as [M]) token is used to replace a token in the sequence selected by a predetermined probability, and the original token is predicted as the gold target. Formally speaking, given a sentence $X = \{x_1, x_2, ..., x_T\}$ and $\mathcal{M}$, the set of masked positions, the training loss $\mathcal{L}_{\mathrm{MLM}}$ for the MLM objective is:

$$\mathcal{L}_{\mathrm{MLM}}(\theta_{\mathrm{LM}}) = -\sum_{i=1}^{|\mathcal{M}|} \log P_{\theta_{\mathrm{LM}}}(x_{\mathcal{M}_i} \mid X_{\backslash\mathcal{M}}),$$

where $\theta_{\mathrm{LM}}$ are the parameters of BERT, $|\mathcal{M}|$ is the size of the set $\mathcal{M}$, and $X_{\backslash\mathcal{M}}$ indicates the sequence after masking. An example of MLM training is shown in the top-left region of Figure 5. Much work in the field of machine translation suggests that the best way to transfer learning across languages is through translation learning, because the machine translation model must address all three of the above-described language differences in the training process. Therefore, we take inspiration from the design of machine translation, especially the design of non-autoregressive machine translation, and propose a Cross-Lingual Language Modeling (CdLM) objective.
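To make the MLM loss above concrete, the following is a minimal PyTorch-style sketch of how it can be computed from per-token logits; the function name and tensor shapes are illustrative assumptions rather than part of the TRELM implementation.

```python
import torch
import torch.nn.functional as F

def mlm_loss(logits, original_ids, masked_positions):
    """L_MLM: negative log-likelihood of the original tokens at the masked positions.

    logits:           (T, V) vocabulary logits produced by the encoder for one sequence
    original_ids:     (T,)   token ids of the unmasked sentence X
    masked_positions: list of indices forming the masked-position set M
    """
    log_probs = F.log_softmax(logits, dim=-1)            # log P(token | X_\M) per position
    idx = torch.tensor(masked_positions, dtype=torch.long)
    gold = original_ids[idx]                              # x_{M_i}, the original tokens
    return -log_probs[idx, gold].sum()                    # sum over i = 1..|M|
```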
CdLM is just like a traditional language modeling objective, except across languages, so given an input of source tokens, it generates tokens in a separate target language. We describe the differences between CdLM and related MLM variants (such as Translation Language Modeling (TLM) and BRidge Language Modeling (BRLM)) in Appendix A.4. With this proposed objective, we aim to make as few changes as possible to the existing PrLM and thus introduce a Translation/Transfer Intermediate Layer (“TRILayer”) structure, which bridges two opposing half-models to create our final model. First, in the modified version of BERT for transfer learning, we add a language embedding $E_{lng}$ following the practice of Conneau & Lample (2019) to indicate the current language being processed by the model. This is important because the model will handle both the source and target languages simultaneously in 2 of our 3 training phases (described in the next subsection). The new input embedding is:

$$E_{inp} = E_{wrd} + E_{seg} + E_{pos} + E_{lng},$$

where $E_{wrd}$, $E_{seg}$, and $E_{pos}$ are the word (token) embedding, segment embedding, and position embedding, respectively. Next, we denote $N$ as the number of stacked Transformer layers ($L = \{l_1, l_2, ..., l_N\}$) in BERT and split the BERT layers into two halves $L_{\le\frac{N}{2}} = \{l_1, ..., l_{\frac{N}{2}}\}$ and $L_{>\frac{N}{2}} = \{l_{\frac{N}{2}+1}, l_{\frac{N}{2}+2}, ..., l_N\}$. The TRILayer is placed between the two halves (making the total number of layers $N+1$) and functions as a pivot. In the $L_{\le\frac{N}{2}}$ half, the input embedding is encoded by its Transformer layers into hidden states $H_i = \mathrm{TRANSFORMER}_i(H_{i-1})$, in which $H_0 = E_{inp}$ and $\mathrm{TRANSFORMER}_i$ indicates the $i$-th Transformer layer in the model. Before the outputs of the $L_{\le\frac{N}{2}}$ half are fed into the TRILayer, the source hidden representation $H_{\frac{N}{2}}$ is reordered according to a new order $O$. During CdLM training, for a source language sentence $X = \{x_1, x_2, ..., x_T\}$, a possible translation sentence $Y = \{y_1, y_2, ..., y_{T'}\}$ is provided. To find the new order, explicit alignment information between the transfer source and target sentences is obtained using an unsupervised external aligner tool. We define the source-to-target alignment pair set as:

$$A_{X \to Y} = \mathrm{ALIGN}(X, Y) = \{(x_{\mathrm{ALNIDX}(y_1)}, y_1), (x_{\mathrm{ALNIDX}(y_2)}, y_2), ..., (x_{\mathrm{ALNIDX}(y_{T'})}, y_{T'})\},$$

where $\mathrm{ALNIDX}(\cdot)$ is a function that returns the alignment index in the source language, or $x_{null}$ when there is no explicit alignment between the token in the target language and any source language token. $x_{null}$ represents a special placeholder token [P] that is always appended to the inputs. Finally, the source hidden representation $H_{\frac{N}{2}}$ is reordered according to the new order $O = \{\mathrm{ALNIDX}(y_1), \mathrm{ALNIDX}(y_2), ..., \mathrm{ALNIDX}(y_{T'})\}$ from the alignment set $A_{X \to Y}$, creating $H^{O}_{\frac{N}{2}}$. Thus, the resultant hidden representation $H^{O}_{\frac{N}{2}}$ is in the order of the target language and is consistent with the target sequence in length, making it usable for language modeling prediction. Unfortunately, the position information is lost in reordering. To combat this, the position embedding and language embedding are reintegrated as follows:

$$H_{TL} = \mathrm{TRANSFORMER}_{TL}(H^{O}_{\frac{N}{2}} + E_{lng_Y} + E_{pos}),$$

where $H_{TL}$ is the output of the TRILayer, $\mathrm{TRANSFORMER}_{TL}$ is the Transformer structure inside the TRILayer, and $E_{lng_Y}$ is the target language embedding. Next, $H_{TL}$ is encoded in the $L_{>\frac{N}{2}}$ half as done for the $L_{\le\frac{N}{2}}$ half (let $H_{\frac{N}{2}} = H_{TL}$ for the $L_{>\frac{N}{2}}$ half) to predict the final full sequence of the target language. The model is trained to minimize the loss $\mathcal{L}_{\mathrm{CdLM}}$, which is:

$$\mathcal{L}_{\mathrm{CdLM}}(\theta_{\mathrm{LM}}) = -\sum_{i=1}^{T'} \log P_{\theta_{\mathrm{LM}}}(y_i \mid X, A_{X \to Y}).$$
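The reordering step performed before the TRILayer can be sketched as follows; this is an illustrative PyTorch snippet, not the released implementation, and the helper name and the use of -1 to mark unaligned target tokens are assumptions.

```python
import torch

def reorder_for_trilayer(h_src, order, h_placeholder):
    """Build H^O_{N/2}: put source hidden states into target order and target length.

    h_src:         (T_src, d) hidden states H_{N/2} of the source sentence
    order:         list of length T_tgt with order[j] = ALNIDX(y_j), or -1 if y_j is
                   unaligned (falls back to the [P] placeholder state)
    h_placeholder: (d,) hidden state of the appended [P] token (x_null)
    """
    rows = [h_src[i] if i >= 0 else h_placeholder for i in order]
    return torch.stack(rows, dim=0)   # (T_tgt, d); the TRILayer then re-adds the target
                                      # language embedding and position embedding
```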
To enable MLM and CdLM to train models simultaneously rather than through successive optimization, we provide a unified view for MLM and CdLM language modeling:

$$\mathcal{L}_{\mathrm{ULM}}(\theta_{\mathrm{LM}}) = -\sum_{i=1}^{T_{max}} \mathbb{1}(i \in C) \log P_{\theta_{\mathrm{LM}}}(w_i \mid S, A),$$

where $T_{max}$ denotes the maximum sequence length for language modeling, $S$ is the input sequence, $w_i$ is the $i$-th token in the output sequence $W$, $C$ is the set of positions to be predicted, and $A$ is the alignment between the input and output sequences. Both the input and output sequences are padded to the maximum sequence length $T_{max}$ during training. $\mathbb{1}(i \in C)$ represents the indicator function and equals 1 when the $i$-th position is in the set of positions to be predicted and 0 otherwise. In MLM, $S = X_{\backslash C}$, $A = \{(1, 1), (2, 2), ..., (T_{max}, T_{max})\}$ is a successive alignment, and $W = X$, while in CdLM, $S = X$, $A = A_{X \to Y}$, and $W = Y$. Due to the unified language modeling abstractions of MLM and CdLM, the input and output forms, as well as the internal logic of their models, are the same. Therefore, models can be trained with the two objectives in the same mini-batch, which enhances the stability of transfer training. 3.2 TRIPLE-PHASE TRAINING In our TRELM framework, the whole training process is divided into three phases with different purposes but the same design goal: minimize the number of parameter updates as much as possible to speed up convergence and enhance training stability. The three phases are commonality training, transfer training, and language-specific training. In the commonality training phase, only the target language MLM objective is used, while in the transfer training phase, the CdLM and target language MLM objectives are both used at the same time, and in the final language-specific training phase, target language MLM and other secondary language modeling objectives are adopted. Commonality Training Though languages are very different on the surface, they also share a lot of underlying commonalities, often called linguistic universals or cross-linguistic generalizations. We therefore take advantage of these commonalities between languages and jointly learn the transfer source and target languages. In this phase, the parameters of the position embedding, segment embedding, and Transformer layers are initialized with the original BERT, the TRILayer is initialized with the parameters of Transformer layer $l_{\frac{N}{2}}$, the word embedding is initialized with the output of the adversarial embedding alignment, and orthogonal weight initialization is adopted for the language embedding. For this phase, the model is trained by joint MLM with monolingual inputs from both the source and target languages. Moreover, in this training process, to make convergence fast and stable, the parameters of BERT’s backbone (Transformer) layers are fixed; only the embeddings and TRILayer are updated by gradient-based optimization based on the joint MLM loss. The final model obtained in this phase is denoted as $\theta^{ct}_{\mathrm{LM}}$. Transfer Training Since the model is not pre-trained from scratch, making the model aware of changes in inputs is a critical factor for a maximally rapid and accurate migration in the case of limited data. Since there is not enough monolingual data in the target language to allow the model to adapt to the new language, we use the supervisory signal from the two languages’ differences and leverage parallel corpora to directly train the model. Specifically, we split the original BERT transformer layers into two halves.
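The unified objective can be read as a single masked cross-entropy over the padded output sequence; the sketch below, with illustrative names, shows how MLM and CdLM become two instantiations of the same loss.

```python
import torch

def unified_lm_loss(log_probs, output_ids, predict_positions):
    """L_ULM over a padded sequence of length T_max.

    log_probs:         (T_max, V) model log-probabilities log P(w_i | S, A)
    output_ids:        (T_max,)   output sequence W (W = X for MLM, W = Y for CdLM)
    predict_positions: iterable of positions in C (masked positions for MLM,
                       all real target positions for CdLM)
    """
    idx = torch.tensor(list(predict_positions), dtype=torch.long)   # indicator 1(i in C)
    return -log_probs[idx, output_ids[idx]].sum()
```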
With a parallel corpus from the source language to the target language and one from the target language to the source language, we train two corresponding models, both of which are initialized using the parameters learned in the previous phase. In the source-to-target model, only the upper half of the encoder layers is trained, and the lower half is kept fixed, while the converse is true for the target-to-source model. The TRILayer then provides cross-lingual order and length adjustment, which is similar to the behavior of a neural machine translation model. Thus, we create two reciprocal models: one whose upper half can handle the target language, and one whose lower half can handle it, which we connect via the TRILayer. Finally, the two trained models are combined as $\theta^{tt}_{\mathrm{LM}}$. We describe the full procedure in Algorithm 1.

Algorithm 1 Transfer Training of Pre-trained Contextualized Language Models
Input: The commonality pre-trained model parameters $\theta^{ct}_{\mathrm{LM}}$, languages $L = \{lng_X, lng_Y\}$, parallel training set $P = \{(X^{L_0}_i, X^{L_1}_i)\}_{i=1}^{|P|}$, number of training steps $K$
1: for $j$ in $0, 1$ do
2:   Initialize model parameters $\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}} \leftarrow \theta^{ct}_{\mathrm{LM}}$
3:   if $j == 0$ then
4:     Fix the parameters of the $L_{\le\frac{N}{2}}$ half of $\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}$
5:   else
6:     Fix the parameters of the $L_{>\frac{N}{2}}$ half of $\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}$
7:   end if
8:   for step in $1, 2, 3, ..., K$ do
9:     Sample batch $(X^{L_j}, X^{L_{(1-j)}})$ from $P$.
10:    Alignment information $A$: $A_{L_j \to L_{(1-j)}} \leftarrow \mathrm{ALIGN}(X^{L_j}, X^{L_{(1-j)}})$
11:    CdLM loss: $\mathcal{L}_{\mathrm{CdLM}} \leftarrow -\sum \log P_{\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}}(X^{L_{(1-j)}} \mid X^{L_j}, A_{L_j \to L_{(1-j)}})$
12:    Masked version of $X^{L_1}$: $X^{L_1}_{\backslash\mathcal{M}} \leftarrow \mathrm{MASK}(X^{L_1})$
13:    MLM loss: $\mathcal{L}_{\mathrm{MLM}} \leftarrow -\sum \log P_{\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}}(X^{L_1}_{\mathcal{M}} \mid X^{L_1}_{\backslash\mathcal{M}})$
14:    CdLM+MLM update: $\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}} \leftarrow \mathrm{optimizer\_update}(\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}, \mathcal{L}_{\mathrm{CdLM}}, \mathcal{L}_{\mathrm{MLM}})$
15:   end for
16: end for
17: Combine the two obtained models as $\theta^{tt}_{\mathrm{LM}}$ by choosing the $L_{>\frac{N}{2}}$ half model parameters from model $\theta^{L_0 \to L_1}_{\mathrm{LM}}$ and the $L_{\le\frac{N}{2}}$ half model parameters from model $\theta^{L_1 \to L_0}_{\mathrm{LM}}$, and average the other parameters (such as embedding and TRILayer parameters) of the two models
Output: Learned model $\theta^{tt}_{\mathrm{LM}}$

Language-specific Training During the language-specific training phase, we only use the monolingual corpus of the target language and further strengthen the target language features for the model obtained in the transfer training phase. We accomplish this by using the MLM objective and other secondary objectives such as Next Sentence Prediction (NSP). 4 EXPERIMENTS In this section, we discuss the details of the experiments undertaken for this work. We conduct experiments based on English PrLMs.1 We transfer via the English-to-Chinese and English-to-Indonesian directions for the purpose of comparing with previous recent work. We describe the training details and parameters in Appendix A.5. From English to Chinese and English to Indonesian, we transfer two pre-trained contextualized language models: BERT and RoBERTa. Our performance evaluation of the migrated models is mainly conducted on two types of downstream tasks: language understanding and language structure parsing. Please refer to Appendix A.6 for introductions of the tasks and baselines and Appendix A.7 for an ablation study. We note that the comparisons between models trained using TRELM and the monolingual and multilingual PrLMs trained from scratch on the target language (see Table 1) are only for illustrating the relative performance loss of the model produced by TRELM. These models are not directly comparable, as we intentionally use less data to train models when using TRELM.

1 Our code is available at https://github.com/agcbi2017/TreLM.
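Steps 3-7 and 17 of Algorithm 1 amount to freezing one half of the encoder in each direction and then stitching the two trained halves back together. Below is a hedged PyTorch-style sketch of this bookkeeping; attribute names such as `model.layers` are assumptions about the model object, not TRELM's actual API, and averaging of the non-layer parameters is omitted.

```python
def freeze_half(model, lower: bool):
    """Freeze the lower (L_{<=N/2}) or upper (L_{>N/2}) half of the Transformer stack;
    embeddings and the TRILayer remain trainable."""
    n = len(model.layers)
    frozen = model.layers[: n // 2] if lower else model.layers[n // 2:]
    for layer in frozen:
        for p in layer.parameters():
            p.requires_grad = False

def combine_halves(src2tgt, tgt2src, combined):
    """Step 17: copy the lower half from the target->source model and the upper half
    from the source->target model into a fresh model (embedding/TRILayer averaging
    is handled separately)."""
    n = len(combined.layers)
    for i in range(n):
        donor = tgt2src if i < n // 2 else src2tgt
        combined.layers[i].load_state_dict(donor.layers[i].state_dict())
    return combined
```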
Continuing to pre-train the PrLMs on the target language would also obviously further improve their performance, but this is not our main focus. Language Understanding We first compare the PrLMs transferred by TRELM alongside the results of the existing monolingual pre-trained BERT-base-chinese and the multilingual pre-trained BERT-base-multilingual in Table 1 using the CLUE benchmark. When comparing with the same model architecture, taking BERT as an example, our model TRI-BERT-base exceeds m-BERT-base and BERT-small and is slightly weaker than the original BERT-base. Compared with BERT-small, which is trained from scratch for a longer time, our TRI-BERT-base generally achieves better results on these NLU tasks. This demonstrates that because of the commonalities of languages, models for languages with relatively few resources can benefit from language models pre-trained on languages with richer resources, which confirms our cross-lingual transfer learning framework’s effectiveness. m-BERT is another potential language model migration scheme and has the advantage of supporting multiple languages at the same time; however, in order to be compatible with multiple languages, the unique characteristics of each language are neglected. Our TRI-BERT, which is built on top of BERT-base, instead focuses on and highlights language differences during the transfer learning process, which leads to an increase in performance compared to m-BERT. When TRI-BERT and TRI-RoBERTa have the same model size, TRI-RoBERTa outperforms TRI-BERT, which is consistent with the performance differences between the original RoBERTa and BERT, indicating that our migration approach maintains the performance advantages of PrLMs.

Table 3: Dependency SRL results on the CoNLL-2009 Chinese benchmark.
Model                          |   P   |   R   |  F1
(Cai et al., 2018)             | 84.7  | 84.0  | 84.3
+BERT-base                     | 86.86 | 87.48 | 87.17
+m-BERT-base                   | 85.17 | 85.53 | 85.34
+TRI-BERT-base                 | 86.15 | 85.58 | 85.86
+TRI-RoBERTa-base              | 87.08 | 86.99 | 87.03
+TRI-RoBERTa-base (w/o CdLM)   | 85.77 | 85.62 | 85.69

Figure 2: Language modeling effects vs. parallel data size on the evaluation set (x-axis: parallel data from 0 to 1M sentences; left y-axis: BPW from 2 to 4; right y-axis: Sem-F1 from 86 to 87).

Language Structure Parsing We report results on dependency parsing for Chinese and Indonesian in Table 2. As shown in the results, the baseline model is greatly improved by the PrLMs. In Chinese, the performance of BERT-base is far superior to m-BERT-base, which highlights the importance of the unique nature of the language for downstream tasks, especially for refined structural analysis tasks. In Indonesian, IndoBERT (Wilie et al., 2020) performs worse than m-BERT, which we suspect is due to IndoBERT’s insufficient pre-training. We also compare TRI-BERT-base and IndoBERT-base on Indonesian, whose ready-to-use language resources are relatively small compared to English. We find that although pre-training PrLMs on the available corpora is possible, because of the size of language resources, engineering implementation, etc., our migrated model is more effective than the model pre-trained from scratch. This shows that migrating from ready-made language models produced from large-scale language training and extensively validated by the community is more effective than pre-training on relatively small and limited language resources. In addition, we also conduct experiments for these pre-trained and migrated models on Chinese SRL.
mPrLMs are another important and competitive approach that can adapt to cross-lingual PrLM applications, so we also include several mPrLMs in our comparison on dependency parsing. Specifically, we used XLM, a monolingual and multilingual PrLM pre-training framework, as our basis. For TRELM, we used XLM-en-2048, officially provided by Conneau & Lample (2019), as the source model. The data amount used and the number of training steps are consistent with TRI-BERT/TRI-RoBERTa. For the mPrLM, we combined EN, ID, and ZH sentences (including monolingual and parallel sentences) together (10M sentences in total) to train an EN-ID-ZH mPrLM with MLM and TLM objectives. The performance comparison of these three PrLMs on the dependency parsing task is shown in the lower part of Table 2. From the results, we see that mPrLMs pre-trained from scratch have no special performance advantage over TRELM when the corpus size is constant, especially when not using the cross-lingual transfer learning objective TLM, which models parallel sentences. In fact, our TRI-XLM-en-2048 solidly outperforms its two multilingual XLM counterparts. Monolingual PrLMs generally outperform mPrLMs, which likely leads to the performance advantages shown with monolingual migration. Additionally, like our TRELM, mPrLMs can also finetune on only the target language to improve performance, and leveraging TRELM to transfer an mPrLM leads to even further gains, as seen in Table 9 in the appendix. While the two approaches can compete with each other, they have their own advantages in general. In particular, TRELM is more suitable for transferring to additional languages that were not considered in the initial pre-training phase and for low-resource scenarios, while mPrLMs have the advantage of being able to train and adapt to multiple languages at once. In Table 3, we compare a model migrated without CdLM to the full one. To compensate for the removal of CdLM, we added a monolingual corpus with the same size as the parallel corpora and trained the model for an extra 80K steps, but despite using more target monolingual data and training steps, the performance was still much better when CdLM was included. 5 DISCUSSION Effects of Parallel Data Scale Since the proposed TRELM framework relies on parallel corpora to learn the language differences explicitly, the sizes of the parallel corpora used are also of concern. We explored the influence of different parallel corpus sizes on the performance of the models transferred with the TRI-RoBERTa-base architecture. The variation curve of the BPW score with the size of the parallel data is shown in Figure 2. We see that with increasingly more parallel data, BPW gradually decreases, but this decrease slows as the data grows. The effect of the parallel corpora for cross-lingual transfer therefore has an upper bound, because when the parallel corpus reaches a certain size, the errors from the alignment extraction tools cannot be ignored, and additionally, due to how lightweight the TRILayer structure is, TRILayers can only contain so much cross-lingual transfer information, which further restricts the growth of the migration performance. Pre-training Cost vs. Migration Training Cost The training cost is an important factor for choosing whether to pre-train from scratch or to migrate from an existing PrLM. We list the training data size, model parameters, training hardware, and training time of several public PrLM models and compare them with our models. The comparisons are shown in Table 4.
Although the training hardware and engineering implementation of various PrLM models are different, this can still be used as a general reference. When the model size is the same, our proposed transfer learning is much faster than pre-training from scratch, and less data is used in the transfer learning process. In addition, the total training time of our large model migration training is less than that of even the base model pre-training when the hardware is kept the same. Therefore, the framework we propose can be used as a good supplementary scheme for PrLMs in situations when time or computing resources are restricted.

Table 4: Training cost comparison (columns: Model, Data, BSZ, Steps, Params, Hardware, Train Time in GPU/TPU·Days).

6 CONCLUSION AND FUTURE WORK In this work, we present an effective method of transferring knowledge from a given language’s pre-trained contextualized language model to a model in another language. This is an important accomplishment because it allows more languages to benefit from the massive improvements arising from these models, which have been primarily concentrated in English. As a further plus, this method also enables more efficient model training, as languages have commonalities, and models in the target language can exploit these commonalities and quickly adopt these common features rather than learning them from scratch. In future work, we plan to use our framework to transfer other models such as ALBERT and models for more languages. We also aim to develop an unsupervised cross-lingual transfer learning objective to remove the reliance on parallel sentences. A APPENDIX A.1 ADVERSARIAL EMBEDDING ALIGNING Since the symbol sets in different languages are different, the first step in the cross-lingual migration of PrLMs is to supplement or even replace their vocabularies. In our proposed framework, to make the best use of the commonalities between languages, we choose to use a shared vocabulary covering multiple languages rather than replace the original language vocabulary with one for the new language. In addition, in current PrLMs, a subword vocabulary is generally adopted in order to better mitigate out-of-vocabulary (OOV) problems caused by limited vocabulary size. To accommodate the introduction of a shared vocabulary, it is necessary to jointly re-train the subword model to ensure that some common words in different languages are consistent in subword segmentation, which leads to the problem that some tokens in the newly acquired subword vocabulary are different from those in the original subword vocabulary, even though they belong to the same language. To address this issue, we consider the most complicated case, in which the vocabulary is completely replaced by a new one. Consequently, we assume that there are two embedding spaces: one is the embedding of the original vocabulary, which is well-trained in the language pre-training process, and the other is the embedding of the new vocabulary, yet to be trained. When considering raw embeddings and non-contextualized embeddings (e.g. Word2vec), it is easy to see that their training objectives are similar in theory. The only differences are the addition of context and the change in model structure to accommodate language prediction. Despite these differences, non-contextualized embeddings can be used to simulate the raw embeddings in a PrLM that we aim to replace (refer to Appendix A.2 for a detailed explanation).
Although the two embedding spaces we consider are similar in structure, they may be at different positions in the whole real embedding space, so an extra alignment process is required, and although common tokens may exist, due to the inconsistent token granularity from using byte-level byte-pair encoding (BBPE) (Radford et al., 2019), a matching token of the two embedding spaces cannot be utilized for embedding space alignment, as it is likely to represent different meanings. Therefore, inspired by Lample et al. (2018), we present an adversarial approach for aligning the word2vec embedding space to the PrLM’s raw embedding space without supervision. With this approach, we aim to minimize the differences between the two embedding spaces brought about by different similarity forms. We define $U = \{u_1, u_2, ..., u_m\}$ and $V = \{v_1, v_2, ..., v_n\}$ as the two embedding spaces of $m$ and $n$ tokens from the PrLM and word2vec training, respectively. In the adversarial training approach, a linear mapping $W$ is trained to make the spaces $WV = \{Wv_1, Wv_2, ..., Wv_n\}$ and $U$ as close as possible, while a discriminator $D$ is employed to discriminate between tokens randomly sampled from the spaces $WV$ and $U$. Let $\theta_{adv}$ denote the parameters of the adversarial training model, and let the probabilities $P_{\theta_{adv}}(1(z) \mid z)$ and $P_{\theta_{adv}}(0(z) \mid z)$ indicate whether or not the sampling source prediction is the same as its real space for a vector $z$. Therefore, the discrimination training loss $\mathcal{L}_D(\theta_D \mid W)$ and the mapping training loss $\mathcal{L}_W(W \mid \theta_D)$ are defined as:

$$\mathcal{L}_D(\theta_D \mid W) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_{adv}}(1(Wv_i) \mid Wv_i) - \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_{adv}}(1(u_i) \mid u_i),$$

$$\mathcal{L}_W(W \mid \theta_D) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_{adv}}(0(Wv_i) \mid Wv_i) - \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_{adv}}(0(u_i) \mid u_i),$$

where $\theta_D$ are the parameters of the discriminator $D$, which is implemented as a multilayer perceptron (MLP) with two hidden layers and Leaky-ReLU as the activation function. During the adversarial training, the discriminator parameters $\theta_D$ and the mapping $W$ are optimized successively with the discrimination training loss and the mapping training loss. To enhance the effect of embedding space alignment, we adopted the same techniques of iterative refinement and cross-domain similarity local scaling as Lample et al. (2018). While the two embedding spaces in Lample et al. (2018) can both be updated by gradient, we consider $U$ as the goal spatial structure and hence fix $U$ throughout the training process, and we update $W$ to better align $V$. A.2 ANALYZING NON-CONTEXTUALIZED EMBEDDINGS AND PrLMS’ RAW EMBEDDINGS Bidirectional PrLMs such as BERT (Devlin et al., 2019) use Masked Language Modeling (MLM) as the training objective, in which the model is required to predict a masked part of the sentence. This training paradigm has no essential difference from word2vec (Mikolov et al., 2013). Word2vec employed a simple single-layer perceptron neural network and restricted the context for the masked part to a sliding window, while recent mainstream PrLMs adopt the self-attention-based Transformer as the context encoder, which can utilize the whole sentence as context. Because of this, we speculate that BERT’s raw embeddings and word2vec embeddings have a similar nature, and that we can simulate BERT’s raw embeddings with the word2vec embeddings through some special designs. To verify our theory, we studied the important relational nature of embeddings. Specifically, we chose BERT-base-cased’s raw embeddings and word2vec-based FastText cc.en.300d embeddings (Grave et al., 2018) and evaluated the cosine similarity of single terms compared to other terms in their vocabularies.
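The alternating optimization of the two losses above can be sketched as follows in PyTorch. This is a minimal illustration, not the paper's code: the discriminator hidden width is an assumption, the constant term of the mapping loss on $u_i$ (which has no gradient with respect to $W$) is dropped, and all names are illustrative.

```python
import torch
import torch.nn as nn

d = 768                                        # embedding dimension (illustrative)
W = nn.Linear(d, d, bias=False)                # linear mapping applied to the word2vec space V
D = nn.Sequential(nn.Linear(d, 2048), nn.LeakyReLU(),
                  nn.Linear(2048, 2048), nn.LeakyReLU(),
                  nn.Linear(2048, 1))          # MLP discriminator with two hidden layers
bce = nn.BCEWithLogitsLoss()

def adversarial_step(u_batch, v_batch, opt_d, opt_w):
    """One alternating update: discriminator loss L_D, then mapping loss L_W.
    u_batch: PrLM raw embeddings sampled from U (U stays fixed as the goal space)
    v_batch: word2vec embeddings sampled from V
    """
    # --- discriminator update: predict the true source of each vector ---
    wv = W(v_batch).detach()
    pred = torch.cat([D(wv), D(u_batch)]).squeeze(-1)
    real = torch.cat([torch.zeros(len(wv)), torch.ones(len(u_batch))])   # 0 = mapped, 1 = PrLM
    loss_d = bce(pred, real)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- mapping update: fool the discriminator (flipped labels for the mapped vectors) ---
    wv = W(v_batch)
    loss_w = bce(D(wv).squeeze(-1), torch.ones(len(wv)))   # make W(v) look like it came from U
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()
```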
An example histogram for the term “genes” is shown in Figure 3. Examining the two types of embeddings, we found that the learned vectors, regardless of the type of similarity (semantic/syntactic/inflections/spelling/etc.) they capture, have a very similar distribution shape. This showed us that the two embedding spaces are similar, and words within them may just have different relations to each other. Thus, our work focuses on aligning the new word2vec embedding space by learning a mapping to the original embedding space, simulating the original embedding and thereby allowing for a cross-lingual migration of the PrLM. To illustrate the necessity of embedding alignment, we also took the top-50 terms closest to the term “genes” in the two embedding spaces, used principal component analysis (PCA) to reduce the vector dimension to 2, and presented them in a two-dimensional figure, as shown in Figure 4. As can be seen from the figure, due to the different language modeling architectures and contexts in FastText and BERT, corresponding points are distributed at different locations in the embedding space. This is why compatibility problems exist when we use the original non-contextualized embeddings to simulate the new embedding and hence why we need to align the embeddings. A.3 MODEL ARCHITECTURE IN TRELM A.4 MLM, TLM, BRLM, AND CdLM As stated above, with the original MLM objective, the model can only learn from monolingual data. Though joint MLM training can be performed across languages, there is still a lack of explicit language cues for guiding the model in distinguishing language differences. Conneau & Lample (2019) proposed a Translation Language Modeling (TLM) objective as an extension of the MLM objective. The TLM objective leverages bilingual parallel sentences by concatenating them into single sequences as in the original BERT and predicts the tokens masked in the concatenated sequence. This encourages the model to predict the masked part in a bilingual context. Ji et al. (2020) further proposed a BRidge Language Modeling (BRLM) built on TLM, benefiting from explicit alignment information or additional attention layers that encourage word representation alignment across different languages. These MLM variants drive models to learn explicit or implicit token alignment information across languages and have been shown effective in machine translation compared to the original MLM, but for the cross-lingual transfer learning of PrLMs, their modeling of the order differences and semantic equivalence between languages is still not enough. Since both contexts in these MLM variants are exposed to the model, whether the prediction of the masked part depends on the cross-lingual context or the context of its own language is unknown, as there are no explicit clues for cross-lingual training. In our proposed CdLM, we use sentence alignment information for explicit ordering. The model is exposed to both the transfer source and transfer target languages at the same time, during which the input is a sequence of the source language, and the prediction goal is a sequence of the target language. Thus, we convert translation into a cross-lingual language modeling objective, which gives a clear supervision signal for cross-lingual transfer learning. A.5 TRAINING DETAILS The initial weights for the migration are BERT-base-cased, BERT-large-cased, RoBERTa-base, and RoBERTa-large, which are taken from their official sources.
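The similarity-distribution and PCA analysis described above can be reproduced with a few lines of NumPy/scikit-learn; the snippet below is a sketch with illustrative names, assuming `emb` is a (|V|, d) embedding matrix and `vocab` a parallel list of tokens.

```python
import numpy as np
from sklearn.decomposition import PCA

def neighbor_analysis(emb, vocab, term="genes", k=50):
    """Cosine similarity of one term to the whole vocabulary, plus a 2-D PCA of its
    top-k neighbors, as used to compare FastText and BERT raw embedding spaces."""
    v = emb[vocab.index(term)]
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ (v / np.linalg.norm(v))            # cosine similarity to every token
    top = np.argsort(-sims)[:k]                        # indices of the k nearest neighbors
    coords = PCA(n_components=2).fit_transform(emb[top])
    return sims, [vocab[i] for i in top], coords
```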
We use English Wikipedia, Chinese Wikipedia, Chinese News, and Indonesian CommonCrawl corpora for the monolingual pre-training data. For all models migrated in the same direction, regardless of their original vocabulary, we used the same single vocabulary that we trained on the joint language data using the WordPiece subword scheme (Schuster & Nakajima, 2012). In English-to-Chinese, the vocabulary size is set to 80K and the alphabet size is limited to 30K, while in English-to-Indonesian, the vocabulary size is set to 50K and the alphabet size is limited to 1K. With the WordPiece vocabulary, we tokenized the monolingual corpus to train the non-contextualized word2vec embeddings of subwords. Using the fastText (Bojanowski et al., 2017) tool in skipgram representation mode, three embedding sizes (128, 768, and 1024) were trained to be compatible with the respective pre-trained language models. In the “commonality” training phase, we sampled 1M sentences of English Wikipedia and either 1M sentences of Chinese Wikipedia or 1M sentences of Indonesian CommonCrawl for the English-to-Chinese and English-to-Indonesian models, respectively. We trained the model with 20K update steps with a total batch size of 128 and set the peak learning rate to 3e-5. For the “transfer” training phase, we sampled 1M parallel sentences from the UN Corpus (Ziemski et al., 2016) for English-to-Chinese and 1M parallel sentences from the OpenSubtitles Corpus (Lison & Tiedemann, 2016) for English-to-Indonesian. We use the fast_align toolkit (Dyer et al., 2013) to extract the tokenized subword alignments for CdLM. The two half-models are optimized over 20K update steps, and the batch size and peak learning rate are set to 128 and 3e-5, respectively. In the final phase, “language-specific” training, 2M Chinese and Indonesian sentences were sampled to update their respective models, training for 80K steps with a total batch size of 128 and an initial learning rate of 2e-5. In all the above training phases, the maximum sequence length was set to 512, weight decay was 0.01, and we used Adam (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999. In addition to our migrated pre-trained models, we also pre-trained a BERT-small2 model from scratch with data of the same size as our migration process to compare the performance differences between migration and training from scratch. For the BERT-small model, we started with the BERT-base hyper-parameters and vocabulary but shortened the maximum sequence length from 512 to 128, reduced the model’s hidden and token embedding dimension size from 768 to 256, set the batch size to 256, and extended the training steps to 240K. Our TRI-BERT-* and TRI-RoBERTa-* models all used the same amount of training data (2M target language monolingual sentences, 1M source language monolingual sentences, and 1M parallel sentences). BERT-small was pre-trained from scratch on only the target language, using 5M target language sentences to ensure the training data amount was the same. Compared with the original model, the TRI-* model only has an extra TRILayer added and some changes in the embedding layer. The BERT-base-chinese and m-BERT-base models were downloaded from the official repository; they were trained with 25M sentences (much more than our 5M sentences) and more training steps.

2 The performance of BERT-base pre-trained from scratch with this limited data is inferior to that of BERT-small, so we do not compare it with our migrated models.
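fast_align emits one line of `srcIndex-tgtIndex` pairs per sentence pair; converting such a line into the target-order index list O consumed by CdLM might look like the sketch below. The -1 convention for unaligned target tokens is our illustrative choice, mirroring the [P] placeholder fallback, and the function name is hypothetical.

```python
def alignment_to_order(align_line: str, tgt_len: int):
    """Turn one fast_align output line (e.g. "0-0 1-2 3-1") into the target-order list O,
    where O[t] = ALNIDX(y_t) and -1 marks an unaligned target token."""
    src_of_tgt = {}
    for pair in align_line.split():
        s, t = map(int, pair.split("-"))
        src_of_tgt.setdefault(t, s)          # keep the first aligned source index per target token
    return [src_of_tgt.get(t, -1) for t in range(tgt_len)]
```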
A.6 DOWNSTREAM TASKS Following previous contextualized language model pre-training work, we evaluated the English-to-Chinese migrated language models on the CLUE benchmark. The Chinese Language Understanding Evaluation (CLUE) benchmark (Xu et al., 2020) consists of six different natural language understanding tasks: Ant Financial Question Matching (AFQMC), TouTiao Text Classification for News Titles (TNEWS), IFLYTEK (CO, 2019), Chinese-translated Multi-Genre Natural Language Inference (CMNLI), Chinese Winograd Schema Challenge (WSC), and Chinese Scientific Literature (CSL), and three machine reading comprehension tasks: Chinese Machine Reading Comprehension (CMRC) 2018 (Cui et al., 2019), Chinese IDiom cloze test (CHID) (Zheng et al., 2019), and Chinese multiple-Choice machine reading Comprehension (C3) (Sun et al., 2019). We built baselines for the natural language understanding tasks by adding a linear classifier on top of the “[CLS]” token to predict label probabilities. For the extractive question answering task, CMRC, we packed the question and passage tokens together with special tokens to form the input: “[CLS] Question [SEP] Passage [SEP]”, and employed two linear output layers to predict the probability of each token being the start and end positions of the answer span, following the practice for BERT (Devlin et al., 2019). Finally, in the multi-choice reading comprehension tasks, CHID and C3, we concatenated the passage, question, and each candidate answer (“[CLS] Question || Answer [SEP] Passage [SEP]”), input this to the models, and also predicted the probability of each answer from the representation of the “[CLS]” token, following prior works (Yang et al., 2019; Liu et al., 2019b). In addition to these language understanding tasks, language structure analysis tasks are also a very important part of natural language processing. Therefore, we also evaluated the PrLMs on syntactic dependency parsing and semantic role labeling, a type of semantic parsing. The baselines we selected for dependency parsing and semantic role labeling are from Dozat & Manning (2016) and Cai et al. (2018), respectively. These two baseline models are very strong and efficient and rely only on pure model structures to obtain advanced parsing performance. Our approach to integrating the PrLM with the two baselines is to replace the BiLSTM encoder in the baseline with the encoder of the PrLM. We took the first subword or character representation of a word as the representation of that word, which solved the PrLM’s inconsistent granularity issue that impeded parsing. For the English-to-Indonesian migrated language models, since the language understanding tasks in Indonesian are very limited, we chose to use the Universal Dependencies (UD) parsing task (v2.3, Zeman et al., 2018), in which the treebanks of the world’s languages were built by an international cooperative project, as the downstream task for evaluation. A.7 ABLATIONS Effects of Different Embedding Initialization To show the effectiveness of non-contextualized simulation and adversarial embedding space alignment, we compare the TRI-RoBERTa-base models obtained in the commonality training phase of our framework under four different embedding initialization configurations: random, random+adversarial align, fastText pre-trained, and fastText pre-trained+adversarial align. In addition, to lessen the influence of training different amounts during different initializations, we trained for an additional 40K update steps in the commonality training phase.
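The task heads described above are thin layers on top of the encoder output; the following is a hedged PyTorch sketch (class and attribute names are illustrative, not the code released with the paper).

```python
import torch.nn as nn

class ClsHead(nn.Module):
    """Linear classifier over the [CLS] representation (NLU and multi-choice tasks)."""
    def __init__(self, hidden, n_labels):
        super().__init__()
        self.out = nn.Linear(hidden, n_labels)

    def forward(self, seq_hidden):               # seq_hidden: (B, T, H)
        return self.out(seq_hidden[:, 0])        # [CLS] sits at position 0

class SpanHead(nn.Module):
    """Two linear layers predicting start / end scores per token (extractive QA, CMRC)."""
    def __init__(self, hidden):
        super().__init__()
        self.start = nn.Linear(hidden, 1)
        self.end = nn.Linear(hidden, 1)

    def forward(self, seq_hidden):               # seq_hidden: (B, T, H)
        return (self.start(seq_hidden).squeeze(-1),
                self.end(seq_hidden).squeeze(-1))
```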
We selected newstest2020-enzh.ref.zh in the WMT-20 news translation task as the evaluation set, with a total of 1418 sentences, to avoid potential overlap with the training set. The subword-level bits-per-word (BPW) was used as the evaluation metric for the model’s MLM performance.3 The BPW results on the evaluation set are presented in Table 5. The non-contextualized fastText embedding simulation with adversarial embedding alignment achieves better BPW scores than the other configurations, which shows the effectiveness of our proposed approach. In addition, comparing the embedding initializations random+adversarial align and fastText pre-trained shows that pre-training non-contextualized embeddings on language data is more effective than direct embedding space alignment. Considering training for 20K steps versus 40K steps, longer training leads to lower BPW, but the performance gains are less than what our method brings.

3 We do this because the models in this comparison use the same vocabulary, and the masked parts on the evaluation set are identical, making the BPW scores comparable.

Effects of Cross-lingual Transfer Learning in TRELM We conduct further ablation studies to analyze our proposed TRELM framework’s cross-lingual transfer learning design choices, including the novel training objective, CdLM, and the TRILayer structure. The translation performance evaluation results are shown in Table 6. Using the newstest2020 en-zh and zh-en test sets, we evaluate the TRI-RoBERTa-base and TRI-RoBERTa-large models at the end of their transfer training phases. Since there is no alignment information available during the evaluation phase, we use the same successive alignment that MLM uses. For the sequence generated by the model, continuous repetitions were removed and the [SEP] token was taken as the stop mark to obtain the final translation sequence. In the EN→ZH translation direction, we report character-level BLEU, while in ZH→EN, we report word-level BLEU. The Transformer-base NMT models for comparison are from Tiedemann & Thottingal (2020) and were trained on the OPUS corpora (Tiedemann, 2012). As seen from the results, our TRI-RoBERTa-base and TRI-RoBERTa-large with CdLM were able to obtain very good BLEU-1 scores, indicating that the mapping between the transfer source language and target language was explicitly captured by the model. When CdLM is removed and we only use the traditional joint MLM and TLM for training on the same size of parallel data, we find that the BLEU-1 score significantly decreases, demonstrating that joint MLM and TLM do not learn explicit alignment information. The BLEU-1 score is lower than that of the Transformer-base NMT model, but this is because the Transformer-base model uses more parallel corpora as well as a more complex model design compared to our non-autoregressive translation pattern and lightweight TRILayer structure. In addition, compared with BLEU-2/3/4, it can be seen that although Transformer-base can accurately translate some tokens, many tokens are not translated or are translated in the wrong order due to the lack of word ordering information and the differing sequence lengths, which results in a very low score. This also shows that word order is a very important factor in translation. Since the TRELM framework is evaluated using existing pre-trained models, our migrated models are always larger than the original ones.
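The post-processing applied to the generated sequences (removing continuous repetitions and stopping at the [SEP] mark) can be sketched as below; the function name and example are illustrative.

```python
def postprocess_translation(tokens, sep_token="[SEP]"):
    """Collapse continuous repetitions and cut the sequence at the first [SEP] stop mark."""
    out = []
    for tok in tokens:
        if tok == sep_token:
            break
        if not out or tok != out[-1]:     # drop immediate repeats of the previous token
            out.append(tok)
    return out

# e.g. postprocess_translation(["我", "我", "爱", "你", "[SEP]", "[PAD]"]) -> ["我", "爱", "你"]
```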
Additional parameters arise in two places: the embedding layer parameters grow due to the larger vocabulary and the language embeddings, and the TRILayer structure adds parameters. The embedding layer growth is necessary, but the TRILayer structure is optional, as it is only used for cross-lingual transfer training. Therefore, for this ablation, we test removing the TRILayer structure for a fairer comparison4 and show the results in Table 7. Comparing the evaluation set BPW scores of the final models obtained from RoBERTa-base under different migration methods, we find that our TRELM framework is stronger in cross-lingual transfer learning than jointly using MLM and TLM, and that it does not simply rely on the extra parameters of the TRILayer. Furthermore, applying these pre-trained language models to the downstream task, dependency parsing on the CTB 5.1 treebank, yields results consistent with the BPW scores, which shows that the BPW score does describe the performance of PrLMs and that the pre-training performance will greatly affect performance in downstream tasks. Comparison of Different Cross-lingual Transfer Learning Objectives As discussed in Appendix A.4, CdLM, TLM, and TLM variants such as BRLM are typical objectives of cross-lingual transfer learning, in which parallel sentences are utilized for cross-lingual optimization. In order to compare the differences between these objectives empirically, we conducted a comparative experiment on TRI-RoBERTa-base. For this experiment, instead of using the transfer learning objective CdLM in the second stage of training as our other models do, we use TLM or BRLM instead. In addition, we follow Artetxe et al. (2020) in experimenting with the effects of a joint vocabulary versus a separate vocabulary in cross-lingual transfer learning, and we include a model, CdLM∗, with a separate vocabulary in this comparison as well. Specifically, for this model, we forego language embeddings and adopt independent token embeddings for the different languages. CdLM and MLM alternately optimize the model. The empirical comparison of these objectives is listed in Table 8. The migration target language is Chinese, and the BPW score is used to compare the performance of the migrated models. We also show the dependency parsing performance on the CTB 5.1 dataset for the obtained models. Looking at CdLM and CdLM∗, in our TRELM framework, using a joint vocabulary leads to better performance than using a separate vocabulary strategy, which is not consistent with Artetxe et al. (2020)’s conclusion. We attribute this difference to the fact that Artetxe et al. (2020)’s model uses joint MLM pre-training of multiple languages to achieve implicit transfer learning, so maintaining independent embeddings is important for distinguishing the language. In TRELM, because it trains two half-models, the explicit conversion signal guides the model’s migration training in discerning the language. When using separate vocabularies, some common information (such as punctuation, loanwords, etc.) is ignored, lessening the impact of CdLM. Second, comparing TLM, BRLM, and CdLM, we note that CdLM takes the source and target language sequences as input and output, respectively, which cooperates with the TRILayer and half-model training strategy much better, whereas TLM and BRLM combine the source and target sentences as input and predict a masked sentence as in MLM, which is much less conducive to the half-model training strategy.
Because the source and target language sentences are separate in CdLM, the model is much more able to differentiate the two languages, which makes CdLM a stronger cross-lingual transfer learning objective. Comparison with Cross-lingual Transfer Learning Related Works on mPrLM Although we propose our method as an alternative to mPrLMs for cross-lingual transferring, it can also be applied to transfer the learning of mPrLMs. When transferring mPrLMs, the vocabulary replacement and embedding re-initialization are no longer needed, which makes our framework simpler.

4 In this setting, we train the model with the same number of update steps using joint MLM and TLM when leveraging parallel sentences.

We examine four main related approaches in the line of cross-lingual transfer learning based on PrLMs. The first approach is trivial: using data from the target language and MLM to finetune an mPrLM. This helps specialize the mPrLM into a PrLM specifically for the target language. The second is ROSITAWORD (Mulcaire et al., 2019). In this method, the contextualized embeddings of the mPrLM are concatenated with non-contextualized multilingual word embeddings. This representation is then aligned across languages in a supervised manner using a parallel corpus, biasing the model toward cross-lingual feature sharing. The third, proposed by Liu et al. (2019a), makes use of MIM (Meeting-In-the-Middle) (Doval et al., 2018), which uses a linear mapping to refine the embedding alignment and is somewhat similar to our first step’s adversarial embedding alignment; however, because Liu et al. (2019a) only migrate the contextualized embedding of an mPrLM, it is not a true migration of the model. Specifically, their post-processing trained linear mapping applied after the contextualized embedding of the mPrLM is completely different from our new initialization of the raw embedding of the PrLM. The fourth approach, Word-Alignment Finetune, is similar in motivation to our CdLM in that it uses the alignment information of the parallel corpora to perform finetuning training on the model (whereas ROSITAWORD and MIM focus on language-specific post-processing of the contextualized embedding of the mPrLM). The difference is that Word-Alignment Finetune uses a contextualized embedding similarity measurement for alignment to calculate the loss, while our method is inspired by machine translation and uses language-to-language sequence translation for cross-lingual language modeling. We evaluate the effectiveness of these methods on dependency parsing, as shown in Table 9. We chose the widely used m-BERT-base as the base mPrLM and Chinese as the target language for these experiments. The resulting models were evaluated on the CTB 5.1 data of the dependency parsing task. For ROSITAWORD, we used the word-level embeddings trained by fastText and aligned by MUSE, as done in the original paper. For MIM, the number of training steps for the linear mapping is kept the same as in our first stage’s adversarial embedding alignment training, and both train for 5 epochs. Target-Language Finetune and Word-Alignment Finetune use the same data as our main experiments and the same 120K update steps as well. We also list a model migrated from a monolingual PrLM (TRI-BERT) to compare the performance differences between transferring from monolingual and multilingual PrLMs.
Since the migrated mPrLM is simpler - it does not need to re-initialize or train embeddings and can converge faster - we train the migrated PrLM model for more steps (400K total training steps) to compare them more fairly. Comparing our TRELM with similar methods, the concatenation of cross-lingually aligned word-level embeddings in ROSITAWORD seems to have limited effect. MIM, which uses mapping for post-processing, leads to some improvement, but compared to Target-Language Finetune and Word-Alignment Finetune, it is obviously a weaker option. The results of TRI-m-BERT-base, Word-Alignment Finetune, and Target-Language Finetune suggest that using explicit alignment signals is advantageous compared to using target language monolingual data when finetuning for a limited number of update steps, though when data is sufficient and training time is long enough, the performance of cross-lingually transferred models will approach the performance of monolingually pre-trained models regardless of transfer method. Thus, the methods primarily differ in how they perform with limited data, computing resources, or time. Our TRI-m-BERT-base outperforms +Word-Alignment Finetune, which shows that our CdLM, a language sequence modeling method inspired by machine translation, is more effective than solely deriving loss from an embedding space alignment. The results of TRI-BERT-base and TRI-m-BERT-base demonstrate that the simpler migration for m-BERT-base provides an initial performance boost when both models are trained for 120K steps due to its faster convergence, but when they are trained to a longer 400K steps, TRI-BERT-base actually shows better performance compared to TRI-m-BERT-base. More Languages for a More Comprehensive Evaluation In order to demonstrate the generalization ability of the cross-lingual transfer learning of the proposed TRELM framework, we also migrate to German (DE) and Japanese (JA) in addition to Chinese and Indonesian. We also experiment with these languages on the Universal Dependencies parsing task. The migrated German and Japanese TRI-BERT-base and TRI-RoBERTa-base use the same corpus size and training steps as their respective Chinese and Indonesian models. We show the results for German, Indonesian, and Japanese on UD in Table 10. Since there are no official BERT-base models for these three languages, we use third-party pre-trained models: Deepset BERT-base-german5, IndoBERT-base (Wilie et al., 2020), CL-TOHOKU BERT-base-japanese6, and NICT BERT-base-japanese7. First, according to the results in the table, our TRI-BERT-base achieves quite similar performance compared to the third-party BERT-base models and even exceeds the third-party models in some instances. This demonstrates that our TRELM is a general cross-lingual transfer learning framework. Second, comparing third-party pre-trained BERT-base models and the official m-BERT-base, we found that some third-party BERTs are even less effective than m-BERT (generally speaking, m-BERT is not as good as monolingual BERT when the data and training time are sufficient). This shows that in some scenarios, pre-training from scratch is not a very good choice, potentially due to insufficient data, unsatisfactory pre-training resource quality, and/or insufficient pre-training time. Compared with the well-trained monolingual BERT models, our migrated models are very competitive and can exceed PrLMs suffering from poor pre-training.
In addition, in DE and JA, we also observed that the effect of TRI-RoBERTa was stronger than that of TRI-BERT, indicating that our migration process maintained the performance advantage of the original model.

5 https://deepset.ai/german-bert
6 https://github.com/cl-tohoku/bert-japanese
7 https://alaginrc.nict.go.jp/nict-bert/index.html
1. What is the focus of the paper regarding contextual language model transfer?
2. What are the strengths and weaknesses of the proposed method, particularly in aligning embedding spaces and relying on bitext?
3. Do you have any suggestions or concerns regarding the experiment conditions and comparisons with other methods?
4. How does the reviewer assess the clarity and focus of the paper's contributions?
Review
Review This paper presents a method for transferring a contextual language model from English to e.g. Chinese. First, one aligns the embedding spaces. Then, one trains on the new language (but relying on bitext), and an external word aligner reorders the internal hidden states of the contextual language model to respect the new language's word order.

Pros: I think the general intuition makes a lot of sense: we should indeed introduce reordering into contextual language models when doing cross-lingual transfer.

Cons: I found the paper somewhat difficult to follow. See suggestions below. The reliance on bitext is quite a large assumption. As such, I think it's necessary to compare with other methods that more directly utilize the bitext. Also, it is good that Fig 2 examines the effect of bitext amount on BPW, but it would be more convincing to directly show the UAS, F1, or accuracy results as you have in Tables 1-3.

Suggestions: The main motivation of the paper is not extremely clear. It took me a while to realize that your goal is to transfer an existing English LM to a new language to avoid pre-training from scratch on a new language (this is the pre-training cost vs. migration training cost in Section 5). At first I thought you simply wanted to build LMs in a new language (without this pre-training cost issue), and were more concerned with multilingual transferability issues like those addressed by multilingual BERT and its variants. Contributions should be more clearly delineated. For example, you discuss the adversarial word embedding alignment in the abstract but it is just borrowed from previous work. I would suggest focusing just on the TRILayer, which is your main contribution. There are many results but the exact experimental conditions were not clear. What were the datasets (and their sizes and characteristics) for the downstream task vs the pre-training task, and how do they relate to the models in the tables? For example in Table 1, do TRI-BERT-base, BERT-small, BERT-base, m-BERT-base all use the same pre-trained data? Or are BERT-base and m-BERT-base downloaded, while BERT-small and TRI-BERT-base have new pre-trained data? In general these conditions are unclear for all the experiments, so more clarification would help.
ICLR
Title Cross-lingual Transfer Learning for Pre-trained Contextualized Language Models Abstract Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant. In this work, building upon the recent works connecting cross-lingual transfer learning and neural machine translation, we thus propose a novel cross-lingual transfer learning framework for PrLMs: TRELM. To handle the symbol order and sequence length differences between languages, we propose an intermediate "TRILayer" structure that learns from these differences and creates a better transfer in our primary translation direction, as well as a new cross-lingual language modeling objective for transfer training. Additionally, we showcase an embedding alignment that adversarially adapts a PrLM's non-contextualized embedding space and the TRILayer structure to learn a text transformation network across languages, which addresses the vocabulary difference between languages. Experiments on both language understanding and structure parsing tasks show the proposed framework significantly outperforms language models trained from scratch with limited data in both performance and efficiency. Moreover, despite an insignificant performance loss compared to pre-training from scratch in resource-rich scenarios, our transfer learning framework is significantly more economical. 1 INTRODUCTION Recently, the pre-trained contextualized language model has greatly improved performance in natural language processing tasks and allowed the development of natural language processing to extend beyond the ivory tower of research to more practical scenarios. Despite their convenience of use, PrLMs currently consume and require increasingly more resources and time. In addition, most of these PrLMs are concentrated in English, which prevents the users of different languages from enjoying the fruits of large PrLMs. Thus, transferring the knowledge of language models from one language to another is an important task for two reasons. First, many languages do not have the data resources that English uses to train such massive and data-dependent models. This causes a disparity in the quality of models available to English users and users of other languages. Second, languages share many commonalities - for efficiency's sake, transferring knowledge between models rather than wasting resources training new ones is preferable. Multilingual PrLMs (mPrLMs) also aim to leverage languages' shared commonalities and lessen the number of language models needed, but they accomplish this by jointly pre-training on multiple languages, which means when they encounter new languages, they need to be pre-trained from scratch again, which causes a waste of resources. This is distinct from using TRELM to adapt models to new languages because TRELM foregoes redoing massive pre-training and instead presents a much more lightweight approach for transferring a PrLM. mPrLMs can risk their multilingualism and finetune on a specific target language, but we will demonstrate that using TRELM to transfer an mPrLM actually leads to better performance than solely finetuning.
Therefore, in order to allow more people to benefit from the PrLM, we aim to transfer the knowledge stored in English PrLMs to models for other languages. The differences in training for new languages with mPrLMs and TRELM are shown in Figure 1. Machine translation, perhaps the most common cross-lingual task, is the task of automatically converting source text in one language to text in another language; that is, the machine translation model converts the input consisting of a sequence of symbols in some language into a sequence of symbols in another language; i.e., it follows a sequence-to-sequence paradigm. Language has been defined as "a sequence that is an enumerated collection of symbols in which repetitions are allowed and order does matter" (Chomsky, 2002). From this definition, we can derive three important differences in the sequences of different languages: symbol sets, symbol order, and sequence length, which can also be seen as three challenges for machine translation and three critical issues that we need to address in migrating a PrLM across languages. In this work, to resolve these critical differences in language sequences, we propose a novel framework that enables rapid cross-lingual transfer learning for PrLMs and reduces loss when only limited monolingual and bilingual data are available. To address the first aforementioned issue, symbol sets, we employ a new shared vocabulary and adversarially align our target embedding space with the raw embedding of the original PrLMs. For the symbol order and sequence length issues, our approach draws inspiration from neural machine translation methods that overcome the differences between languages (Bahdanau et al., 2014), and we thus propose a new cross-lingual language modeling objective, CdLM, which tasks our model with predicting the tokens of a target-language sentence from its parallel source-language sentence. To facilitate this, we also propose a new "TRILayer" structure, which acts as an intermediary layer that evenly splits our models' encoder layer set into two halves and serves to convert the source representations to the length and order of the target language. Using parallel corpora for a given language pair, we train two models (one in each translation direction) initialized with the desired pre-trained language model's parameters. Combining the first half of our target-to-source model's encoder layer set and the second half of our source-to-target model's encoder layer set, we are thus able to create a full target-to-target language model. During training, we use three separate phases for the proposed framework, where combinations of Masked Language Modeling (MLM), the proposed CdLM, and other secondary language modeling objectives are used. We conduct extensive experiments on Chinese and Indonesian, as well as German and Japanese (shown in Table 10 in the Appendix), in challenging situations with limited data, transferring knowledge from English PrLMs. On several natural language understanding and structure parsing tasks, BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) PrLM models that we migrate using our proposed framework improve the performance of downstream tasks compared to monolingual models trained from scratch and models pre-trained in a multilingual setting. Moreover, statistics show that our framework also has advantages in terms of training costs.
2 RELATED WORK Because of neural networks’ reliance on heavy amounts of data, transfer learning has been an increasingly popular method of exploiting otherwise irrelevant data in recent years. It has seen many applications and has been used particularly often in Machine Translation (Zoph et al., 2016; Dabre et al., 2017; Qi et al., 2018; Nguyen & Chiang, 2017; Gu et al., 2018; Kocmi & Bojar, 2018; Neubig & Hu, 2018; Kim et al., 2019; Aji et al., 2020), in which transfer learning is generally used to improve translation performance in a low resource scenario using the knowledge of a model trained in a high resource scenario. In addition to cross-lingual situations, transfer learning has also been applied to adapt across domains in the POS tagging (Schnabel & Schütze, 2013) and syntactic parsing (McClosky et al., 2010; Rush et al., 2012) tasks, for example, as well as specifically for adapting language models to downstream tasks (Chronopoulou et al., 2019; Houlsby et al., 2019). One particular difference between our method and many transfer learning methods is that we do not exactly use the popular ”Teacher-Student” framework of transfer learning, which is particularly often used in knowledge distillation (Hinton et al., 2015; Sanh et al., 2020) - transferring knowledge from a larger model to a smaller model. We instead use two ”student” models, and unlike traditional methods, these student models do not share a target space with their teacher (the language is different), and their parameters are initialized with the teacher’s parameters rather than being probabilistically guided by the teacher during training. When using transfer learning for cross-lingual training, there have been various solutions for the vocabulary mismatch. Zoph et al. (2016) did not find vocabulary alignment to be necessary, while Nguyen & Chiang (2017) and Kocmi & Bojar (2018) used joint vocabularies, and Kim et al. (2019) made use of cross-lingual word embeddings. One particular work that inspired us is that of Lample et al. (2018), who also used an adversarial approach to align word embeddings without any supervision while achieving competitive performance for the first time. This succeeded the work of Zhang et al. (2017), who also used an adversarial method but did not achieve the same performance. Also like our aligning method, Xu et al. (2018) took advantage of the similarities in embedding distributions and cross-lingually transferred monolingual word embeddings by simultaneously optimizing based on distributional similarity in the embedding space and the back-translation loss. Several works have also explored adapting the knowledge of large contextualized pre-trained language models to more languages, which pose a much more complicated problem compared to transferring non-contextualized word embeddings. The previous mainstream approach for accommodating more languages is using mPrLMs. Implicitly joint multilingual models, such as m-BERT (Devlin et al., 2019), XLM (Conneau & Lample, 2019), XLM-R (Conneau et al., 2019), and mBART (Liu et al., 2020), are usually evaluated on multi-lingual benchmarks such as XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020), while some works use bilingual dictionaries or sentences for explicit cross-lingual modeling with mPrLMs (Schuster et al., 2019; Mulcaire et al., 2019; Liu et al., 2019a; Cao et al., 2020). Transferring monolingual PrLMs, another research branch, is relatively new. Artetxe et al. 
(2020) presented a monolingual transformer-based masked language model that was competitive with multilingual BERT when transferred to a second language. To facilitate this, they did not rely on a shared vocabulary or joint training (to which multilingual models' performance is often attributed) and instead simply learned a new embedding matrix through MLM in the new language while freezing the parameters of all other layers. Tran (2020) used a similar approach, though instead of randomly initialized embeddings, he used a sparse word translation matrix on English embeddings to create word embeddings in the target language, reducing the training cost of the model. 3 TRELM Cross-lingual Transfer Learning for Language Modeling (TRELM) is a framework that rapidly migrates existing PrLMs. In this framework, the embedding space of a source language is linearly aligned with that of a target using an adversarial embedding alignment, which we experimentally verified was effective due to shared spatial structure similarities (refer to Appendix A.1 for details). Leveraging joint learning, we propose a novel pre-training objective, CdLM, and unify it with MLM into one format. Regarding model structure, we propose TRILayer, an intermediary transfer layer, to support language conversion during the CdLM training process. 3.1 TRILAYER AND CdLM For the disparities in symbol sets of different languages and different pre-trained models, we employ embedding space alignment, while for the issues of symbol order and sequence length, unlike previous work, we do not assume that the model can implicitly learn these differences; we instead leverage language embeddings and explicit alignment information and propose a novel Cross-Lingual Language Modeling (CdLM) training objective and a Transfer Learning Intermediate Layer (TRILayer) structure as a pivot layer in the model to bridge the differences of the two languages. To clearly explain our training approach, we take the popular PrLM BERT as a basis for introduction. In the original BERT (as shown in Figure 5(a)), Transformer (Vaswani et al., 2017) is taken as the backbone of the model, which takes tokens and their positions in a sequence as input before encoding this sequence into a contextualized representation using multiple stacked multi-head self-attention layers. During the pre-training process, BERT predominantly adopts an MLM training objective, in which a [MASK] (also written as [M]) token is used to replace a token in the sequence selected with a predetermined probability, and the original token is predicted as the gold target. Formally speaking, given a sentence $X = \{x_1, x_2, ..., x_T\}$ and $\mathcal{M}$, the set of masked positions, the training loss $\mathcal{L}_{\mathrm{MLM}}$ for the MLM objective is: $\mathcal{L}_{\mathrm{MLM}}(\theta_{\mathrm{LM}}) = -\sum_{i=1}^{|\mathcal{M}|} \log P_{\theta_{\mathrm{LM}}}(x_{\mathcal{M}_i} \mid X_{\backslash\mathcal{M}})$, where $\theta_{\mathrm{LM}}$ are the parameters of BERT, $|\mathcal{M}|$ is the size of the set $\mathcal{M}$, and $X_{\backslash\mathcal{M}}$ indicates the sequence after masking. An example of MLM training is shown in the top-left region of Figure 5. Much work in the field of machine translation suggests that the best way to transfer learning across languages is through translation learning, because the machine translation model must address all three of the above-described language differences in the training process. Therefore, we take inspiration from the design of machine translation, especially the design of non-autoregressive machine translation, and propose a Cross-Lingual Language Modeling (CdLM) objective.
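Before turning to CdLM, a minimal sketch of the MLM loss above may help make the notation concrete. This is a hedged illustration, not the paper's implementation: the 15% masking rate, the single-[MASK] corruption (BERT additionally uses random/kept tokens), and all function and tensor names are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def mlm_loss(model, input_ids, mask_token_id, mask_prob=0.15, ignore_index=-100):
    """Mask a random subset of positions M and predict the original tokens only there.
    `model` is assumed to return per-position vocabulary logits [batch, seq_len, vocab]."""
    labels = input_ids.clone()
    masked = torch.rand(input_ids.shape, device=input_ids.device) < mask_prob
    labels[~masked] = ignore_index          # positions outside M do not contribute
    corrupted = input_ids.clone()
    corrupted[masked] = mask_token_id       # replace the selected tokens with [MASK]

    logits = model(corrupted)               # [batch, seq_len, vocab]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           labels.reshape(-1), ignore_index=ignore_index)
```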
CdLM is just like a traditional language modeling objective, except across languages: given an input of source tokens, it generates tokens in a separate target language. We describe the differences between CdLM and related MLM variants (such as Translation Language Modeling (TLM) and BRidge Language Modeling (BRLM)) in Appendix A.4. With this proposed objective, we aim to make as few changes as possible to the existing PrLM and thus introduce a Translation/Transfer Intermediate Layer ("TRILayer") structure, which bridges two opposing half-models to create our final model. First, in the modified version of BERT for transfer learning, we add a language embedding $E_{lng}$ following the practice of Conneau & Lample (2019) to indicate the current language being processed by the model. This is important because the model will handle both the source and target languages simultaneously in 2 of our 3 training phases (described in the next subsection). The new input embedding is: $E_{inp} = E_{wrd} + E_{seg} + E_{pos} + E_{lng}$, where $E_{wrd}$, $E_{seg}$, and $E_{pos}$ are the word (token) embedding, segment embedding, and position embedding, respectively. Next, we denote $N$ as the number of stacked Transformer layers ($L = \{l_1, l_2, ..., l_N\}$) in BERT and split the BERT layers into two halves $L_{\le\frac{N}{2}} = \{l_1, ..., l_{\frac{N}{2}}\}$ and $L_{>\frac{N}{2}} = \{l_{\frac{N}{2}+1}, l_{\frac{N}{2}+2}, ..., l_N\}$. The TRILayer is placed between the two halves (making the total number of layers $N+1$) and functions as a pivot. In the $L_{\le\frac{N}{2}}$ half, the input embedding is encoded by its Transformer layers into hidden states $H_i = \mathrm{TRANSFORMER}_i(H_{i-1})$, in which $H_0 = E_{inp}$ and $\mathrm{TRANSFORMER}_i$ indicates the $i$-th Transformer layer in the model. Before the outputs of the $L_{\le\frac{N}{2}}$ half are fed into the TRILayer, the source hidden representation $H_{\frac{N}{2}}$ is reordered according to a new order $O$. During CdLM training, for a source language sentence $X = \{x_1, x_2, ..., x_T\}$, a possible translation sentence $Y = \{y_1, y_2, ..., y_{T'}\}$ is provided. To find the new order, explicit alignment information between the transfer source and target sentences is obtained using an unsupervised external aligner tool. We define the source-to-target alignment pair set as: $A_{X \to Y} = \mathrm{ALIGN}(X, Y) = \{(x_{\mathrm{ALNIDX}(y_1)}, y_1), (x_{\mathrm{ALNIDX}(y_2)}, y_2), ..., (x_{\mathrm{ALNIDX}(y_{T'})}, y_{T'})\}$, where $\mathrm{ALNIDX}(\cdot)$ is a function that returns the alignment index in the source language, or $x_{null}$ when there is no explicit alignment between the token in the target language and any source language token. $x_{null}$ represents a special placeholder token [P] that is always appended to the inputs. Finally, the source hidden representation $H_{\frac{N}{2}}$ is reordered according to the new order $O = \{\mathrm{ALNIDX}(y_1), \mathrm{ALNIDX}(y_2), ..., \mathrm{ALNIDX}(y_{T'})\}$ from the alignment set $A_{X \to Y}$, creating $H^{O}_{\frac{N}{2}}$. Thus, the resultant hidden representation $H^{O}_{\frac{N}{2}}$ is in the order of the target language and is consistent with the target sequence in length, making it usable for language modeling prediction. Unfortunately, the position information is lost in reordering. To combat this, the position embedding and language embedding are reintegrated as follows: $H_{TL} = \mathrm{TRANSFORMER}_{TL}(H^{O}_{\frac{N}{2}} + E_{lng_Y} + E_{pos})$, where $H_{TL}$ is the output of the TRILayer, $\mathrm{TRANSFORMER}_{TL}$ is the Transformer structure inside the TRILayer, and $E_{lng_Y}$ is the target language embedding. Next, $H_{TL}$ is encoded by the $L_{>\frac{N}{2}}$ half as done for the $L_{\le\frac{N}{2}}$ half (let $H_{\frac{N}{2}} = H_{TL}$ for the $L_{>\frac{N}{2}}$ half) to predict the final full sequence of the target language. The model is trained to minimize the loss $\mathcal{L}_{\mathrm{CdLM}}$, which is: $\mathcal{L}_{\mathrm{CdLM}}(\theta_{\mathrm{LM}}) = -\sum_{i=1}^{T'} \log P_{\theta_{\mathrm{LM}}}(y_i \mid X, A_{X \to Y})$.
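As a rough illustration of the reordering and prediction steps above, here is a hedged PyTorch-style sketch. The callables `lower_half`, `trilayer`, `upper_half`, and `lm_head`, the embedding modules, and the handling of unaligned tokens via the appended [P] position are all illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def cdlm_step(lower_half, trilayer, upper_half, lm_head,
              src_inputs, align_idx, tgt_ids,
              pos_emb, lang_emb, tgt_lang_id, ignore_index=-100):
    """One CdLM step (sketch).

    src_inputs : [B, T_src+1, H] source-side input embeddings E_inp, with the last
                 position reserved for the placeholder token [P] (x_null).
    align_idx  : [B, T_tgt] index of the aligned source token for each target
                 position (or the [P] index when ALNIDX returns x_null).
    tgt_ids    : [B, T_tgt] gold target-language token ids.
    """
    # Encode the source sequence with the lower half L_{<=N/2}.
    h_src = lower_half(src_inputs)                                   # [B, T_src+1, H]

    # Reorder source states into target order/length following O = ALNIDX(y_1..y_T').
    gather_idx = align_idx.unsqueeze(-1).expand(-1, -1, h_src.size(-1))
    h_reordered = torch.gather(h_src, 1, gather_idx)                 # [B, T_tgt, H]

    # Reordering loses positions, so re-add target position and language embeddings.
    positions = torch.arange(tgt_ids.size(1), device=tgt_ids.device)
    h_tl = trilayer(h_reordered + pos_emb(positions)
                    + lang_emb(torch.full_like(tgt_ids, tgt_lang_id)))

    # The upper half L_{>N/2} then predicts the full target sequence.
    logits = lm_head(upper_half(h_tl))                               # [B, T_tgt, V]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tgt_ids.reshape(-1), ignore_index=ignore_index)
```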
To enable MLM and CdLM to train models simultaneously rather than through successive optimization, we provide a unified view of MLM and CdLM language modeling: $\mathcal{L}_{\mathrm{ULM}}(\theta_{\mathrm{LM}}) = -\sum_{i=1}^{T_{max}} \mathbb{1}(i \in C) \log P_{\theta_{\mathrm{LM}}}(w_i \mid S, A)$, where $T_{max}$ denotes the maximum sequence length for language modeling, $S$ is the input sequence, $w_i$ is the $i$-th token in the output sequence $W$, $C$ is the set of positions to be predicted, and $A$ is the alignment between the input and output sequences. Both the input and output sequences are padded to the maximum sequence length $T_{max}$ during training. $\mathbb{1}(i \in C)$ is the indicator function and equals 1 when the $i$-th position is in the set of positions to be predicted and 0 otherwise. In MLM, $S = X_{\backslash C}$, $A = \{(1, 1), (2, 2), ..., (T_{max}, T_{max})\}$ is a successive (identity) alignment, and $W = X$, while in CdLM, $S = X$, $A = A_{X \to Y}$, and $W = Y$. Due to the unified language modeling abstractions of MLM and CdLM, the input and output forms, as well as the internal logic of their models, are the same. Therefore, models can be trained with the two objectives in the same mini-batch, which enhances the stability of transfer training. 3.2 TRIPLE-PHASE TRAINING In our TRELM framework, the whole training process is divided into three phases with different purposes but the same design goal: minimize the number of parameter updates as much as possible to speed up convergence and enhance training stability. The three phases are commonality training, transfer training, and language-specific training. In the commonality training phase, only the MLM objective is used; in the transfer training phase, the CdLM and target-language MLM objectives are both used at the same time; and in the final language-specific training phase, target-language MLM and other secondary language modeling objectives are adopted. Commonality Training Though languages are very different on the surface, they also share a lot of underlying commonalities, often called linguistic universals or cross-linguistic generalizations. We therefore take advantage of these commonalities between languages and jointly learn the transfer source and target languages. In this phase, the parameters of the position embedding, segment embedding, and Transformer layers are initialized from the original BERT, the TRILayer is initialized with the parameters of Transformer layer $l_{\frac{N}{2}}$, the word embedding is initialized with the output of the adversarial embedding alignment, and orthogonal weight initialization is adopted for the language embedding. For this phase, the model is trained by joint MLM with monolingual inputs from both the source and target languages. Moreover, in this training process, to make convergence fast and stable, the parameters of BERT's backbone (Transformer) layers are fixed; only the embeddings and the TRILayer are updated by gradient-based optimization of the joint MLM loss. The final model obtained in this phase is denoted as $\theta^{ct}_{\mathrm{LM}}$. Transfer Training Since the model is not pre-trained from scratch, making the model aware of changes in inputs is a critical factor for a maximally rapid and accurate migration in the case of limited data. Since there is not enough monolingual data in the target language to allow the model to adapt to the new language, we use the supervisory signal from the two languages' differences and leverage parallel corpora to directly train the model. Specifically, we split the original BERT Transformer layers into two halves.
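Before the transfer phase is described in detail, the toy examples below make the unified (S, A, W, C) abstraction above concrete; the tokens, alignments, and helper function are invented purely for illustration and are not taken from the paper's data.

```python
# Toy packing of one MLM example and one CdLM example into the shared
# (S, A, W, C) format, so both objectives can share a mini-batch.
mlm_example = {
    "S": ["the", "[M]", "sat"],              # masked input X_\C
    "A": [(1, 1), (2, 2), (3, 3)],           # successive (identity) alignment
    "W": ["the", "cat", "sat"],              # output W = X (original tokens)
    "C": {2},                                # only masked positions are scored
}
cdlm_example = {
    "S": ["the", "cat", "sat", "[P]"],       # full source sentence X plus placeholder [P]
    "A": [(2, 1), (1, 2), (3, 3), (4, 4)],   # (source index, target index) pairs A_{X->Y};
                                             # target position 4 aligns to the placeholder
    "W": ["y1", "y2", "y3", "y4"],           # output W = Y (target-language tokens)
    "C": {1, 2, 3, 4},                       # every target position is scored
}

def per_position_labels(example, ignore_label="<ignore>"):
    """Labels used by the unified loss: the gold token where i is in C, ignored elsewhere."""
    return [w if (i + 1) in example["C"] else ignore_label
            for i, w in enumerate(example["W"])]
```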
With a parallel corpus from the source language to the target language and one from the target language to the source language, we train two corresponding models, both of which are initialized using the parameters learned in the previous phase. In the source-to-target model, only the upper half of the encoder layers is trained, and the lower half is kept fixed, while the converse is true for the target-to-source model. The TRILayer then provides cross-lingual order and length adjustment, which is similar to the behavior of a neural machine translation model. Thus, we create two reciprocal models: one whose upper half can handle the target language, and one whose lower half can handle it, which we connect via the TRILayer. Finally, the two trained models are combined as $\theta^{tt}_{\mathrm{LM}}$. We describe the full procedure in Algorithm 1.
Algorithm 1 Transfer Training of Pre-trained Contextualized Language Models
Input: The commonality pre-trained model parameters $\theta^{ct}_{\mathrm{LM}}$; languages $L = \{lng_X, lng_Y\}$ (indexed as $L_0$ and $L_1$); parallel training set $P = \{(X^{L_0}_i, X^{L_1}_i)\}_{i=1}^{|P|}$; number of training steps $K$
1: for $j$ in $0, 1$ do
2:   Initialize model parameters $\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}} \leftarrow \theta^{ct}_{\mathrm{LM}}$
3:   if $j = 0$ then
4:     Fix the parameters of the $L_{\le\frac{N}{2}}$ half of $\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}$
5:   else
6:     Fix the parameters of the $L_{>\frac{N}{2}}$ half of $\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}$
7:   end if
8:   for step in $1, 2, 3, ..., K$ do
9:     Sample batch $(X^{L_j}, X^{L_{(1-j)}})$ from $P$
10:    Alignment information: $A_{L_j \to L_{(1-j)}} \leftarrow \mathrm{ALIGN}(X^{L_j}, X^{L_{(1-j)}})$
11:    CdLM loss: $\mathcal{L}_{\mathrm{CdLM}} \leftarrow -\sum \log P_{\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}}(X^{L_{(1-j)}} \mid X^{L_j}, A_{L_j \to L_{(1-j)}})$
12:    Masked version of $X^{L_1}$: $X^{L_1}_{\backslash\mathcal{M}} \leftarrow \mathrm{MASK}(X^{L_1})$
13:    MLM loss: $\mathcal{L}_{\mathrm{MLM}} \leftarrow -\sum \log P_{\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}}(X^{L_1}_{\mathcal{M}} \mid X^{L_1}_{\backslash\mathcal{M}})$
14:    CdLM+MLM update: $\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}} \leftarrow \mathrm{optimizer\_update}(\theta^{L_j \to L_{(1-j)}}_{\mathrm{LM}}, \mathcal{L}_{\mathrm{CdLM}}, \mathcal{L}_{\mathrm{MLM}})$
15:   end for
16: end for
17: Combine the two obtained models as $\theta^{tt}_{\mathrm{LM}}$ by choosing the $L_{>\frac{N}{2}}$ half model parameters from model $\theta^{L_0 \to L_1}_{\mathrm{LM}}$ and the $L_{\le\frac{N}{2}}$ half model parameters from model $\theta^{L_1 \to L_0}_{\mathrm{LM}}$, and averaging the other parameters (such as the embedding and TRILayer parameters) of the two models
Output: Learned model $\theta^{tt}_{\mathrm{LM}}$
Language-specific Training During the language-specific training phase, we only use the monolingual corpus of the target language and further strengthen the target-language features of the model obtained in the transfer training phase. We accomplish this by using the MLM objective and other secondary objectives such as Next Sentence Prediction (NSP). 4 EXPERIMENTS In this section, we discuss the details of the experiments undertaken for this work. We conduct experiments based on English PrLMs1. We transfer in the English-to-Chinese and English-to-Indonesian directions for the purpose of comparing with recent prior work. We describe the training details and parameters in Appendix A.5. From English to Chinese and English to Indonesian, we transfer two pre-trained contextualized language models: BERT and RoBERTa. Our performance evaluation of the migrated models is mainly conducted on two types of downstream tasks: language understanding and language structure parsing. Please refer to Appendix A.6 for introductions of the tasks and baselines and to Appendix A.7 for an ablation study. We note that the comparisons between models trained using TRELM and the monolingual and multilingual PrLMs trained from scratch on the target language (see Table 1) are only for illustrating the relative performance loss of the model produced by TRELM. These models are not directly comparable, as we intentionally use less data to train models when using TRELM. 1 Our code is available at https://github.com/agcbi2017/TreLM.
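As a concrete illustration of the final combination step of Algorithm 1 (step 17), the sketch below builds the target-language model from the two half-trained models. The attribute names (.layers, .trilayer, .embeddings) and the averaging helper are illustrative assumptions, not the released code's API.

```python
import copy
import torch

def _average(a, b):
    # Element-wise average of two state dicts with identical keys and shapes.
    return {k: (a[k] + b[k]) / 2 for k in a}

@torch.no_grad()
def combine_half_models(model_src2tgt, model_tgt2src):
    """Build theta_tt: lower half (L_{<=N/2}) from the target->source model, upper
    half (L_{>N/2}) from the source->target model, remaining shared parameters
    (embeddings, TRILayer) averaged across the two models."""
    combined = copy.deepcopy(model_src2tgt)          # upper half kept from src->tgt
    n = len(combined.layers)
    for i in range(n // 2):                          # lower half taken from tgt->src
        combined.layers[i].load_state_dict(model_tgt2src.layers[i].state_dict())
    combined.trilayer.load_state_dict(_average(
        model_src2tgt.trilayer.state_dict(), model_tgt2src.trilayer.state_dict()))
    combined.embeddings.load_state_dict(_average(
        model_src2tgt.embeddings.state_dict(), model_tgt2src.embeddings.state_dict()))
    return combined
```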
Continuing to pre-train the PrLMs on the target language would also obviously further improve their performance, but this is not our main focus. Language Understanding We first compare the PrLMs transferred by TRELM alongside the results of the existing monolingual pre-trained BERT-base-chinese and the multilingual pre-trained BERT-base-multilingual in Table 1 using the CLUE benchmark. When comparing with the same model architecture, taking BERT as an example, our model TRI-BERT-base exceeds m-BERT-base and BERT-small and is slightly weaker than the original BERT-base. Compared with BERT-small, which is trained from scratch for a longer time, our TRI-BERT-base generally achieves better results on these NLU tasks. This demonstrates that, because of the commonalities of languages, models for languages with relatively few resources can benefit from language models pre-trained on languages with richer resources, which confirms our cross-lingual transfer learning framework's effectiveness. m-BERT is another potential language model migration scheme and has the advantage of supporting multiple languages at the same time; however, in order to be compatible with multiple languages, the unique characteristics of each language are neglected. Our TRI-BERT, which is built on top of BERT-base, instead focuses on and highlights language differences during the transfer learning process, which leads to an increase in performance compared to m-BERT. When TRI-BERT and TRI-RoBERTa have the same model size, TRI-RoBERTa outperforms TRI-BERT, which is consistent with the performance differences between the original RoBERTa and BERT, indicating that our migration approach maintains the performance advantages of PrLMs.
Table 3: Dependency SRL results on the CoNLL-2009 Chinese benchmark.
Models | P | R | F1
(Cai et al., 2018) | 84.7 | 84.0 | 84.3
+BERT-base | 86.86 | 87.48 | 87.17
+m-BERT-base | 85.17 | 85.53 | 85.34
+TRI-BERT-base | 86.15 | 85.58 | 85.86
+TRI-RoBERTa-base | 87.08 | 86.99 | 87.03
+TRI-RoBERTa-base (w/o CdLM) | 85.77 | 85.62 | 85.69
[Figure 2: Language modeling effects (BPW and Sem-F1) vs. parallel data size (0 to 1M sentences) on the evaluation set.]
Language Structure Parsing We report results on dependency parsing for Chinese and Indonesian in Table 2. As shown in the results, the baseline models are greatly improved by the PrLMs. In Chinese, the performance of BERT-base is far superior to m-BERT-base, which highlights the importance of the unique nature of the language for downstream tasks, especially for refined structural analysis tasks. In Indonesian, IndoBERT (Wilie et al., 2020) performs worse than m-BERT, which we suspect is due to IndoBERT's insufficient pre-training. We also compare TRI-BERT-base and IndoBERT-base on Indonesian, whose ready-to-use language resources are relatively small compared to English. We find that although pre-training PrLMs on the available corpora is possible, because of the size of language resources, engineering implementation, etc., our migrated model is more effective than the model pre-trained from scratch. This shows that migrating from ready-made language models produced from large-scale language training and extensively validated by the community is more effective than pre-training on relatively small and limited language resources. In addition, we also conduct experiments for these pre-trained and migrated models on Chinese SRL.
mPrLMs are another important and competitive approach that can adapt to cross-lingual PrLM applications, so we also include several mPrLMs in our comparison on dependency parsing. Specifically, we used XLM, a monolingual and multilingual PrLM pre-training framework, as our basis. For TRELM, we used XLM-en-2048, officially provided by Conneau & Lample (2019), as the source model. The data amount used and the number of training steps are consistent with TRI-BERT/TRIRoBERTa. In mPrLM, we combined EN, ID, and ZH sentences (including monolingual and parallel sentences) together (10M sentences in total) to train an EN-ID-ZH mPrLM with MLM and TLM objectives. The performance comparison of these three PrLMs on the dependency parsing task is shown in the lower part of Table 2. From the results, we see mPrLMs pre-trained from scratch have no special performance advantage over TRELM when corpus size is constant, and especially when not using the cross-lingual transfer learning objective TLM, which models parallel sentences. In fact, our TRI-XLM-en-2048 solidly outperforms its two multilingual XLM counterparts. Monolingual PrLMs generally outperform mPrLMs, which likely leads to the performance advantages shown with monolingual migration. Additionally, like our TRELM, mPrLMs can also finetune on only the target language to improve performance, and leveraging TRELM to transfer an mPrLM leads to even further gains, as seen in Table 9 in the appendix. While the two approaches can compete with each other, they have their own advantages in general. In particular, TRELM is more suitable for transferring additional languages that were not considered in the initial pre-training phase and for low-resource scenarios, while mPrLMs have the advantage of being able to train and adapt to multiple languages at once. In Table 3, we compared a model migrated without CdLM to the full one. To compensate for the removal of CdLM, we added a monolingual corpus with the same size as the parallel corpora and trained the model with an extra 80K steps, but despite using more target monolingual data and training steps, the performance was still much better when CdLM was included. 5 DISCUSSION Effects of Parallel Data Scale Since the proposed TRELM framework relies on parallel corpora to learn the language differences explicitly, the sizes of the parallel corpora used are also of concern. We explored the influence of different parallel corpus sizes on the performance of the models transferred with the TRI-RoBERTa-base architecture. The variation curve of BPW score with the size of parallel data is shown in Figure 2. We see that with increasingly more parallel data, BPW gradually decreases, but this decrease slows as the data grows. The effect of the parallel corpora for cross-lingual transfer therefore has a upper bound because when the parallel corpora reaches a certain size, the errors from the alignment extraction tools cannot be ignored, and additionally, due to how lightweight the TRILayer structure is, TRILayers can only contain so much cross-lingual transfer information, which further restricts the growth of the migration performance. Pre-training Cost vs. Migration Training Cost The training cost is an important factor for choosing whether to pre-train from scratch or to migrate from an existing PrLM. We listed the training data size, model parameters, training hardware, and training time of several public PrLM models and compared them with our models. The comparisons are shown in Table 4. 
Although the training hardware and engineering implementations of the various PrLM models are different, this can still be used as a general reference. When model size is the same, our proposed transfer learning is much faster than pre-training from scratch, and less data is used in the transfer learning process. In addition, the total training time of our large model migration training is less than that of even the base model pre-training when hardware is kept the same. Therefore, the framework we propose can be used as a good supplementary scheme for PrLMs in situations where time or computing resources are restricted. [Table 4: training cost comparison of public PrLMs and our migrated models; columns: Model, Data, Batch Size, Steps, Params, Hardware, Train Time (GPU/TPU·Days).] 6 CONCLUSION AND FUTURE WORK In this work, we present an effective method of transferring knowledge from a given language's pre-trained contextualized language model to a model in another language. This is an important accomplishment because it allows more languages to benefit from the massive improvements arising from these models, which have been primarily concentrated in English. As a further plus, this method also enables more efficient model training, as languages have commonalities, and models in the target language can exploit these commonalities and quickly adopt these common features rather than learning them from scratch. In future work, we plan to use our framework to transfer other models such as ALBERT and models for more languages. We also aim to develop an unsupervised cross-lingual transfer learning objective to remove the reliance on parallel sentences. A APPENDIX A.1 ADVERSARIAL EMBEDDING ALIGNING Since the symbol sets in different languages are different, the first step in the cross-lingual migration of PrLMs is to supplement or even replace their vocabularies. In our proposed framework, to make the best use of the commonalities between languages, we choose to use a shared vocabulary with multiple languages rather than replace the original language vocabulary with one for the new language. In addition, in current PrLMs, a subword vocabulary is generally adopted in order to better mitigate out-of-vocabulary (OOV) problems caused by limited vocabulary size. To accommodate the introduction of a shared vocabulary, it is necessary to jointly re-train the subword model to ensure that some common words in different languages are consistent in subword segmentation, which leads to the problem that some tokens in the newly acquired subword vocabulary are different from those in the original subword vocabulary, though they belong to the same language. To address this issue, we consider the most complicated case, in which the vocabulary is completely replaced by a new one. Consequently, we assume that there are two embedding spaces: one is the embedding of the original vocabulary, which is well-trained in the language pre-training process, and the other is the embedding of the new vocabulary, yet to be trained. When considering raw embeddings and non-contextualized embeddings (e.g., Word2vec), it is easy to see that their training objectives are similar in theory. The only differences are the addition of context and the change in model structure to accommodate language prediction. Despite these differences, non-contextualized embeddings can be used to simulate the raw embeddings in a PrLM that we aim to replace (refer to Appendix A.2 for a detailed explanation).
Although the two embedding spaces we consider are similar in structure, they may be at different positions in the whole real embedding space, so an extra alignment process is required; and although common tokens may exist, due to the inconsistent token granularity from using byte-level byte-pair encoding (BBPE) (Radford et al., 2019), a matching token of the two embedding spaces cannot be utilized for embedding space alignment, as it is likely to represent different meanings. Therefore, inspired by Lample et al. (2018), we present an adversarial approach for aligning the word2vec embedding space to the PrLM's raw embedding space without supervision. With this approach, we aim to minimize the differences between the two embedding spaces brought about by different similarity forms. We define $U = \{u_1, u_2, ..., u_m\}$ and $V = \{v_1, v_2, ..., v_n\}$ as the two embedding spaces of $m$ and $n$ tokens from the PrLM and word2vec training, respectively. In the adversarial training approach, a linear mapping $W$ is trained to make the spaces $WV = \{Wv_1, Wv_2, ..., Wv_n\}$ and $U$ as close as possible, while a discriminator $D$ is employed to discriminate between tokens randomly sampled from the spaces $WV$ and $U$. Let $\theta_{adv}$ denote the parameters of the adversarial training model, and let the probabilities $P_{\theta_{adv}}(1(z) \mid z)$ and $P_{\theta_{adv}}(0(z) \mid z)$ indicate whether or not the sampling source prediction is the same as its real space for a vector $z$. Therefore, the discrimination training loss $\mathcal{L}_D(\theta_D \mid W)$ and the mapping training loss $\mathcal{L}_W(W \mid \theta_D)$ are defined as: $\mathcal{L}_D(\theta_D \mid W) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_{adv}}(1(Wv_i) \mid Wv_i) - \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_{adv}}(1(u_i) \mid u_i)$ and $\mathcal{L}_W(W \mid \theta_D) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_{adv}}(0(Wv_i) \mid Wv_i) - \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_{adv}}(0(u_i) \mid u_i)$, where $\theta_D$ are the parameters of the discriminator $D$, which is implemented as a multilayer perceptron (MLP) with two hidden layers and Leaky-ReLU as the activation function. During the adversarial training, the discriminator parameters $\theta_D$ and the mapping $W$ are optimized successively with the discrimination training loss and the mapping training loss. To enhance the effect of embedding space alignment, we adopted the same techniques of iterative refinement and cross-domain similarity local scaling as Lample et al. (2018). While the two embedding spaces in Lample et al. (2018) can both be updated by gradient, we consider $U$ as the goal spatial structure and hence fix $U$ throughout the training process, and we update $W$ to better align $V$. A.2 ANALYZING NON-CONTEXTUALIZED EMBEDDINGS AND PrLMS' RAW EMBEDDINGS Bidirectional PrLMs such as BERT (Devlin et al., 2019) use Masked Language Modeling (MLM) as the training objective, in which the model is required to predict a masked part of the sentence. This training paradigm has no essential difference from word2vec (Mikolov et al., 2013). Word2vec employed a simple single-layer perceptron neural network and restricted the context for the masked part to a sliding window, while recent mainstream PrLMs adopt the self-attention-based Transformer as the context encoder, which can utilize the whole sentence as context. Because of this, we speculate that BERT's raw embeddings and word2vec embeddings have a similar nature, and that we can simulate BERT's raw embeddings with word2vec embeddings through some special designs. To verify our theory, we studied the important relational nature of embeddings. Specifically, we chose BERT-base-cased's raw embeddings and word2vec-based FastText cc.en.300d embeddings (Grave et al., 2018) and evaluated the cosine similarity of single terms compared to other terms in their vocabularies.
An example histogram for the term “genes” is shown in Figure 3. Examining the two types of embeddings, we found that the learned vectors, regardless of the type of similarity (semantic/syntactic/inflections/spelling/etc.) they capture, have a very similar distribution shape. This showed us that the two embedding spaces are similar, and words within them may just have different relations to each other. Thus, our work focuses on aligning the new word2vec embedding space by learning a mapping to the original embedding space to simulate the original embedding allow for a cross-lingual migration of the PrLM. To illustrate the necessity of embedding alignment, we also took out the top-50 terms closest to the term “genes” in the two embedding spaces, used principal component analysis (PCA) to reduce the vector dimension to 2, and presented it in a two-dimensional figure, as shown in Figure 4. As can be seen from the figure, due to the different language modeling architectures and contexts in FastText and BERT, corresponding points are distributed at different locations in the embedding space. This is why compatibility problems exist when we use the original non-contextualized embeddings to simulate the new embedding and hence why we need to align the embeddings. A.3 MODEL ARCHITECTURE IN TRELM A.4 MLM, TLM, BRLM, AND CdLM As stated in the original MLM objective, the model can only learn from monolingual data. Though a joint MLM training can be performed across languages, there is still a lack of explicit language cues for guiding the model in distinguishing language differences. Conneau & Lample (2019) proposed a Translation Language Modeling (TLM) objective as an extension of the MLM objective. The TLM objective leverages bilingual parallel sentences by concatenating them into single sequences as in the original BERT and predicts the tokens masked in the concatenated sequence. This encourages the model to predict the masked part in a bilingual context. Ji et al. (2020) further proposed a BRidge Language Modeling (BRLM) built on the TLM, benefiting from explicit alignment information or additional attention layers that encourage word representation alignment across different languages. These MLM variants drive models to learn explicit or implicit token alignment information across languages and have been shown effective in machine translation compared to the original MLM, but for the cross-lingual transfer learning of PrLMs, modeling the order difference and semantic equivalence in different languages is still not enough. Since both contexts in MLM variants have been exposed to the model, whether the prediction of the masked part depends on the cross-lingual context or the context of its own language is unknown, as it lacks explicit clues for cross-lingual training. In our proposed CdLM, we use sentence alignment information for explicit ordering. The model is exposed to both the transfer source and transfer target languages at the same time, during which the input is a sequence of the source language, and the prediction goal is a sequence of the target language. Thus, we convert translation into a cross-language modeling objective, which gives a clear supervision signal for cross-lingual transfer learning. A.5 TRAINING DETAILS The initial weights for the migration are BERT-base-cased, BERT-large-cased, RoBERTa-base, and RoBERTa-large, which are taken from their official sources. 
We use English Wikipedia, Chinese Wikipedia, Chinese News, and Indonesian CommonCrawl corpora for the monolingual pre-training data. For all models migrated in the same direction, regardless of their original vocabulary, we used the same single vocabulary that we trained on the joint language data using the WordPiece subword scheme (Schuster & Nakajima, 2012). In English-to-Chinese, the vocabulary size is set to 80K and the alphabet size is limited to 30K, while in English-to-Indonesian, the vocabulary size is set to 50K and the alphabet size is limited to 1K. With the WordPiece vocabulary, we tokenized the monolingual corpus to train the non-contextualized word2vec embeddings of subwords. Using the fastText (Bojanowski et al., 2017) tool and the skipgram representation mode, three embedding sizes (128, 768, and 1024) were trained to be compatible with the respective pre-trained language models. In the "commonality" training phase, we sampled 1M sentences of English Wikipedia and either 1M sentences of Chinese Wikipedia or 1M sentences of Indonesian CommonCrawl for the English-to-Chinese and English-to-Indonesian models. We trained the model for 20K update steps with total batch size 128 and set the peak learning rate to 3e-5. For the "transfer" training phase, we sampled 1M parallel sentences from the UN Corpus (Ziemski et al., 2016) for English-to-Chinese and 1M parallel sentences from the OpenSubtitles Corpus (Lison & Tiedemann, 2016) for English-to-Indonesian. We use the fastalign toolkit (Dyer et al., 2013) to extract the tokenized subword alignments for CdLM. The two half models are optimized over 20K update steps, and the batch size and peak learning rate are set to 128 and 3e-5, respectively. In the final phase, "language-specific" training, 2M Chinese and Indonesian sentences were sampled to update their respective models, training for 80K steps with total batch size 128 and initial learning rate 2e-5. In all the above training phases, the maximum sequence length was set to 512, weight decay was 0.01, and we used Adam (Kingma & Ba, 2014) with $\beta_1 = 0.9$, $\beta_2 = 0.999$. In addition to our migrated pre-trained models, we also pre-trained a BERT-small2 model from scratch with data of the same size as our migration process to compare the performance differences between migration and scratch training. For the BERT-small model, we started with the BERT-base hyper-parameters and vocabulary but shortened the maximum sequence length from 512 to 128, reduced the model's hidden and token embedding dimension size from 768 to 256, set the batch size to 256, and extended the training steps to 240K. Our TRI-BERT-* and TRI-RoBERTa-* models all used the same amount of training data (2M target-language monolingual sentences, 1M source-language monolingual sentences, and 1M parallel sentences). BERT-small was pre-trained from scratch on only the target language, using 5M target-language sentences to ensure the training data amount was the same. Compared with the original model, the TRI-* model only has an extra TRILayer added and some changes in the embedding layer. The BERT-base-chinese and m-BERT-base models were downloaded from the official repository; they were trained with 25M sentences (much more than our 5M sentences) and more training steps. 2 The performance of BERT-base pre-trained from scratch with this limited data is inferior to that of BERT-small, so we do not compare it with our migrated models.
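To keep the three phases and their hyper-parameters easy to compare, here is the same schedule condensed into a configuration dictionary; the values are copied from the text above for the English-to-Chinese setting, and the key names are our own illustrative shorthand.

```python
# Summary of the three-phase TRELM schedule for English-to-Chinese (values from Appendix A.5).
TRELM_EN_ZH_SCHEDULE = {
    "commonality": {
        "data": "1M EN Wikipedia + 1M ZH Wikipedia sentences (monolingual)",
        "objectives": ["joint MLM over source and target languages"],
        "updated": ["embeddings", "TRILayer"],      # Transformer backbone frozen
        "steps": 20_000, "batch_size": 128, "peak_lr": 3e-5,
    },
    "transfer": {
        "data": "1M EN-ZH parallel sentences (UN Corpus), fastalign alignments",
        "objectives": ["CdLM", "target-language MLM"],
        "updated": ["upper half (src->tgt model)", "lower half (tgt->src model)"],
        "steps": 20_000, "batch_size": 128, "peak_lr": 3e-5,
    },
    "language_specific": {
        "data": "2M ZH monolingual sentences",
        "objectives": ["MLM", "secondary objectives such as NSP"],
        "steps": 80_000, "batch_size": 128, "initial_lr": 2e-5,
    },
}
# Shared settings: max sequence length 512, weight decay 0.01, Adam (beta1=0.9, beta2=0.999).
```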
A.6 DOWNSTREAM TASKS Following previous contextualized language model pre-training, we evaluated the English-to-Chinese migrated language models on the CLUE benchmark. The Chinese Language Understanding Evaluation (CLUE) benchmark (Xu et al., 2020) consists of six different natural language understanding tasks: Ant Financial Question Matching (AFQMC), TouTiao Text Classification for News Titles (TNEWS), IFLYTEK (CO, 2019), Chinese-translated Multi-Genre Natural Language Inference (CMNLI), Chinese Winograd Schema Challenge (WSC), and Chinese Scientific Literature (CSL), and three machine reading comprehension tasks: Chinese Machine Reading Comprehension (CMRC) 2018 (Cui et al., 2019), the Chinese IDiom cloze test (CHID) (Zheng et al., 2019), and Chinese multiple-Choice machine reading Comprehension (C3) (Sun et al., 2019). We built baselines for the natural language understanding tasks by adding a linear classifier on top of the "[CLS]" token to predict label probabilities. For the extractive question answering task, CMRC, we packed the question and passage tokens together with special tokens to form the input: "[CLS] Question [SEP] Passage [SEP]", and employed two linear output layers to predict the probability of each token being the start and end positions of the answer span, following the practice for BERT (Devlin et al., 2019). Finally, in the multi-choice reading comprehension tasks, CHID and C3, we concatenated the passage, question, and each candidate answer ("[CLS] Question || Answer [SEP] Passage [SEP]"), input this to the models, and predicted the probability of each answer from the representation of the "[CLS]" token, following prior works (Yang et al., 2019; Liu et al., 2019b). In addition to these language understanding tasks, language structure analysis tasks are also a very important part of natural language processing. Therefore, we also evaluated the PrLMs on syntactic dependency parsing and semantic role labeling, a type of semantic parsing. The baselines we selected for dependency parsing and semantic role labeling are from Dozat & Manning (2016) and Cai et al. (2018), respectively. These two baseline models are very strong and efficient and rely only on pure model structures to obtain advanced parsing performance. Our approach to integrating a PrLM with the two baselines is to replace the BiLSTM encoder in the baseline with the encoder of the PrLM. We took the first subword or character representation of a word as the representation of that word, which solved the PrLM's inconsistent granularity issue that impeded parsing. For the English-to-Indonesian migrated language models, since the language understanding tasks in Indonesian are very limited, we chose the Universal Dependency (UD) parsing task (v2.3, Zeman et al., 2018), in which the treebanks of the world's languages were built by an international cooperative project, as the downstream task for evaluation. A.7 ABLATIONS Effects of Different Embedding Initialization To show the effectiveness of non-contextualized simulation and adversarial embedding space alignment, we compare the TRI-RoBERTa-base models obtained in the commonality training phase of our framework under four different embedding initialization configurations: random, random+adversarial align, fastText pre-trained, and fastText pre-trained+adversarial align. In addition, to lessen the influence of different amounts of training across initializations, we trained for an additional 40K update steps in the commonality training phase.
We selected newstest2020-enzh.ref.zh from the WMT-20 news translation task as the evaluation set, with a total of 1418 sentences, to avoid potential overlap with the training set. The subword-level bits-per-word (BPW) was used as the evaluation metric for the models' MLM performance3. The BPW results on the evaluation set are presented in Table 5. The non-contextualized fastText embedding simulation and adversarial embedding alignment setting achieves better BPW scores than the other configurations, which shows the effectiveness of our proposed approach. In addition, comparing the embedding initializations of random+adversarial align and fastText pre-trained shows that pre-training non-contextualized embeddings using language data is more effective than direct embedding space alignment. Comparing training for 20K steps versus 40K steps, longer training leads to lower BPW, but the performance gains are less than what our method brings. 3 We do this because the models in this comparison use the same vocabulary, and the masked parts on the evaluation set are identical, making the BPW scores comparable. Effects of Cross-lingual Transfer Learning in TRELM We conduct further ablation studies to analyze our proposed TRELM framework's cross-lingual transfer learning design choices, including the novel training objective, CdLM, and the TRILayer structure. The translation performance evaluation results are shown in Table 6. Using the newstest2020 en-zh and zh-en test sets, we evaluate the TRI-RoBERTa-base and TRI-RoBERTa-large models at the end of their transfer training phases. Since there is no alignment information available during the evaluation phase, we use the same successive alignment that MLM uses. For the sequence generated by the model, continuous repetitions were removed and the [SEP] token was taken as the stop mark to obtain the final translation sequence. In the EN→ZH translation direction, we report character-level BLEU, while in ZH→EN, we report word-level BLEU. The Transformer-base NMT models for comparison are from Tiedemann & Thottingal (2020) and were trained on the OPUS corpora (Tiedemann, 2012). As seen from the results, our TRI-RoBERTa-base and TRI-RoBERTa-large with CdLM were able to obtain very good BLEU-1 scores, indicating that the mapping between the transfer source language and target language was explicitly captured by the model. When CdLM is removed and we only use the traditional joint MLM and TLM for training on parallel data of the same size, we find that the BLEU-1 score significantly decreases, demonstrating that joint MLM and TLM do not learn explicit alignment information. The BLEU-1 score is lower than that of the Transformer-base NMT model, but this is because the Transformer-base model uses more parallel corpora as well as a more complex model design compared to our non-autoregressive translation pattern and lightweight TRILayer structure. In addition, looking at BLEU-2/3/4, it can be seen that although some tokens are accurately translated, many tokens are not translated or are translated in the wrong order due to the lack of word ordering information and the differing sequence lengths, which results in very low scores. This also shows that word order is a very important factor in translation. Since the TRELM framework is evaluated using existing pre-trained models, our migrated models are always larger than the original ones.
Additional parameters arise in two places: embedding layer parameters grow due to a larger vocabulary and language embeddings, and the TRILayer structure adds parameters. The embedding layer growth is necessary, but the TRILayer structure is optional, as it is only used for cross-lingual transfer training. Therefore, for this ablation, we test removing the TRILayer structure for a fairer comparison4 and show the results in Table 7. Comparing the evaluation set BPW scores of the final models obtained from RoBERTa-base under different migration methods, we found that our TRELM framework is stronger in cross-lingual transfer learning compared to jointly using MLM and TLM, and it does not simply rely on the extra parameters of the TRILayer. Furthermore, applying these pre-trained language models to the downstream task, dependency parsing on the CTB 5.1 treebank, achieves corresponding effects in BPW, which shows that the BPW score does describe the performance of PrLMs and that the pre-training performance will greatly affect performance in downstream tasks. Comparison of Different Cross-lingual Transfer Learning Objectives As discussed in Appendix A.4, CdLM, TLM, and TLM variants such as BRLM are typical objectives of cross-lingual transfer learning, in which parallel sentences are utilized for cross-lingual optimization. In order to compare the differences between these objectives empirically, we conducted a comparative experiment on TRI-RoBERTa-base. For this experiment, instead of using the transfer learning objective CdLM in the second stage of training like our other models, we use TLM or BRLM instead. In addition, we follow (Artetxe et al., 2020) in experimenting with the effects of joint vocabulary versus a separate vocabulary in cross-lingual transfer learning, and we include a model, CdLM∗, with a separate vocabulary in this comparison as well. Specifically, for this model, we forego language embeddings and adopt independent token embeddings for difference languages. CdLM and MLM alternately optimize the model. The empirical comparison of these objectives is listed in Table 8. The migration target language is Chinese, and BPW score is used to compare the performance of the migrated model. We also show the dependency parsing performance on the CTB 5.1 dataset for the obtained model. Looking at CdLM and CdLM∗, in our TRELM framework, using a joint vocabulary leads to better performance than using a separate vocabulary strategy, which is not consistent with Artetxe et al. (2020) ’s conclusion. We attribute this difference to the fact that (Artetxe et al., 2020)’s model uses joint MLM pre-training of multiple languages to achieve implicit transfer learning, so maintaining independent embeddings is important for distinguishing the language. In TRELM, because it trains two half-models, the explicit conversion signal guides the model’s migration training in discerning the language. When using separate vocabularies, some common information (such as punctuation, loanwords, etc.) are ignored, lessening the impact of CdLM. Second, comparing TLM, BRLM, and CdLM, we note that CdLM takes the source and target language sequences as input and output, respectively, which cooperates with the TRILayer and half-model training strategy much better, whereas TL and BRLM combine the source and target sentences as input and predict a masked sentence as in MLM, which is much less conducive to the half-model training strategy. 
Because the source and target language sentences are separate in CdLM, the model is much more able to differentiate the two languages, which makes CdLM a stronger cross-lingual transfer learning objective. Comparison with Cross-lingual Transfer Learning Related Works on mPrLM Although we propose our method as an alternative to mPrLMs for cross-lingual transferring, it can also be applied to transfer the learning of mPrLMs. When transferring mPrLMs, the vocabulary replacement and embedding re-initialization are no longer needed, which makes our framework more simple. 4In this setting, we train the model with same number of update steps using joint MLM and TLM when leveraging parallel sentences. We examine four main related approaches in the line of cross-lingual transfer learning based on PrLMs. The first approach is trivial: using data from the target language and MLM to finetune a mPrLM. This helps specify the mPrLM as a PrLM specifically for the target language. The second is ROSITAWORD (Mulcaire et al., 2019). In this method, the contextualized embeddings of mPrLM are concatenated with non-contextualized multilingual word embeddings. This representation is then aligned across languages in a supervisory manner using a parallel corpus, biasing the model toward cross-lingual feature sharing. The third, proposed by Liu et al. (2019a), makes use of MIM (Meeting-In-the-Middle) (Doval et al., 2018), which uses a linear mapping to refine the embedding alignment, and is somewhat similar to our first step’s adversarial embedding alignment, but because (Liu et al., 2019a) only migrate the contextualized embedding of an mPrLM, it is not a true migration of the model. Specifically, their post-processing trained linear mapping after the contextualized embedding of mPrLM is completely different from our new initialization of the raw embedding of PrLM. The fourth approach, Word-alignment Finetune, is similar in motivation to our CdLM, which uses the alignment information of the parallel corpora to perform finetuning training on the model (whereas ROSITAWORD and MIM focus on language-specific post-processing on the contextualized embedding of mPrLM). The difference is that Word-Alignment Finetune uses contextualized embedding similarity measurement for alignment to calculate the loss, and our method is inspired by machine translation, which uses language-to-language sequence translation for crosslingual language modeling. We evaluate the effectiveness of these methods on dependency parsing as shown in Table 9. We chose the widely used m-BERT-base as the base mPrLM and Chinese as the target language for these experiments. The resulting models were evaluated on the CTB 5.1 data of the dependency parsing task. For ROSITAWORD, we used the word-level embedding trained by Fastext and aligned by MUSE, as done in the original paper. For MIM, the number of training steps for the linear mapping is kept the same as in our first stage’s adversarial embedding alignment training, and both train for 5 epochs. Target-Language Finetuning and Word-Alignment Finetuning use the same data as our main experiments and the same 120K update as well. We also listed a model migrated from a monolingual PrLM (TRI-BERT) to compare the performance differences between transfer learning from monolinguals and multilingual PrLMs. 
Since migrating an mPrLM is simpler (it does not need to re-initialize or retrain embeddings and can converge faster), we train the migrated PrLM models for more steps (400K total training steps) to compare them more fairly. Comparing our TRELM with similar methods, the concatenation of cross-lingually aligned word-level embeddings in ROSITAWORD seems to have limited effect. MIM, which uses a mapping for post-processing, leads to some improvement, but compared to Target-Language Finetune and Word-Alignment Finetune, it is obviously a weaker option. The results of TRI-m-BERT-base, Word-Alignment Finetune, and Target-Language Finetune suggest that using explicit alignment signals is advantageous compared to using only target-language monolingual data when finetuning for a limited number of update steps, though when data is sufficient and training time is long enough, the performance of cross-lingually transferred models will approach the performance of monolingually pre-trained models regardless of transfer method. Thus, the methods primarily differ in how they perform with limited data, computing resources, or time. Our TRI-m-BERT-base outperforms +Word-Alignment Finetune, which shows that our CdLM, a language sequence modeling method inspired by machine translation, is more effective than solely deriving loss from an embedding space alignment. The results of TRI-BERT-base and TRI-m-BERT-base demonstrate that the simpler migration for m-BERT-base provides an initial performance boost when both models are trained for 120K steps due to its faster convergence, but when they are trained for the longer 400K steps, TRI-BERT-base actually shows better performance than TRI-m-BERT-base. More Languages for a More Comprehensive Evaluation In order to demonstrate the generalization ability of the cross-lingual transfer learning of the proposed TRELM framework, we also migrate to German (DE) and Japanese (JA) in addition to Chinese and Indonesian. We also experimented with these languages on the Universal Dependency (UD) parsing task. The migrated German and Japanese TRI-BERT-base and TRI-RoBERTa-base use the same corpus size and training steps as their respective Chinese and Indonesian models. We show the results of German, Indonesian, and Japanese on UD in Table 10. Since there are no official BERT-base models for these three languages, we use third-party pre-trained models: Deepset BERT-base-german5, IndoBERT-base (Wilie et al., 2020), CL-TOHOKU BERT-base-japanese6, and NICT BERT-base-japanese7. First, according to the results in the table, our TRI-BERT-base achieves performance quite similar to the third-party BERT-base models and even exceeds them in some instances. This demonstrates that our TRELM is a general cross-lingual transfer learning framework. Second, comparing third-party pre-trained BERT-base models and the official m-BERT-base, we found that some third-party BERTs are even less effective than m-BERT (generally speaking, m-BERT is not as good as monolingual BERT when the data and training time are sufficient). This shows that in some scenarios, pre-training from scratch is not a very good choice, potentially due to insufficient data, unsatisfactory pre-training resource quality, and/or insufficient pre-training time. Compared with the well-trained monolingual BERT models, our migrated models are very competitive and can exceed PrLMs suffering from poor pre-training.
In addition, for DE and JA, we also observed that TRI-RoBERTa was stronger than TRI-BERT, indicating that our migration process maintains the performance advantage of the original model. 5https://deepset.ai/german-bert 6https://github.com/cl-tohoku/bert-japanese 7https://alaginrc.nict.go.jp/nict-bert/index.html
1. What are the main contributions and novel aspects introduced by the paper regarding transfer learning via pretrained language models?
2. How does the reviewer assess the limited novelty of the paper compared to prior works, particularly XLM's TLM objective and bilingual XLMs?
3. What are the weaknesses of the paper regarding its experimental setup, comparisons with baseline models, and inconclusive results?
4. How does the reviewer evaluate the economic benefits of the proposed approach compared to training target-language models such as RoBERTa, XLNet, or ELECTRA directly?
5. Why does the reviewer suggest that the paper should discuss previous work adequately and compare to approaches leveraging parallel data?
Review
Review

=== UPDATE ===
I would like to thank the authors for their work and effort invested into revising the paper. However, some of my concerns (e.g., comparison to relevant baselines, linking core motivations with the experimental setup) still remain. I'd suggest the authors start from the revised version, further embed some of the great feedback received from all the reviewers, and produce a stronger paper with a clearer presentation (and a clearer motivation and list of contributions) for some other venue in the future.

This paper aims to tap into the very populated area of transfer learning via pretrained language models (such as BERT, RoBERTa, XLM-R). The main idea is to allow for easy adaptation of pretrained language models in the source language to the target language via the usage of parallel data (integrated into model adaptation through a cross-lingual language modeling (CdLM) objective), without the need to retrain the whole model from scratch. In other words, the idea seems to leverage some cross-linguistic similarities to perform better on target language tasks, while keeping the costs of pretraining manageable. Overall, while the paper does an okay job of describing the main idea, profiling the new model in a range of tasks (in two languages only, though), and running side experiments and ablation studies, I have several major concerns with the paper, and I am unsure what exact novelty and true contributions this paper brings.

Limited novelty A. The main idea largely resembles the idea behind cross-lingual LM pretraining implemented for the XLM model. While the authors do discuss some minor differences to XLM's TLM objective in the appendix, this must be scrutinized more carefully. I am also unsure why the authors never compare to bilingual XLMs for their target languages, and it remains unanswered why this particular model should be preferred over some existing solutions based on XLM, XLM-R, etc.

Limited novelty B; missing baselines and related work. It is already known that 'bilingual tying' of pretrained LMs (both monolingual and multilingual ones) can be improved by leveraging some bilingual signal: e.g., see the work of Cao et al. (ICLR 2020), K et al. (ICLR 2020), Mulcaire et al. (CoNLL 2019), Liu et al. (CoNLL 2019), among others. However, the paper does not recognise and does not compare to this line of work at all. Therefore, leveraging parallel data as the signal for quick adaptations of pretrained LMs is definitely not new. The paper should discuss previous work adequately and also compare to these approaches leveraging parallel data.

Inconclusive results. Besides lacking several baseline models, even the provided comparisons do not really show the core benefits of the proposed approach: the results, if better at all, are only marginally above the baselines (from BERT and m-BERT). Considering that recent larger models (such as XLM-R) offer even stronger results on a range of well-established tasks, I wonder if these marginal improvements would be visible in that setup as well. Even the argument on the 'economic benefits' (from Table 4) does not hold completely: given the training times, why should one opt for the new approach instead of training target-language models such as RoBERTa, XLNet, or ELECTRA directly? I fail to see a true benefit here. Further, given that some previous work on improving/aligning monolingual and multilingual LMs applies the 'transfer adaptation' post-hoc (i.e., after training), the whole argument is even more fragile.

A small number of target languages.
Given recent evaluation initiatives in cross-lingual transfer learning (e.g., the work on the XTREME and XGLUE benchmarks, not cited at all), I wonder why the authors do not provide a wider-scale evaluation on a larger set of languages. This would also allow the authors to analyse the benefits of their method (which takes into account vocabulary alignment and order information) with respect to different target languages and their properties. As said, while the paper works on a very important problem, I currently do not see it delivering anything beyond what already exists in this crowded area, and minor and inconsistent improvements over an incomplete set of baselines fail to fully convince the reader of the efficacy and usefulness of the proposed method.
ICLR
Title Cross-lingual Transfer Learning for Pre-trained Contextualized Language Models Abstract Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant. In this work, building upon the recent works connecting cross-lingual transfer learning and neural machine translation, we propose a novel cross-lingual transfer learning framework for PrLMs: TRELM. To handle the symbol order and sequence length differences between languages, we propose an intermediate “TRILayer” structure that learns from these differences and creates a better transfer in our primary translation direction, as well as a new cross-lingual language modeling objective for transfer training. Additionally, we showcase an embedding alignment method that adversarially adapts a PrLM's non-contextualized embedding space and the TRILayer structure to learn a text transformation network across languages, which addresses the vocabulary difference between languages. Experiments on both language understanding and structure parsing tasks show the proposed framework significantly outperforms language models trained from scratch with limited data in both performance and efficiency. Moreover, despite an insignificant performance loss compared to pre-training from scratch in resource-rich scenarios, our transfer learning framework is significantly more economical. 1 INTRODUCTION Recently, the pre-trained contextualized language model has greatly improved performance in natural language processing tasks and allowed the development of natural language processing to extend beyond the ivory tower of research to more practical scenarios. Despite their convenience of use, PrLMs currently consume and require increasingly more resources and time. In addition, most of these PrLMs are concentrated in English, which prevents the users of different languages from enjoying the fruits of large PrLMs. Thus, transferring the knowledge of language models from one language to another is an important task for two reasons. First, many languages do not have the data resources that English uses to train such massive and data-dependent models. This causes a disparity in the quality of models available to English users and users of other languages. Second, languages share many commonalities, so for efficiency's sake, transferring knowledge between models rather than wasting resources training new ones is preferable. Multilingual PrLMs (mPrLMs) also aim to leverage languages' shared commonalities and lessen the number of language models needed, but they accomplish this by jointly pre-training on multiple languages, which means when they encounter new languages, they need to be pre-trained from scratch again, which causes a waste of resources. This is distinct from using TRELM to adapt models to new languages because TRELM foregoes redoing massive pre-training and instead presents a much more lightweight approach for transferring a PrLM. mPrLMs can risk their multilingualism and finetune on a specific target language, but we will demonstrate that using TRELM to transfer an mPrLM actually leads to better performance than solely finetuning.
Therefore, in order to allow more people to benefit from the PrLM, we aim to transfer the knowledge stored in English PrLMs to models for other languages. The differences in training for new languages with mPrLMs and TRELM are shown in Figure 1. Machine translation, perhaps the most common cross-lingual task, is the task of automatically converting source text in one language to text in another language; that is, the machine translation model converts the input consisting of a sequence of symbols in some language into a sequence of symbols in another language; i.e., it follows a sequence-to-sequence paradigm. Language has been defined as “a sequence that is an enumerated collection of symbols in which repetitions are allowed and order does matter” (Chomsky, 2002). From this definition, we can derive three important differences in the sequences of different languages: symbol sets, symbol order, and sequence length, which can also be seen as three challenges for machine translation and three critical issues that we need to address in migrating a PrLM across languages. In this work, to resolve these critical differences in language sequences, we propose a novel framework that enables rapid cross-lingual transfer learning for PrLMs and reduces loss when only limited monolingual and bilingual data are available. To address the first aforementioned issue, symbol sets, we employ a new shared vocabulary and adversarially align our target embedding space with the raw embeddings of the original PrLMs. For the symbol order and sequence length issues, our approach draws inspiration from neural machine translation methods that overcome the differences between languages (Bahdanau et al., 2014), and we thus propose a new cross-lingual language modeling objective, CdLM, which tasks our model with predicting, for a given sentence, the tokens of its parallel sentence in the target language. To facilitate this, we also propose a new “TRILayer” structure, which acts as an intermediary layer that evenly splits our model's encoder layer set into two halves and serves to convert the source representations to the length and order of the target language. Using parallel corpora for a given language pair, we train two models (one in each translation direction) initialized with the desired pre-trained language model's parameters. Combining the first half of our target-to-source model's encoder layer set and the second half of our source-to-target model's encoder layer set, we are thus able to create a full target-to-target language model. During training, we use three separate phases for the proposed framework, where combinations of Masked Language Modeling (MLM), the proposed CdLM, and other secondary language modeling objectives are used. We conduct extensive experiments on Chinese and Indonesian, as well as German and Japanese (shown in Table 10 in the appendix), in challenging situations with limited data, transferring knowledge from English PrLMs. On several natural language understanding and structure parsing tasks, BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) PrLM models that we migrate using our proposed framework improve the performance of downstream tasks compared to monolingual models trained from scratch and models pre-trained in a multilingual setting. Moreover, statistics show that our framework also has advantages in terms of training costs.
2 RELATED WORK Because of neural networks’ reliance on heavy amounts of data, transfer learning has been an increasingly popular method of exploiting otherwise irrelevant data in recent years. It has seen many applications and has been used particularly often in Machine Translation (Zoph et al., 2016; Dabre et al., 2017; Qi et al., 2018; Nguyen & Chiang, 2017; Gu et al., 2018; Kocmi & Bojar, 2018; Neubig & Hu, 2018; Kim et al., 2019; Aji et al., 2020), in which transfer learning is generally used to improve translation performance in a low resource scenario using the knowledge of a model trained in a high resource scenario. In addition to cross-lingual situations, transfer learning has also been applied to adapt across domains in the POS tagging (Schnabel & Schütze, 2013) and syntactic parsing (McClosky et al., 2010; Rush et al., 2012) tasks, for example, as well as specifically for adapting language models to downstream tasks (Chronopoulou et al., 2019; Houlsby et al., 2019). One particular difference between our method and many transfer learning methods is that we do not exactly use the popular ”Teacher-Student” framework of transfer learning, which is particularly often used in knowledge distillation (Hinton et al., 2015; Sanh et al., 2020) - transferring knowledge from a larger model to a smaller model. We instead use two ”student” models, and unlike traditional methods, these student models do not share a target space with their teacher (the language is different), and their parameters are initialized with the teacher’s parameters rather than being probabilistically guided by the teacher during training. When using transfer learning for cross-lingual training, there have been various solutions for the vocabulary mismatch. Zoph et al. (2016) did not find vocabulary alignment to be necessary, while Nguyen & Chiang (2017) and Kocmi & Bojar (2018) used joint vocabularies, and Kim et al. (2019) made use of cross-lingual word embeddings. One particular work that inspired us is that of Lample et al. (2018), who also used an adversarial approach to align word embeddings without any supervision while achieving competitive performance for the first time. This succeeded the work of Zhang et al. (2017), who also used an adversarial method but did not achieve the same performance. Also like our aligning method, Xu et al. (2018) took advantage of the similarities in embedding distributions and cross-lingually transferred monolingual word embeddings by simultaneously optimizing based on distributional similarity in the embedding space and the back-translation loss. Several works have also explored adapting the knowledge of large contextualized pre-trained language models to more languages, which pose a much more complicated problem compared to transferring non-contextualized word embeddings. The previous mainstream approach for accommodating more languages is using mPrLMs. Implicitly joint multilingual models, such as m-BERT (Devlin et al., 2019), XLM (Conneau & Lample, 2019), XLM-R (Conneau et al., 2019), and mBART (Liu et al., 2020), are usually evaluated on multi-lingual benchmarks such as XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020), while some works use bilingual dictionaries or sentences for explicit cross-lingual modeling with mPrLMs (Schuster et al., 2019; Mulcaire et al., 2019; Liu et al., 2019a; Cao et al., 2020). Transferring monolingual PrLMs, another research branch, is relatively new. Artetxe et al. 
(2020) presented a monolingual transformer-based masked language model that was competitive with multilingual BERT when transferred to a second language. To facilitate this, they did not rely on a shared vocabulary or joint training (to which multilingual models' performance is often attributed) and instead simply learned a new embedding matrix through MLM in the new language while freezing the parameters of all other layers. Tran (2020) used a similar approach, though instead of randomly initialized embeddings, he used a sparse word translation matrix on English embeddings to create word embeddings in the target language, reducing the training cost of the model. 3 TRELM Cross-lingual Transfer Learning for Language Modeling (TRELM) is a framework that rapidly migrates existing PrLMs. In this framework, the embedding space of a source language is linearly aligned with that of a target language using an adversarial embedding alignment, which we experimentally verified was effective due to shared spatial structure similarities (refer to Appendix A.1 for details). Leveraging joint learning, we propose a novel pre-training objective, CdLM, and unify it with MLM into one format. In regard to model structure, we propose TRILayer, an intermediary transfer layer, to support language conversion during the CdLM training process. 3.1 TRILAYER AND CdLM For the disparities in symbol sets of different languages and different pre-trained models, we employ embedding space alignment. For the issues of symbol order and sequence length, unlike previous work, we do not assume that the model can implicitly learn these differences; instead, we leverage language embeddings and explicit alignment information and propose a novel Cross-Lingual Language Modeling (CdLM) training objective and a Transfer Learning Intermediate Layer (TRILayer) structure as a pivot layer in the model to bridge the differences between the two languages. To clearly explain our training approach, we take the popular PrLM BERT as a basis for introduction. In the original BERT (as shown in Figure 5(a)), Transformer (Vaswani et al., 2017) is taken as the backbone of the model, which takes tokens and their positions in a sequence as input before encoding this sequence into a contextualized representation using multiple stacked multi-head self-attention layers. During the pre-training process, BERT predominantly adopts an MLM training objective, in which a [MASK] (also written as [M]) token is used to replace tokens in the sequence selected with a predetermined probability, and the original tokens are predicted as the gold targets. Formally speaking, given a sentence X = {x_1, x_2, ..., x_T} and M, the set of masked positions, the training loss L_MLM for the MLM objective is:

\mathcal{L}_{MLM}(\theta_{LM}) = -\sum_{i=1}^{|M|} \log P_{\theta_{LM}}(x_{M_i} \mid X_{\backslash M}),

where θ_LM are the parameters of BERT, |M| is the size of the set M, and X_\M indicates the sequence after masking. An example of MLM training is shown in the top-left region of Figure 5.
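Before moving on to CdLM, the masked-token loss above can be made concrete with a small PyTorch-style sketch. This is a minimal illustration rather than the paper's implementation: the stand-in encoder, LM head, vocabulary size, and 15% masking rate are all assumptions.

```python
import torch
import torch.nn.functional as F

def mlm_loss(encoder, lm_head, token_ids, mask_token_id, mask_prob=0.15):
    """Minimal MLM loss: corrupt a random subset of positions and score
    only the original tokens at those positions."""
    labels = token_ids.clone()
    mask = torch.rand(token_ids.shape) < mask_prob           # masked set M
    corrupted = token_ids.masked_fill(mask, mask_token_id)   # X_\M

    hidden = encoder(corrupted)            # (batch, seq_len, d_model)
    logits = lm_head(hidden)               # (batch, seq_len, vocab)

    labels[~mask] = -100                   # ignore positions outside M
    return F.cross_entropy(logits.view(-1, logits.size(-1)),
                           labels.view(-1), ignore_index=-100)

# Toy usage with stand-in modules (not a real PrLM):
vocab, d_model = 100, 32
encoder = torch.nn.Sequential(torch.nn.Embedding(vocab, d_model),
                              torch.nn.Linear(d_model, d_model))
lm_head = torch.nn.Linear(d_model, vocab)
tokens = torch.randint(1, vocab, (2, 16))
print(mlm_loss(encoder, lm_head, tokens, mask_token_id=0))
```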
Much work in the field of machine translation suggests that the best way to transfer learning across languages is through translation learning because the machine translation model must address all three of the above-described language differences in the training process. Therefore, we take inspiration from the design of machine translation, especially the design of non-autoregressive machine translation, and propose a Cross-Lingual Language Modeling (CdLM) objective. CdLM is just like a traditional language modeling objective, except across languages, so given an input of source tokens, it generates tokens in a separate target language. We describe the differences between CdLM and related MLM variants (such as Translation Language Modeling (TLM) and BRidge Language Modeling (BRLM)) in Appendix A.4. With this proposed objective, we aim to make as few changes as possible to the existing PrLM and thus introduce a Translation/Transfer Intermediate Layer (“TRILayer”) structure, which bridges two opposing half-models to create our final model. First, in the modified version of BERT for transfer learning, we add a language embedding E_lng following the practice of Conneau & Lample (2019) to indicate the current language being processed by the model. This is important because the model will handle both the source and target languages simultaneously in 2 of our 3 training phases (described in the next subsection). The new input embedding is:

E_{inp} = E_{wrd} + E_{seg} + E_{pos} + E_{lng},

where E_{wrd}, E_{seg}, and E_{pos} are the word (token) embedding, segment embedding, and position embedding, respectively. Next, we denote N as the number of stacked Transformer layers (L = {l_1, l_2, ..., l_N}) in BERT and split the BERT layers into two halves L_{≤N/2} = {l_1, ..., l_{N/2}} and L_{>N/2} = {l_{N/2+1}, l_{N/2+2}, ..., l_N}. The TRILayer is placed between the two halves (making the total number of layers N+1) and functions as a pivot. In the L_{≤N/2} half, the input embedding is encoded by its Transformer layers to hidden states H_i = TRANSFORMER_i(H_{i−1}), in which H_0 = E_{inp} and TRANSFORMER_i indicates the i-th Transformer layer in the model. Before the outputs of the L_{≤N/2} half are fed into the TRILayer, the source hidden representation H_{N/2} is reordered according to a new order O. During CdLM training, for a source language sentence X = {x_1, x_2, ..., x_T}, a possible translation sentence Y = {y_1, y_2, ..., y_{T'}} is provided. To find the new order, explicit alignment information between the transfer source and target sentences is obtained using an unsupervised external aligner tool. We define the source-to-target alignment pair set as:

A_{X→Y} = ALIGN(X, Y) = {(x_{ALNIDX(y_1)}, y_1), (x_{ALNIDX(y_2)}, y_2), ..., (x_{ALNIDX(y_{T'})}, y_{T'})},

where ALNIDX(·) is a function that returns the alignment index in the source language, or x_null when there is no explicit alignment between the token in the target language and any source language token. x_null represents a special placeholder token [P] that is always appended to the inputs. Finally, the source hidden representation H_{N/2} is reordered according to the new order O = {ALNIDX(y_1), ALNIDX(y_2), ..., ALNIDX(y_{T'})} from the alignment set A_{X→Y}, creating H^O_{N/2}. Thus, the resultant hidden representation H^O_{N/2} is in the order of the target language and is consistent with the target sequence in length, making it usable for language modeling prediction. Unfortunately, the position information is lost in reordering. To combat this, the position embedding and language embedding will be reintegrated as follows:

H_{TL} = TRANSFORMER_{TL}(H^O_{N/2} + E_{lng_Y} + E_{pos}),

where H_{TL} is the output of the TRILayer, TRANSFORMER_{TL} is the Transformer structure inside the TRILayer, and E_{lng_Y} is the target language embedding. Next, H_{TL} is encoded in the L_{>N/2} half as done for the L_{≤N/2} half (let H_{N/2} = H_{TL} for the L_{>N/2} half) to predict the final full sequence of the target language. The model is trained to minimize the loss L_CdLM, which is:

\mathcal{L}_{CdLM}(\theta_{LM}) = -\sum_{i=1}^{T'} \log P_{\theta_{LM}}(y_i \mid X, A_{X→Y}).
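To make the reordering step that feeds the TRILayer concrete, the sketch below reorders lower-half hidden states according to an alignment-derived order O, substitutes the [P] placeholder state for unaligned target positions, and re-injects the position and target-language embeddings. It is a simplified, unbatched illustration; the function name, the placeholder handling, and the table sizes are assumptions rather than the released implementation.

```python
import torch

def reorder_for_trilayer(h_src, order, h_placeholder, pos_emb, tgt_lang_emb):
    """Reorder lower-half hidden states into target-language order for CdLM.

    h_src:         (src_len, d) outputs of the lower half L_{<=N/2}
    order:         list of length tgt_len; order[i] is the source index aligned
                   to target position i, or -1 when there is no alignment
    h_placeholder: (d,) hidden state of the special [P] placeholder token
    pos_emb:       (max_len, d) position embedding table
    tgt_lang_emb:  (d,) embedding of the target language
    """
    rows = [h_src[j] if j >= 0 else h_placeholder for j in order]
    h_reordered = torch.stack(rows)                  # H^O_{N/2}: (tgt_len, d)
    # Re-inject the target-language and position information lost in reordering.
    return h_reordered + tgt_lang_emb + pos_emb[: len(order)]

# Toy example: a 4-token source aligned to a 5-token target with one null link.
d = 8
h_src = torch.randn(4, d)
out = reorder_for_trilayer(h_src, [2, 0, 1, -1, 3], torch.zeros(d),
                           torch.randn(16, d), torch.randn(d))
print(out.shape)  # torch.Size([5, 8])
```

The result has the target sequence's length and order, so the upper half of the model can score the target tokens position by position, as in the CdLM loss above.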
To enable MLM and CdLM to train models simultaneously rather than through successive optimization, we provide a unified view for MLM and CdLM language modeling:

\mathcal{L}_{ULM}(\theta_{LM}) = -\sum_{i=1}^{T_{max}} \mathbb{1}(i \in C)\, \log P_{\theta_{LM}}(w_i \mid S, A),

where T_max denotes the maximum sequence length for language modeling, S is the input sequence, w_i is the i-th token in the output sequence W, C is the set of positions to be predicted, and A is the alignment between the input and output sequences. Both the input and output sequences are padded to the maximum sequence length T_max during training. \mathbb{1}(i ∈ C) represents the indicator function and equals 1 when the i-th position is in the set of positions to be predicted and 0 otherwise. In MLM, S = X_\C, A = {(1, 1), (2, 2), ..., (T_max, T_max)} is a successive alignment, and W = X, while in CdLM, S = X, A = A_{X→Y}, and W = Y. Due to the unified language modeling abstractions of MLM and CdLM, the input and output forms, as well as the internal logic of their models, are the same. Therefore, models can be trained with the two objectives in the same mini-batch, which enhances the stability of transfer training. 3.2 TRIPLE-PHASE TRAINING In our TRELM framework, the whole training process is divided into three phases with different purposes but the same design goal: minimize the number of parameter updates as much as possible to speed up convergence and enhance training stability. The three phases are commonality training, transfer training, and language-specific training. In the commonality learning phase, only the target language MLM objective is used, while in the transfer learning phase, the CdLM and target language MLM objectives are both used at the same time, and in the final language-specific learning phase, target language MLM and other secondary language modeling objectives are adopted. Commonality Training Though languages are very different on the surface, they also share a lot of underlying commonalities, often called linguistic universals or cross-linguistic generalizations. We therefore take advantage of these commonalities between languages and jointly learn the transferring source and target languages. In this phase, the parameters of the position embedding, segment embedding, and Transformer layers are initialized with the original BERT, the TRILayer is initialized with the parameters of Transformer layer l_{N/2}, the word embedding is initialized with the output of the adversarial embedding alignment, and orthogonal weight initialization is adopted for the language embedding. For this phase, the model is trained by joint MLM with monolingual inputs from both the source and target languages. Moreover, in this training process, to make convergence fast and stable, the parameters of BERT's backbone (Transformer) layers are fixed; only the embeddings and TRILayer are updated by gradient-based optimization based on the joint MLM loss. The final model obtained in this phase is denoted as θ^{ct}_{LM}.
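Before turning to the transfer training phase, the unified MLM/CdLM view above can be illustrated with a small data-preparation sketch that casts an MLM example and a CdLM example into the same (S, W, A, C) format so they can share a mini-batch. The class and function names are illustrative, not from the released code.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class UnifiedLMExample:
    source: List[str]                  # S: input sequence
    target: List[str]                  # W: output sequence
    alignment: List[Tuple[int, int]]   # A: (input position, output position)
    predict_positions: List[int]       # C: positions scored by the loss

def as_mlm(tokens: List[str], masked: List[int], mask_token="[MASK]"):
    # MLM: S is the masked sentence, W the original, A the successive alignment.
    source = [mask_token if i in masked else t for i, t in enumerate(tokens)]
    return UnifiedLMExample(source, tokens,
                            [(i, i) for i in range(len(tokens))], masked)

def as_cdlm(src: List[str], tgt: List[str], align: List[Tuple[int, int]]):
    # CdLM: S is the source sentence, W the target sentence, and every
    # target position is predicted.
    return UnifiedLMExample(src, tgt, align, list(range(len(tgt))))

print(as_mlm(["the", "cat", "sat"], masked=[1]))
print(as_cdlm(["the", "cat"], ["貓", "這", "隻"], [(1, 0), (0, 2)]))
```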
Transfer Training Since the model is not pre-trained from scratch, making the model aware of changes in inputs is a critical factor for a maximally rapid and accurate migration in the case of limited data. Since there is not enough monolingual data in the target language to allow the model to adapt to the new language, we use the supervisory signal from the two languages' differences and leverage parallel corpora to directly train the model. Specifically, we split the original BERT transformer layers into two halves. With a parallel corpus from the source language to the target language and one from the target language to the source language, we train two corresponding models, both of which are initialized using the parameters learned in the previous phase. In the source-to-target model, only the upper half of the encoder layers is trained, and the lower half is kept fixed, while the converse is true for the target-to-source model. The TRILayer then provides cross-lingual order and length adjustment, which is similar to the behavior of a neural machine translation model. Thus, we create two reciprocal models: one whose upper half can handle the target language, and one whose lower half can handle it, which we connect via the TRILayer. Finally, the two trained models are combined as θ^{tt}_{LM}. We describe the full procedure in Algorithm 1.

Algorithm 1 Transfer Training of Pre-trained Contextualized Language Models
Input: the commonality pre-trained model parameters θ^{ct}_{LM}; languages L = {lng_X, lng_Y}; parallel training set P = {(X^{L_0}_i, X^{L_1}_i)}_{i=1}^{|P|}; number of training steps K
1: for j in 0, 1 do
2:   Initialize model parameters θ^{L_j→L_{1−j}}_{LM} ← θ^{ct}_{LM}
3:   if j == 0 then
4:     Fix the parameters of the L_{≤N/2} half of θ^{L_j→L_{1−j}}_{LM}
5:   else
6:     Fix the parameters of the L_{>N/2} half of θ^{L_j→L_{1−j}}_{LM}
7:   end if
8:   for step in 1, 2, 3, ..., K do
9:     Sample batch (X^{L_j}, X^{L_{1−j}}) from P
10:    Alignment information: A_{L_j→L_{1−j}} ← ALIGN(X^{L_j}, X^{L_{1−j}})
11:    CdLM loss: L_CdLM ← −∑ log P_{θ^{L_j→L_{1−j}}_{LM}}(X^{L_{1−j}} | X^{L_j}, A_{L_j→L_{1−j}})
12:    Masked version of X^{L_1}: X^{L_1}_{\M} ← MASK(X^{L_1})
13:    MLM loss: L_MLM ← −∑ log P_{θ^{L_j→L_{1−j}}_{LM}}(X^{L_1}_{M} | X^{L_1}_{\M})
14:    CdLM+MLM update: θ^{L_j→L_{1−j}}_{LM} ← optimizer_update(θ^{L_j→L_{1−j}}_{LM}, L_CdLM, L_MLM)
15:  end for
16: end for
17: Combine the two obtained models as θ^{tt}_{LM} by choosing the L_{>N/2} half model parameters from model θ^{L_0→L_1}_{LM} and the L_{≤N/2} half model parameters from model θ^{L_1→L_0}_{LM}, and average the other parameters (such as the embedding and TRILayer parameters) of the two models
Output: learned model θ^{tt}_{LM}
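As a concrete reading of the combination step at the end of Algorithm 1, the sketch below assembles the final target-language model from the two direction-specific models: the lower half comes from the target-to-source model, the upper half from the source-to-target model, and the shared parts are averaged. The attribute layout (.layers, .embeddings, .trilayer) is a hypothetical module structure used only for illustration.

```python
from copy import deepcopy

def combine_half_models(model_src2tgt, model_tgt2src):
    """Assemble the final target->target model (last step of Algorithm 1)."""
    n = len(model_src2tgt.layers)
    combined = deepcopy(model_src2tgt)
    # Lower half (<= N/2) from the target->source model.
    for i in range(n // 2):
        combined.layers[i].load_state_dict(model_tgt2src.layers[i].state_dict())
    # Upper half (> N/2) from the source->target model.
    for i in range(n // 2, n):
        combined.layers[i].load_state_dict(model_src2tgt.layers[i].state_dict())
    # Average the remaining shared parameters (embeddings, TRILayer).
    for part in ("embeddings", "trilayer"):
        sd_a = getattr(model_src2tgt, part).state_dict()
        sd_b = getattr(model_tgt2src, part).state_dict()
        avg = {k: 0.5 * sd_a[k] + 0.5 * sd_b[k] for k in sd_a}
        getattr(combined, part).load_state_dict(avg)
    return combined
```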
Language-specific Training During the language-specific training phase, we only use the monolingual corpus of the target language and further strengthen the target language features for the model obtained in the transfer training phase. We accomplish this by using the MLM objective and other secondary objectives such as Next Sentence Prediction (NSP). 4 EXPERIMENTS In this section, we discuss the details of the experiments undertaken for this work. We conduct experiments based on English PrLMs1. We transfer via the English-to-Chinese and English-to-Indonesian directions for the purpose of comparing with previous recent work. We describe the training details and parameters in Appendix A.5. From English to Chinese and English to Indonesian, we transfer two pre-trained contextualized language models: BERT and RoBERTa. Our performance evaluation on the migrated models is mainly conducted on two types of downstream tasks: language understanding and language structure parsing. Please refer to Appendix A.6 for introductions of tasks and baselines and Appendix A.7 for an ablation study. We note that the comparisons between models trained using TRELM and the monolingual and multilingual PrLMs trained from scratch on the target language (see Table 1) are only for illustrating the relative performance loss of the model produced by TRELM. 1Our code is available at https://github.com/agcbi2017/TreLM. These models are not directly comparable, as we intentionally use less data to train models when using TRELM. Continuing to pre-train the PrLMs on the target language would also obviously further improve their performance, but this is not our main focus. Language Understanding We first compare the PrLMs transferred by TRELM alongside the results of the existing monolingual pre-trained BERT-base-chinese and the multilingual pre-trained BERT-base-multilingual in Table 1 using the CLUE benchmark. When comparing with the same model architecture, taking BERT as an example, our model TRI-BERT-base exceeds m-BERT-base and BERT-small and is slightly weaker than the original BERT-base. Compared with BERT-small, which is trained from scratch for a longer time, our TRI-BERT-base generally achieves better results on these NLU tasks. This demonstrates that because of the commonalities of languages, models for languages with relatively few resources can benefit from language models pre-trained on languages with richer resources, which confirms our cross-lingual transfer learning framework's effectiveness. m-BERT is another potential language model migration scheme and has the advantage of supporting multiple languages at the same time; however, in order to be compatible with multiple languages, the unique characteristics of each language are neglected. Our TRI-BERT, which is built on top of BERT-base, instead focuses on and highlights language differences during the transfer learning process, which leads to an increase in performance compared to m-BERT. When TRI-BERT and TRI-RoBERTa have the same model size, TRI-RoBERTa outperforms TRI-BERT, which is consistent with the performance differences between the original RoBERTa and BERT, indicating that our migration approach maintains the performance advantages of PrLMs.

Table 3: Dependency SRL results on the CoNLL-2009 Chinese benchmark.
Model                            P      R      F1
(Cai et al., 2018)               84.7   84.0   84.3
+BERT-base                       86.86  87.48  87.17
+m-BERT-base                     85.17  85.53  85.34
+TRI-BERT-base                   86.15  85.58  85.86
+TRI-RoBERTa-base                87.08  86.99  87.03
+TRI-RoBERTa-base (w/o CdLM)     85.77  85.62  85.69

Figure 2: Language modeling effects vs. parallel data size on the evaluation set (x-axis: parallel data size, 0 to 1M sentences; left y-axis: BPW; right y-axis: Sem-F1).

Language Structure Parsing We report results on dependency parsing for Chinese and Indonesian in Table 2. As shown in the results, the baseline model is greatly improved by the PrLM. In Chinese, the performance of BERT-base is far superior to m-BERT-base, which highlights the importance of the unique nature of the language for downstream tasks, especially for refined structural analysis tasks. In Indonesian, IndoBERT (Wilie et al., 2020) performs worse than m-BERT, which we suspect is due to IndoBERT's insufficient pre-training. We also compare TRI-BERT-base and IndoBERT-base on Indonesian, whose ready-to-use language resources are relatively small compared to English. We find that although pre-training PrLMs on the available corpora is possible, because of the size of language resources, engineering implementation, etc., our migrated model is more effective than the model pre-trained from scratch. This shows that migrating from ready-made language models produced from large-scale language training and extensively validated by the community is more effective than pre-training on relatively small and limited language resources. In addition, we also conduct experiments for these pre-trained and migrated models on Chinese SRL.
mPrLMs are another important and competitive approach that can adapt to cross-lingual PrLM applications, so we also include several mPrLMs in our comparison on dependency parsing. Specifically, we used XLM, a monolingual and multilingual PrLM pre-training framework, as our basis. For TRELM, we used XLM-en-2048, officially provided by Conneau & Lample (2019), as the source model. The data amount used and the number of training steps are consistent with TRI-BERT/TRI-RoBERTa. For the mPrLM, we combined EN, ID, and ZH sentences (including monolingual and parallel sentences) together (10M sentences in total) to train an EN-ID-ZH mPrLM with MLM and TLM objectives. The performance comparison of these three PrLMs on the dependency parsing task is shown in the lower part of Table 2. From the results, we see that mPrLMs pre-trained from scratch have no special performance advantage over TRELM when the corpus size is held constant, especially when not using the cross-lingual transfer learning objective TLM, which models parallel sentences. In fact, our TRI-XLM-en-2048 solidly outperforms its two multilingual XLM counterparts. Monolingual PrLMs generally outperform mPrLMs, which likely leads to the performance advantages shown with monolingual migration. Additionally, like our TRELM, mPrLMs can also finetune on only the target language to improve performance, and leveraging TRELM to transfer an mPrLM leads to even further gains, as seen in Table 9 in the appendix. While the two approaches can compete with each other, they have their own advantages in general. In particular, TRELM is more suitable for transferring additional languages that were not considered in the initial pre-training phase and for low-resource scenarios, while mPrLMs have the advantage of being able to train and adapt to multiple languages at once. In Table 3, we compared a model migrated without CdLM to the full one. To compensate for the removal of CdLM, we added a monolingual corpus with the same size as the parallel corpora and trained the model with an extra 80K steps, but despite using more target monolingual data and training steps, the performance was still much better when CdLM was included. 5 DISCUSSION Effects of Parallel Data Scale Since the proposed TRELM framework relies on parallel corpora to learn the language differences explicitly, the sizes of the parallel corpora used are also of concern. We explored the influence of different parallel corpus sizes on the performance of the models transferred with the TRI-RoBERTa-base architecture. The variation curve of the BPW score with the size of parallel data is shown in Figure 2. We see that with increasingly more parallel data, BPW gradually decreases, but this decrease slows as the data grows. The effect of the parallel corpora for cross-lingual transfer therefore has an upper bound because when the parallel corpus reaches a certain size, the errors from the alignment extraction tools cannot be ignored, and additionally, due to how lightweight the TRILayer structure is, TRILayers can only contain so much cross-lingual transfer information, which further restricts the growth of the migration performance. Pre-training Cost vs. Migration Training Cost The training cost is an important factor for choosing whether to pre-train from scratch or to migrate from an existing PrLM. We list the training data size, model parameters, training hardware, and training time of several public PrLM models and compare them with our models. The comparisons are shown in Table 4.
Although the training hardware and engineering implementation of various PrLM models are different, this can still be used as a general reference. When model size is the same, our proposed transfer learning is much faster than pre-training from scratch, and less data is used in the transfer learning process. In addition, the total training time of our large model migration training is less than that of even the base model pre-training when hardware is kept the same. Therefore, the framework we proposed can be used as a good supplementary scheme for the PrLM in situations when time or computing resources are restricted. Table 4 (columns: Model, Data, BSZ, Steps, Params, Hardware, Train Time, G/TPU·Days). 6 CONCLUSION AND FUTURE WORK In this work, we present an effective method of transferring knowledge from a given language's pre-trained contextualized language model to a model in another language. This is an important accomplishment because it allows more languages to benefit from the massive improvements arising from these models, which have been primarily concentrated in English. As a further plus, this method also enables more efficient model training, as languages have commonalities, and models in the target language can exploit these commonalities and quickly adopt these common features rather than learning them from scratch. In future work, we plan to use our framework to transfer other models such as ALBERT and models for more languages. We also aim to develop an unsupervised cross-lingual transfer learning objective to remove the reliance on parallel sentences. A APPENDIX A.1 ADVERSARIAL EMBEDDING ALIGNING Since the symbol sets in different languages are different, the first step in the cross-lingual migration of PrLMs is to supplement or even replace their vocabularies. In our proposed framework, to make the best use of the commonalities between languages, we choose to use a shared vocabulary with multiple languages rather than replace the original language vocabulary with one for the new language. In addition, in current PrLMs, a subword vocabulary is generally adopted in order to better mitigate out-of-vocabulary (OOV) problems caused by limited vocabulary size. To accommodate the introduction of a shared vocabulary, it is necessary to jointly re-train the subword model to ensure that some common words in different languages are consistent in subword segmentation, which leads to the problem that some tokens in the newly acquired subword vocabulary are different from those in the original subword vocabulary, though they belong to the same language. To address this issue, we consider the most complicated case, in which the vocabulary is completely replaced by a new one. Consequently, we assume that there are two embedding spaces: one is the embedding of the original vocabulary, which is well-trained in the language pre-training process, and the other is the embedding of the new vocabulary, yet to be trained. When considering raw embeddings and non-contextualized embeddings (e.g. Word2vec), it is easy to see their training objectives are similar in theory. The only differences are the addition of context and the change in model structure to accommodate language prediction. Despite these differences, non-contextualized embeddings can be used to simulate the raw embeddings in a PrLM that we aim to replace (refer to Appendix A.2 for a detailed explanation).
Although the two embedding spaces we consider are similar in structure, they may be at different positions in the whole real embedding space, so an extra alignment process is required, and although common tokens may exist, due to the inconsistent token granularity from using byte-level byte-pair encoding (BBPE) (Radford et al., 2019), a matching token of the two embedding spaces cannot be utilized for embedding space alignment, as it is likely to represent different meanings. Therefore, inspired by Lample et al. (2018), we present an adversarial approach for aligning the word2vec embedding space to the PrLM's raw embedding space without supervision. With this approach, we aim to minimize the differences between the two embedding spaces brought about by different similarity forms. We define U = {u_1, u_2, ..., u_m} and V = {v_1, v_2, ..., v_n} as the two embedding spaces of m and n tokens from the PrLM and word2vec training, respectively. In the adversarial training approach, a linear mapping W is trained to make the spaces WV = {Wv_1, Wv_2, ..., Wv_n} and U as close as possible, while a discriminator D is employed to discriminate between tokens randomly sampled from spaces WV and U. Let θ_adv denote the parameters of the adversarial training model and the probabilities P_{θ_adv}(1(z)|z) and P_{θ_adv}(0(z)|z) indicate whether or not the sampling source prediction is the same as its real space for a vector z. Therefore, the discrimination training loss L_D(θ_D|W) and the mapping training loss L_W(W|θ_D) are defined as:

\mathcal{L}_D(\theta_D \mid W) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_{adv}}(1(Wv_i) \mid Wv_i) - \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_{adv}}(1(u_i) \mid u_i),

\mathcal{L}_W(W \mid \theta_D) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_{adv}}(0(Wv_i) \mid Wv_i) - \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_{adv}}(0(u_i) \mid u_i),

where θ_D are the parameters of discriminator D, which is implemented as a multilayer perceptron (MLP) with two hidden layers and Leaky-ReLU as the activation function. During the adversarial training, the discriminator parameters θ_D and W are optimized successively with the discrimination training loss and the mapping training loss. To enhance the effect of embedding space alignment, we adopted the same techniques of iterative refinement and cross-domain similarity local scaling as Lample et al. (2018) did. While the two embedding spaces in Lample et al. (2018) both can be updated by gradient, we consider U as the goal spatial structure and hence fix U throughout the training process, and we update W to better align V.
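A minimal sketch of this alternating optimization is given below, written with the standard binary cross-entropy form of the two losses above. The embedding dimension, discriminator width, optimizers, number of steps, and sampling scheme are illustrative assumptions rather than the paper's settings; as described, U stays fixed and only W and D are updated.

```python
import torch
import torch.nn as nn

d = 300                                  # embedding dimension (illustrative)
U = torch.randn(5000, d)                 # frozen PrLM raw embeddings
V = torch.randn(4000, d)                 # word2vec embeddings to be aligned

W = nn.Linear(d, d, bias=False)          # linear mapping W
D = nn.Sequential(nn.Linear(d, 512), nn.LeakyReLU(),
                  nn.Linear(512, 512), nn.LeakyReLU(),
                  nn.Linear(512, 1))     # discriminator with two hidden layers
opt_d = torch.optim.SGD(D.parameters(), lr=0.1)
opt_w = torch.optim.SGD(W.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    u = U[torch.randint(0, U.size(0), (128,))]
    wv = W(V[torch.randint(0, V.size(0), (128,))])
    # Discriminator step (L_D): label mapped word2vec vectors 0, PrLM vectors 1.
    opt_d.zero_grad()
    loss_d = bce(D(wv.detach()), torch.zeros(128, 1)) + bce(D(u), torch.ones(128, 1))
    loss_d.backward()
    opt_d.step()
    # Mapping step (L_W): update W so the discriminator mislabels mapped vectors.
    opt_w.zero_grad()
    loss_w = bce(D(wv), torch.ones(128, 1))
    loss_w.backward()
    opt_w.step()
```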
A.2 ANALYZING NON-CONTEXTUALIZED EMBEDDINGS AND PrLMS' RAW EMBEDDINGS Bidirectional PrLMs such as BERT (Devlin et al., 2019) use Masked Language Modeling (MLM) as the training objective, in which the model is required to predict a masked part of the sentence. This training paradigm has no essential difference from that of word2vec (Mikolov et al., 2013). Word2vec employed a simple single-layer perceptron neural network and restricted the context for the masked part to the sliding window, while recent mainstream PrLMs adopted the self-attention-based Transformer as the context encoder, which can utilize the whole sentence as context. Because of this, we speculate that BERT's raw embeddings and word2vec embeddings have a similar nature, and that we can simulate BERT's raw embeddings with the word2vec embeddings through some special designs. To verify our theory, we studied the important relational nature of embeddings. Specifically, we chose BERT-base-cased's raw embeddings and word2vec-based FastText cc.en.300d embeddings (Grave et al., 2018) and evaluated the cosine similarity of single terms compared to other terms in their vocabularies. An example histogram for the term “genes” is shown in Figure 3. Examining the two types of embeddings, we found that the learned vectors, regardless of the type of similarity (semantic/syntactic/inflections/spelling/etc.) they capture, have a very similar distribution shape. This showed us that the two embedding spaces are similar, and words within them may just have different relations to each other. Thus, our work focuses on aligning the new word2vec embedding space by learning a mapping to the original embedding space to simulate the original embeddings and allow for a cross-lingual migration of the PrLM. To illustrate the necessity of embedding alignment, we also took out the top-50 terms closest to the term “genes” in the two embedding spaces, used principal component analysis (PCA) to reduce the vector dimension to 2, and presented it in a two-dimensional figure, as shown in Figure 4. As can be seen from the figure, due to the different language modeling architectures and contexts in FastText and BERT, corresponding points are distributed at different locations in the embedding space. This is why compatibility problems exist when we use the non-contextualized embeddings to simulate the raw embeddings and hence why we need to align the embeddings. A.3 MODEL ARCHITECTURE IN TRELM A.4 MLM, TLM, BRLM, AND CdLM As stated above, with the original MLM objective the model can only learn from monolingual data. Though joint MLM training can be performed across languages, there is still a lack of explicit language cues for guiding the model in distinguishing language differences. Conneau & Lample (2019) proposed a Translation Language Modeling (TLM) objective as an extension of the MLM objective. The TLM objective leverages bilingual parallel sentences by concatenating them into single sequences as in the original BERT and predicts the tokens masked in the concatenated sequence. This encourages the model to predict the masked part in a bilingual context. Ji et al. (2020) further proposed a BRidge Language Modeling (BRLM) objective built on TLM, benefiting from explicit alignment information or additional attention layers that encourage word representation alignment across different languages. These MLM variants drive models to learn explicit or implicit token alignment information across languages and have been shown effective in machine translation compared to the original MLM, but for the cross-lingual transfer learning of PrLMs, they still do not sufficiently model the order differences and semantic equivalence between languages. Since both contexts in MLM variants have been exposed to the model, whether the prediction of the masked part depends on the cross-lingual context or the context of its own language is unknown, as it lacks explicit clues for cross-lingual training. In our proposed CdLM, we use sentence alignment information for explicit ordering. The model is exposed to both the transfer source and transfer target languages at the same time, during which the input is a sequence of the source language, and the prediction goal is a sequence of the target language. Thus, we convert translation into a cross-language modeling objective, which gives a clear supervision signal for cross-lingual transfer learning. A.5 TRAINING DETAILS The initial weights for the migration are BERT-base-cased, BERT-large-cased, RoBERTa-base, and RoBERTa-large, which are taken from their official sources.
We use English Wikipedia, Chinese Wikipedia, Chinese News, and Indonesian CommonCrawl corpora for the monolingual pre-training data. For all models migrated in the same direction, regardless of their original vocabulary, we used the same single vocabulary that we trained on the joint language data using the WordPiece subword scheme (Schuster & Nakajima, 2012). In English-to-Chinese, the vocabulary size is set to 80K and the alphabet size is limited to 30K, while in English-to-Indonesian, the vocabulary size is set to 50K, and the alphabet size is limited to 1K. With the WordPiece vocabulary, we tokenized the monolingual corpus to train the non-contextualized word2vec embeddings of subwords. Using the fastText (Bojanowski et al., 2017) tool and skipgram representation mode, three embedding sizes (128, 768, and 1024) were trained to be compatible with the respective pre-trained language models. In the “commonality” training phase, we sampled 1M sentences of English Wikipedia and either 1M sentences of Chinese Wikipedia or 1M sentences of Indonesian CommonCrawl for the English-to-Chinese and English-to-Indonesian models. We trained the model with 20K update steps with total batch size 128 and set the peak learning rate to 3e-5. For the “transfer” training phase, we sampled 1M parallel sentences from the UN Corpus (Ziemski et al., 2016) for English-to-Chinese and 1M parallel sentences from the OpenSubtitles Corpus (Lison & Tiedemann, 2016) for English-to-Indonesian. We use the fastalign toolkit (Dyer et al., 2013) to extract the tokenized subword alignments for CdLM. The two half models are optimized over 20K update steps, and the batch size and peak learning rate are set to 128 and 3e-5, respectively. In the final phase, “language-specific” training, 2M Chinese and Indonesian sentences were sampled to update their respective models, training for 80K steps with total batch size 128 and initial learning rate 2e-5. In all the above training phases, the maximum sequence length was set to 512, weight decay was 0.01, and we used Adam (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999. In addition to our migrated pre-trained models, we also pre-trained a BERT-small2 model from scratch with data of the same size as our migration process to compare the performance differences between migration and scratch training. For the BERT-small model, we started with the BERT-base hyper-parameters and vocabulary but shortened the maximum sequence length from 512 to 128, reduced the model's hidden and token embedding dimension size from 768 to 256, set the batch size to 256, and extended the training steps to 240K. Our TRI-BERT-* and TRI-RoBERTa-* models all used the same amount of training data (2M target language monolingual sentences, 1M source language monolingual sentences, and 1M parallel sentences). BERT-small was pre-trained from scratch on only the target language, using 5M target-language sentences to ensure the training data amount was the same. Compared with the original model, the TRI-* model only has an extra TRILayer added and some changes in the embedding layer. The BERT-base-chinese and m-BERT-base models were downloaded from the official repository; they were trained with 25M sentences (much more than our 5M sentences) and more training steps. 2The performance of BERT-base for pre-training from scratch with this limited data is inferior to that of BERT-small, so we do not compare it with our migrated models.
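For reference, the non-contextualized subword embeddings mentioned above can be produced with the fastText library roughly as follows; the corpus path and output file names are placeholders, and the call simply mirrors the skipgram setting described in the text.

```python
import fasttext

# Train skipgram embeddings on the WordPiece-tokenized corpus at the three
# sizes matching the PrLMs' embedding dimensions.
for dim in (128, 768, 1024):
    model = fasttext.train_unsupervised("tokenized_corpus.txt",
                                        model="skipgram", dim=dim)
    model.save_model(f"subword_embeddings_{dim}d.bin")
```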
A.6 DOWNSTREAM TASKS Following previous contextualized language model pre-training, we evaluated the English-to-Chinese migrated language models on the CLUE benchmark. The Chinese Language Understanding Evaluation (CLUE) benchmark (Xu et al., 2020) consists of six different natural language understanding tasks: Ant Financial Question Matching (AFQMC), TouTiao Text Classification for News Titles (TNEWS), IFLYTEK (CO, 2019), Chinese-translated Multi-Genre Natural Language Inference (CMNLI), Chinese Winograd Schema Challenge (WSC), and Chinese Scientific Literature (CSL), as well as three machine reading comprehension tasks: Chinese Machine Reading Comprehension (CMRC) 2018 (Cui et al., 2019), Chinese IDiom cloze test (CHID) (Zheng et al., 2019), and Chinese multiple-Choice machine reading Comprehension (C3) (Sun et al., 2019). We built baselines for the natural language understanding tasks by adding a linear classifier on top of the “[CLS]” token to predict label probabilities. For the extractive question answering task, CMRC, we packed the question and passage tokens together with special tokens to form the input: “[CLS] Question [SEP] Passage [SEP]”, and employed two linear output layers to predict the probability of each token being the start and end positions of the answer span, following the practice for BERT (Devlin et al., 2019). Finally, in the multi-choice reading comprehension tasks, CHID and C3, we concatenated the passage, question, and each candidate answer (“[CLS] Question || Answer [SEP] Passage [SEP]”), input this to the models, and also predicted the probability of each answer on the representations from the “[CLS]” token following prior works (Yang et al., 2019; Liu et al., 2019b). In addition to these language understanding tasks, language structure analysis tasks are also a very important part of natural language processing. Therefore, we also evaluated the PrLMs on syntactic dependency parsing and semantic role labeling, a type of semantic parsing. The baselines we selected for dependency parsing and semantic role labeling are from Dozat & Manning (2016) and Cai et al. (2018), respectively. These two baseline models are very strong and efficient and rely only on pure model structures to obtain advanced parsing performance. Our approach to integrating the PrLM with the two baselines is to replace the BiLSTM encoder in the baseline with the encoder of the PrLM. We took the first subword or character representation of a word as the representation of that word, which solved the PrLM's inconsistent granularity issue that impeded parsing. For the English-to-Indonesian migrated language models, since the language understanding tasks in Indonesian are very limited, we chose to use the Universal Dependency (UD) parsing task (v2.3, Zeman et al., 2018), whose treebanks for the world's languages were built by an international cooperative project, as the downstream task for evaluation.
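The first-subword pooling used to bridge the granularity mismatch for parsing can be sketched as follows; the tensor shapes and the toy tokenization are illustrative.

```python
import torch

def first_subword_pool(hidden, word_start_index):
    """Select one vector per word from subword-level PrLM outputs.

    hidden:           (seq_len, d) contextualized subword representations
    word_start_index: for each word, the position of its first subword
    """
    idx = torch.tensor(word_start_index, dtype=torch.long)
    return hidden.index_select(0, idx)          # (num_words, d)

# Toy example: "unbelievable results" -> ["un", "##believ", "##able", "results"]
hidden = torch.randn(4, 16)
word_reprs = first_subword_pool(hidden, [0, 3])
print(word_reprs.shape)  # torch.Size([2, 16])
```

These word-level vectors then stand in for the BiLSTM encoder outputs in the parsing and SRL baselines.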
We selected newstest2020-enzh.ref.zh from the WMT-20 news translation task as the evaluation set (1,418 sentences in total) to avoid potential overlap with the training set. Subword-level bits-per-word (BPW) was used as the evaluation metric for the model’s MLM performance3. The BPW results on the evaluation set are presented in Table 5. The setting with non-contextualized fastText embedding simulation and adversarial embedding alignment achieves better BPW scores than the other configurations, which shows the effectiveness of our proposed approach. In addition, comparing the random+adversarial align and fastText pre-trained initializations shows that pre-training non-contextualized embeddings on language data is more effective than direct embedding space alignment. Comparing 20K versus 40K training steps, longer training leads to lower BPW, but the gains are smaller than those brought by our method. 3 We do this because the models in comparison use the same vocabulary, and the masked parts of the evaluation set are identical, making the BPW scores comparable. Effects of Cross-lingual Transfer Learning in TRELM We conduct further ablation studies to analyze our proposed TRELM framework’s cross-lingual transfer learning design choices, including the novel training objective, CdLM, and the TRILayer structure. The translation performance evaluation results are shown in Table 6. Using the newstest2020 en-zh and zh-en test sets, we evaluate the TRI-RoBERTa-base and TRI-RoBERTa-large models at the end of their transfer training phases. Since no alignment information is available during the evaluation phase, we use the same successive alignment that MLM uses. For the sequence generated by the model, consecutive repetitions were removed and the [SEP] token was taken as the stop mark to obtain the final translation sequence. In the EN→ZH translation direction, we report character-level BLEU, while in ZH→EN, we report word-level BLEU. The Transformer-base NMT models for comparison are from Tiedemann & Thottingal (2020) and were trained on the OPUS corpora (Tiedemann, 2012). As seen from the results, our TRI-RoBERTa-base and TRI-RoBERTa-large with CdLM obtain very good BLEU-1 scores, indicating that the mapping between the transfer source language and target language was explicitly captured by the model. When CdLM is removed and we only use the traditional joint MLM and TLM for training on parallel data of the same size, we find that the BLEU-1 score decreases significantly, demonstrating that joint MLM and TLM do not learn explicit alignment information. The BLEU-1 score is lower than that of the Transformer-base NMT model, but this is because the Transformer-base model uses more parallel corpora as well as a more complex model design compared to our non-autoregressive translation pattern and lightweight TRILayer structure. In addition, comparing with Transformer-base on BLEU-2/3/4, it can be seen that although our models can accurately translate some tokens, many tokens are not translated or are translated in the wrong order due to the lack of word ordering information and the differing sequence lengths, which results in very low scores. This also shows that word order is a very important factor in translation. Since the TRELM framework is evaluated using existing pre-trained models, our migrated models are always larger than the original ones.
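For reference, BPW here can be computed directly from the per-position masked-LM negative log-likelihoods; the small sketch below (hypothetical variable names) converts natural-log NLL values into bits per subword.

```python
# Sketch: subword-level bits-per-word (BPW) from masked-LM log-probabilities.
import math

def bits_per_word(nll_nats_per_masked_subword):
    """Average negative log2-probability per masked subword position."""
    total_bits = sum(nll / math.log(2) for nll in nll_nats_per_masked_subword)
    return total_bits / len(nll_nats_per_masked_subword)

# The masked positions are identical across the compared models, so the
# resulting BPW values are directly comparable.
print(bits_per_word([2.1, 3.4, 0.7]))  # ~2.98 bits per subword
```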
Additional parameters arise in two places: the embedding layer grows due to a larger vocabulary and the language embeddings, and the TRILayer structure adds parameters. The embedding layer growth is necessary, but the TRILayer structure is optional, as it is only used for cross-lingual transfer training. Therefore, for this ablation, we test removing the TRILayer structure for a fairer comparison4 and show the results in Table 7. Comparing the evaluation-set BPW scores of the final models obtained from RoBERTa-base under different migration methods, we found that our TRELM framework is stronger in cross-lingual transfer learning than jointly using MLM and TLM, and that it does not simply rely on the extra parameters of the TRILayer. Furthermore, applying these pre-trained language models to a downstream task, dependency parsing on the CTB 5.1 treebank, yields results consistent with the BPW scores, which shows that the BPW score does describe the performance of PrLMs and that pre-training performance greatly affects performance in downstream tasks. Comparison of Different Cross-lingual Transfer Learning Objectives As discussed in Appendix A.4, CdLM, TLM, and TLM variants such as BRLM are typical cross-lingual transfer learning objectives, in which parallel sentences are utilized for cross-lingual optimization. In order to compare these objectives empirically, we conducted a comparative experiment on TRI-RoBERTa-base. For this experiment, instead of using the transfer learning objective CdLM in the second stage of training as in our other models, we use TLM or BRLM. In addition, we follow Artetxe et al. (2020) in experimenting with the effects of a joint vocabulary versus a separate vocabulary in cross-lingual transfer learning, and we include a model, CdLM∗, with a separate vocabulary in this comparison as well. Specifically, for this model, we forego language embeddings and adopt independent token embeddings for different languages. CdLM and MLM alternately optimize the model. The empirical comparison of these objectives is listed in Table 8. The migration target language is Chinese, and the BPW score is used to compare the performance of the migrated models. We also show the dependency parsing performance on the CTB 5.1 dataset for the obtained models. Looking at CdLM and CdLM∗, in our TRELM framework, using a joint vocabulary leads to better performance than using a separate vocabulary strategy, which is not consistent with Artetxe et al. (2020)’s conclusion. We attribute this difference to the fact that Artetxe et al. (2020)’s model uses joint MLM pre-training of multiple languages to achieve implicit transfer learning, so maintaining independent embeddings is important for distinguishing the languages. In TRELM, because it trains two half-models, the explicit conversion signal guides the model’s migration training in discerning the language. When using separate vocabularies, some common information (such as punctuation, loanwords, etc.) is ignored, lessening the impact of CdLM. Second, comparing TLM, BRLM, and CdLM, we note that CdLM takes the source and target language sequences as input and output, respectively, which cooperates much better with the TRILayer and half-model training strategy, whereas TLM and BRLM combine the source and target sentences as input and predict a masked sentence as in MLM, which is much less conducive to the half-model training strategy.
Because the source and target language sentences are separate in CdLM, the model is much better able to differentiate the two languages, which makes CdLM a stronger cross-lingual transfer learning objective. Comparison with Related Cross-lingual Transfer Learning Works on mPrLMs Although we propose our method as an alternative to mPrLMs for cross-lingual transfer, it can also be applied to transfer the learning of mPrLMs. When transferring mPrLMs, the vocabulary replacement and embedding re-initialization are no longer needed, which makes our framework simpler. 4 In this setting, we train the model with the same number of update steps, using joint MLM and TLM when leveraging parallel sentences. We examine four main related approaches in the line of cross-lingual transfer learning based on PrLMs. The first approach is trivial: using data from the target language and MLM to finetune an mPrLM. This specializes the mPrLM into a PrLM for the target language. The second is ROSITAWORD (Mulcaire et al., 2019). In this method, the contextualized embeddings of the mPrLM are concatenated with non-contextualized multilingual word embeddings. This representation is then aligned across languages in a supervised manner using a parallel corpus, biasing the model toward cross-lingual feature sharing. The third, proposed by Liu et al. (2019a), makes use of MIM (Meeting-In-the-Middle) (Doval et al., 2018), which uses a linear mapping to refine the embedding alignment and is somewhat similar to our first step’s adversarial embedding alignment; however, because Liu et al. (2019a) only migrate the contextualized embeddings of an mPrLM, it is not a true migration of the model. Specifically, their linear mapping, trained as post-processing on top of the mPrLM’s contextualized embeddings, is completely different from our re-initialization of the PrLM’s raw embeddings. The fourth approach, Word-Alignment Finetune, is similar in motivation to our CdLM: it uses the alignment information of parallel corpora to finetune the model (whereas ROSITAWORD and MIM focus on language-specific post-processing of the mPrLM’s contextualized embeddings). The difference is that Word-Alignment Finetune uses contextualized embedding similarity measurements over the alignments to calculate the loss, while our method is inspired by machine translation and uses language-to-language sequence translation for cross-lingual language modeling. We evaluate the effectiveness of these methods on dependency parsing, as shown in Table 9. We chose the widely used m-BERT-base as the base mPrLM and Chinese as the target language for these experiments. The resulting models were evaluated on the CTB 5.1 data of the dependency parsing task. For ROSITAWORD, we used the word-level embeddings trained by fastText and aligned by MUSE, as done in the original paper. For MIM, the number of training steps for the linear mapping is kept the same as in our first stage’s adversarial embedding alignment training, and both train for 5 epochs. Target-Language Finetuning and Word-Alignment Finetuning use the same data as our main experiments and the same 120K update steps as well. We also list a model migrated from a monolingual PrLM (TRI-BERT) to compare the performance differences between transfer learning from monolingual and multilingual PrLMs.
Since the migrated mPrLM is simpler (it does not need to re-initialize or train embeddings) and can converge faster, we train the migrated PrLM model for more steps (400K total training steps) to compare them more fairly. Comparing our TRELM with similar methods, the concatenation of cross-lingually aligned word-level embeddings in ROSITAWORD seems to have limited effect. MIM, which uses a mapping for post-processing, leads to some improvement, but compared to Target-Language Finetune and Word-Alignment Finetune, it is clearly a weaker option. The results of TRI-m-BERT-base, Word-Alignment Finetune, and Target-Language Finetune suggest that using explicit alignment signals is advantageous compared to using only target-language monolingual data when finetuning for a limited number of update steps, though when data is sufficient and training time is long enough, the performance of cross-lingually transferred models will approach the performance of monolingually pre-trained models regardless of transfer method. Thus, the methods primarily differ in how they perform with limited data, computing resources, or time. Our TRI-m-BERT-base outperforms +Word-Alignment Finetune, which shows that our CdLM, a language sequence modeling method inspired by machine translation, is more effective than solely deriving the loss from an embedding space alignment. The results of TRI-BERT-base and TRI-m-BERT-base demonstrate that the simpler migration for m-BERT-base provides an initial performance boost when both models are trained for 120K steps, due to its faster convergence, but when they are trained for a longer 400K steps, TRI-BERT-base actually shows better performance than TRI-m-BERT-base. More Languages for a More Comprehensive Evaluation In order to demonstrate the generalization ability of the cross-lingual transfer learning of the proposed TRELM framework, we also migrate to German (DE) and Japanese (JA) in addition to Chinese and Indonesian. We also experimented with these languages on the Universal Dependency parsing task. The migrated German and Japanese TRI-BERT-base and TRI-RoBERTa-base use the same corpus size and training steps as their respective Chinese and Indonesian models. We show the results of German, Indonesian, and Japanese on UD in Table 10. Since there are no official BERT-base models for these three languages, we use third-party pre-trained models: Deepset BERT-base-german5, IndoBERT-base (Wilie et al., 2020), CL-TOHOKU BERT-base-japanese6, and NICT BERT-base-japanese7. First, according to the results in the table, our TRI-BERT-base achieves performance quite similar to that of the third-party BERT-base models and even exceeds them in some instances. This demonstrates that TRELM is a general cross-lingual transfer learning framework. Second, comparing third-party pre-trained BERT-base models and the official m-BERT-base, we found that some third-party BERTs are even less effective than m-BERT (generally speaking, m-BERT is not as good as monolingual BERT when data and training time are sufficient). This shows that in some scenarios, pre-training from scratch is not a very good choice, potentially due to insufficient data, unsatisfactory pre-training resource quality, and/or insufficient pre-training time. Compared with the well-trained monolingual BERT models, our migrated models are very competitive and can exceed PrLMs suffering from poor pre-training.
In addition, for DE and JA, we also observed that TRI-RoBERTa was stronger than TRI-BERT, indicating that our migration process maintains the performance advantage of the original model. 5 https://deepset.ai/german-bert 6 https://github.com/cl-tohoku/bert-japanese 7 https://alaginrc.nict.go.jp/nict-bert/index.html
1. What are the strengths and weaknesses of the paper regarding transferring monolingual BERT to other languages? 2. How does the reviewer assess the contribution of the paper compared to prior works like Artetxe et al. (2020) and Tran (2020)? 3. What are the concerns regarding the necessity and effectiveness of the proposed components, such as adversarial embeddings alignment and reordering layer? 4. Why did the reviewer suggest comparing the proposed method with multilingual BERT instead of English BERT? 5. How could the authors improve their work to address the reviewer's concerns and provide a clearer picture of their contributions?
Review
Review Summary Research Problem: Training BERT from scratch for a language is expensive, transferring knowledge from existing BERT could be more efficient. This paper proposes a set of techniques to transfer monolingual BERT to other languages. It includes adversarial embeddings alignment, MLM loss with unsupervised word-level cross-lingual signal from bitext, and reordering layer. It considers transferring English BERT to two languages: Indonesian and Chinese. It shows reasonably good monolingual downstream performance while more efficient than training from scratch. Pros It presents various components for transferring BERT, and presents ablation study on some of the components and modeling decisions. Cons While it discusses closely related work like Artetxe et al. (2020) and Tran (2020), and certain ablation is presented in the appendix, downstream performance comparison against prior work is not presented. Additionally, while this paper proposes a number of techniques, it’s unclear how much each technique contributes and the necessity of each proposed component. For example, is the proposed adversarial embedding alignment better than the separate vocabulary approach in Tran (2020)? Without grounding this work in terms of prior work in the experiment, it is hard to assess its contribution in pushing the progress on the main research problem. The model description is a bit hard to follow. Questions during rebuttal period Why pick English BERT as the source model as opposed to multilingual BERT? Reasons for score Overall, I advocate rejecting. While I find the idea interesting, it is unclear whether it is necessary as no apple-to-apple comparison against previous work is presented. Hopefully the authors can clarify and address my concern in the rebuttal period. After revision Thank you for answering my questions! However, I still find that the current version does not fully address my concern. I would recommend the authors include baselines of Artetxe et al. (2020) and Tran (2020), and fine-tuning mBERT in the main table in future revision.
ICLR
Title Cross-lingual Transfer Learning for Pre-trained Contextualized Language Models Abstract Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant. In this work, building upon the recent works connecting cross-lingual transfer learning and neural machine translation, we thus propose a novel crosslingual transfer learning framework for PrLMs: TRELM. To handle the symbol order and sequence length differences between languages, we propose an intermediate “TRILayer” structure that learns from these differences and creates a better transfer in our primary translation direction, as well as a new cross-lingual language modeling objective for transfer training. Additionally, we showcase an embedding aligning that adversarially adapts a PrLM’s non-contextualized embedding space and the TRILayer structure to learn a text transformation network across languages, which addresses the vocabulary difference between languages. Experiments on both language understanding and structure parsing tasks show the proposed framework significantly outperforms language models trained from scratch with limited data in both performance and efficiency. Moreover, despite an insignificant performance loss compared to pre-training from scratch in resourcerich scenarios, our transfer learning framework is significantly more economical. 1 INTRODUCTION Recently, the pre-trained contextualized language model has greatly improved performance in natural language processing tasks and allowed the development of natural language processing to extend beyond the ivory tower of research to more practical scenarios. Despite their convenience of use, PrLMs currently consume and require increasingly more resources and time. In addition, most of these PrLMs are concentrated in English, which prevents the users of different languages from enjoying the fruits of large PrLMs. Thus, the task of transferring the knowledge of language models from one language to another is an important task for two reasons. First, many languages do not have the data resources that English uses to train such massive and data-dependent models. This causes a disparity in the quality of models available to English users and users of other languages. Second, languages share many commonalities - for efficiency’s sake, transferring knowledge between models rather than wasting resources training new ones is preferable. Multilingual PrLMs (mPrLMs) also aim to leverage languages’ shared commonalities and lessen the amount of language models needed, but they accomplish this by jointly pre-training on multiple languages, which means when they encounter new languages, they need to be pre-trained from scratch again, which causes a waste of resources. This is distinct from using TreLM to adapt models to new languages because TreLM foregoes redoing massive pre-training and instead presents a much more lightweight approach for transferring a PrLM. mPrLMs can risk their multilingualism and finetune on a specific target language, but we will demonstrate that using TreLM to transfer an mPrLM actually leads to better performance than solely finetuning. 
Therefore, in order to allow more people to benefit from PrLMs, we aim to transfer the knowledge stored in English PrLMs to models for other languages. The differences in training for new languages with mPrLMs and TRELM are shown in Figure 1. Machine translation, perhaps the most common cross-lingual task, is the task of automatically converting source text in one language to text in another language; that is, the machine translation model converts an input consisting of a sequence of symbols in some language into a sequence of symbols in another language, i.e., it follows a sequence-to-sequence paradigm. Language has been defined as “a sequence that is an enumerated collection of symbols in which repetitions are allowed and order does matter” (Chomsky, 2002). From this definition, we can derive three important differences in the sequences of different languages: symbol sets, symbol order, and sequence length, which can also be seen as three challenges for machine translation and three critical issues that we need to address in migrating a PrLM across languages. In this work, to resolve these critical differences in language sequences, we propose a novel framework that enables rapid cross-lingual transfer learning for PrLMs and reduces loss when only limited monolingual and bilingual data are available. To address the first aforementioned issue, symbol sets, we employ a new shared vocabulary and adversarially align our target embedding space with the raw embeddings of the original PrLMs. For the symbol order and sequence length issues, our approach draws inspiration from neural machine translation methods that overcome the differences between languages (Bahdanau et al., 2014), and we thus propose a new cross-lingual language modeling objective, CdLM, which tasks our model with predicting, from a source-language sentence, the tokens of its parallel sentence in the target language. To facilitate this, we also propose a new “TRILayer” structure, which acts as an intermediary layer that evenly splits our model’s encoder layer set into two halves and serves to convert the source representations to the length and order of the target language. Using parallel corpora for a given language pair, we train two models (one in each translation direction) initialized with the desired pre-trained language model’s parameters. Combining the first half of our target-to-source model’s encoder layer set and the second half of our source-to-target model’s encoder layer set, we are thus able to create a full target-to-target language model. During training, we use three separate phases for the proposed framework, in which combinations of Masked Language Modeling (MLM), the proposed CdLM, and other secondary language modeling objectives are used. We conduct extensive experiments on Chinese and Indonesian, as well as German and Japanese (shown in Table 10 in the appendix), in challenging situations with limited data, transferring knowledge from English PrLMs. On several natural language understanding and structure parsing tasks, BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) PrLM models that we migrate using our proposed framework improve downstream task performance compared to monolingual models trained from scratch and models pre-trained in a multilingual setting. Moreover, statistics show that our framework also has advantages in terms of training cost.
2 RELATED WORK Because of neural networks’ reliance on heavy amounts of data, transfer learning has been an increasingly popular method of exploiting otherwise irrelevant data in recent years. It has seen many applications and has been used particularly often in Machine Translation (Zoph et al., 2016; Dabre et al., 2017; Qi et al., 2018; Nguyen & Chiang, 2017; Gu et al., 2018; Kocmi & Bojar, 2018; Neubig & Hu, 2018; Kim et al., 2019; Aji et al., 2020), in which transfer learning is generally used to improve translation performance in a low resource scenario using the knowledge of a model trained in a high resource scenario. In addition to cross-lingual situations, transfer learning has also been applied to adapt across domains in the POS tagging (Schnabel & Schütze, 2013) and syntactic parsing (McClosky et al., 2010; Rush et al., 2012) tasks, for example, as well as specifically for adapting language models to downstream tasks (Chronopoulou et al., 2019; Houlsby et al., 2019). One particular difference between our method and many transfer learning methods is that we do not exactly use the popular ”Teacher-Student” framework of transfer learning, which is particularly often used in knowledge distillation (Hinton et al., 2015; Sanh et al., 2020) - transferring knowledge from a larger model to a smaller model. We instead use two ”student” models, and unlike traditional methods, these student models do not share a target space with their teacher (the language is different), and their parameters are initialized with the teacher’s parameters rather than being probabilistically guided by the teacher during training. When using transfer learning for cross-lingual training, there have been various solutions for the vocabulary mismatch. Zoph et al. (2016) did not find vocabulary alignment to be necessary, while Nguyen & Chiang (2017) and Kocmi & Bojar (2018) used joint vocabularies, and Kim et al. (2019) made use of cross-lingual word embeddings. One particular work that inspired us is that of Lample et al. (2018), who also used an adversarial approach to align word embeddings without any supervision while achieving competitive performance for the first time. This succeeded the work of Zhang et al. (2017), who also used an adversarial method but did not achieve the same performance. Also like our aligning method, Xu et al. (2018) took advantage of the similarities in embedding distributions and cross-lingually transferred monolingual word embeddings by simultaneously optimizing based on distributional similarity in the embedding space and the back-translation loss. Several works have also explored adapting the knowledge of large contextualized pre-trained language models to more languages, which pose a much more complicated problem compared to transferring non-contextualized word embeddings. The previous mainstream approach for accommodating more languages is using mPrLMs. Implicitly joint multilingual models, such as m-BERT (Devlin et al., 2019), XLM (Conneau & Lample, 2019), XLM-R (Conneau et al., 2019), and mBART (Liu et al., 2020), are usually evaluated on multi-lingual benchmarks such as XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020), while some works use bilingual dictionaries or sentences for explicit cross-lingual modeling with mPrLMs (Schuster et al., 2019; Mulcaire et al., 2019; Liu et al., 2019a; Cao et al., 2020). Transferring monolingual PrLMs, another research branch, is relatively new. Artetxe et al. 
(2020) presented a monolingual transformer-based masked language model that was competitive with multilingual BERT when transferred to a second language. To facilitate this, they did not rely on a shared vocabulary or joint training (to which multilingual models’ performance is often attributed) and instead simply learned a new embedding matrix through MLM in the new language while freezing the parameters of all other layers. Tran (2020) used a similar approach, though instead of randomly initialized embeddings, he used a sparse word translation matrix on English embeddings to create word embeddings in the target language, reducing the training cost of the model. 3 TRELM Cross-lingual Transfer Learning for Language Modeling (TRELM) is a framework that rapidly migrates existing PrLMs. In this framework, the embedding space of a source language is linearly aligned with that of a target language using an adversarial embedding alignment, which we experimentally verified to be effective due to shared spatial structure similarities (refer to Appendix A.1 for details). Leveraging joint learning, we propose a novel pre-training objective, CdLM, and unify it with MLM into one format. With regard to model structure, we propose TRILayer, an intermediary transfer layer, to support language conversion during the CdLM training process. 3.1 TRILAYER AND CdLM For the disparities in the symbol sets of different languages and different pre-trained models, we employ embedding space alignment, while for the issues of symbol order and sequence length, unlike previous work, we do not assume that the model can implicitly learn these differences; instead, we leverage language embeddings and explicit alignment information and propose a novel Cross-Lingual Language Modeling (CdLM) training objective and a Transfer Learning Intermediate Layer (TRILayer) structure as a pivot layer in the model to bridge the differences between the two languages. To clearly explain our training approach, we take the popular PrLM BERT as a basis for introduction. In the original BERT (as shown in Figure 5(a)), the Transformer (Vaswani et al., 2017) is taken as the backbone of the model, which takes tokens and their positions in a sequence as input before encoding this sequence into a contextualized representation using multiple stacked multi-head self-attention layers. During the pre-training process, BERT predominantly adopts an MLM training objective, in which a [MASK] (also written as [M]) token is used to replace tokens in the sequence selected with a predetermined probability, and the original token is predicted as the gold target. Formally speaking, given a sentence $X = \{x_1, x_2, ..., x_T\}$ and $\mathcal{M}$, the set of masked positions, the training loss $\mathcal{L}_{\mathrm{MLM}}$ for the MLM objective is: $\mathcal{L}_{\mathrm{MLM}}(\theta_{LM}) = -\sum_{i=1}^{|\mathcal{M}|} \log P_{\theta_{LM}}(x_{\mathcal{M}_i} \mid X_{\backslash\mathcal{M}})$, where $\theta_{LM}$ are the parameters of BERT, $|\mathcal{M}|$ is the size of the set $\mathcal{M}$, and $X_{\backslash\mathcal{M}}$ indicates the sequence after masking. An example of MLM training is shown in the top-left region of Figure 5. Much work in the field of machine translation suggests that the best way to transfer learning across languages is through translation learning, because the machine translation model must address all three of the above-described language differences in the training process. Therefore, we take inspiration from the design of machine translation, especially non-autoregressive machine translation, and propose a Cross-Lingual Language Modeling (CdLM) objective.
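Before turning to CdLM, here is a minimal sketch of the MLM loss defined above (assuming PyTorch and the common convention of marking unmasked positions with a label of -100), where the cross-entropy is computed only over masked positions:

```python
# Sketch: MLM loss over masked positions only.
import torch
import torch.nn.functional as F

def mlm_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, vocab); labels: original ids at masked positions,
    # -100 everywhere else, so unmasked positions are ignored by cross_entropy.
    vocab_size = logits.size(-1)
    return F.cross_entropy(logits.view(-1, vocab_size), labels.view(-1),
                           ignore_index=-100)

logits = torch.randn(2, 8, 100)
labels = torch.full((2, 8), -100, dtype=torch.long)
labels[0, 3], labels[1, 5] = 17, 42     # two masked positions with gold tokens
print(mlm_loss(logits, labels))
```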
CdLM is just like a traditional language modeling objective, except across languages: given an input of source tokens, it generates tokens in a separate target language. We describe the differences between CdLM and related MLM variants (such as Translation Language Modeling (TLM) and BRidge Language Modeling (BRLM)) in Appendix A.4. With this proposed objective, we aim to make as few changes as possible to the existing PrLM and thus introduce a Translation/Transfer Intermediate Layer (“TRILayer”) structure, which bridges two opposing half-models to create our final model. First, in the modified version of BERT for transfer learning, we add a language embedding $E_{lng}$, following the practice of Conneau & Lample (2019), to indicate the current language being processed by the model. This is important because the model handles both the source and target languages simultaneously in two of our three training phases (described in the next subsection). The new input embedding is: $E_{inp} = E_{wrd} + E_{seg} + E_{pos} + E_{lng}$, where $E_{wrd}$, $E_{seg}$, and $E_{pos}$ are the word (token) embedding, segment embedding, and position embedding, respectively. Next, we denote $N$ as the number of stacked Transformer layers ($L = \{l_1, l_2, ..., l_N\}$) in BERT and split the BERT layers into two halves $L_{\leq \frac{N}{2}} = \{l_1, ..., l_{\frac{N}{2}}\}$ and $L_{> \frac{N}{2}} = \{l_{\frac{N}{2}+1}, l_{\frac{N}{2}+2}, ..., l_N\}$. The TRILayer is placed between the two halves (making the total number of layers $N + 1$) and functions as a pivot. In the $L_{\leq \frac{N}{2}}$ half, the input embedding is encoded by its Transformer layers into hidden states $H_i = \mathrm{TRANSFORMER}_i(H_{i-1})$, in which $H_0 = E_{inp}$ and $\mathrm{TRANSFORMER}_i$ indicates the $i$-th Transformer layer in the model. Before the outputs of the $L_{\leq \frac{N}{2}}$ half are fed into the TRILayer, the source hidden representation $H_{\frac{N}{2}}$ is reordered according to a new order $O$. During CdLM training, for a source language sentence $X = \{x_1, x_2, ..., x_T\}$, a possible translation sentence $Y = \{y_1, y_2, ..., y_{T'}\}$ is provided. To find the new order, explicit alignment information between the transfer source and target sentences is obtained using an unsupervised external aligner tool. We define the source-to-target alignment pair set as: $\mathcal{A}_{X \to Y} = \mathrm{ALIGN}(X, Y) = \{(x_{\mathrm{ALNIDX}(y_1)}, y_1), (x_{\mathrm{ALNIDX}(y_2)}, y_2), ..., (x_{\mathrm{ALNIDX}(y_{T'})}, y_{T'})\}$, where $\mathrm{ALNIDX}(\cdot)$ is a function that returns the alignment index in the source language, or $x_{null}$ when there is no explicit alignment between a token in the target language and any source language token. $x_{null}$ represents a special placeholder token [P] that is always appended to the inputs. Finally, the source hidden representation $H_{\frac{N}{2}}$ is reordered according to the new order $O = \{\mathrm{ALNIDX}(y_1), \mathrm{ALNIDX}(y_2), ..., \mathrm{ALNIDX}(y_{T'})\}$ from the alignment set $\mathcal{A}_{X \to Y}$, creating $H^{O}_{\frac{N}{2}}$. Thus, the resultant hidden representation $H^{O}_{\frac{N}{2}}$ is in the order of the target language and is consistent with the target sequence in length, making it usable for language modeling prediction. Unfortunately, the position information is lost in reordering. To combat this, the position embedding and language embedding are reintegrated as follows: $H_{TL} = \mathrm{TRANSFORMER}_{TL}(H^{O}_{\frac{N}{2}} + E_{lng_Y} + E_{pos})$, where $H_{TL}$ is the output of the TRILayer, $\mathrm{TRANSFORMER}_{TL}$ is the Transformer structure inside the TRILayer, and $E_{lng_Y}$ is the target language embedding. Next, $H_{TL}$ is encoded in the $L_{> \frac{N}{2}}$ half as done for the $L_{\leq \frac{N}{2}}$ half (let $H_{\frac{N}{2}} = H_{TL}$ for the $L_{> \frac{N}{2}}$ half) to predict the final full sequence of the target language. The model is trained to minimize the loss $\mathcal{L}_{\mathrm{CdLM}}$: $\mathcal{L}_{\mathrm{CdLM}}(\theta_{LM}) = -\sum_{i=1}^{T'} \log P_{\theta_{LM}}(y_i \mid X, \mathcal{A}_{X \to Y})$.
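The reordering step that feeds the TRILayer can be written compactly as a gather over alignment indices; the following sketch (hypothetical shapes and names; the placeholder token [P] is assumed to occupy the last source position) illustrates it in PyTorch.

```python
# Sketch: alignment-based reordering of lower-half hidden states for the TRILayer.
import torch

def reorder_for_trilayer(h_src, align_idx, e_lng_tgt, e_pos):
    # h_src:     (batch, T_src, hidden) output of the lower half, H_{N/2}
    # align_idx: (batch, T_tgt) ALNIDX indices; unaligned targets point at the
    #            trailing placeholder [P] position of h_src
    # e_lng_tgt: (hidden,) target language embedding
    # e_pos:     (T_tgt, hidden) position embeddings, reintegrated after reorder
    idx = align_idx.unsqueeze(-1).expand(-1, -1, h_src.size(-1))
    h_reordered = torch.gather(h_src, dim=1, index=idx)   # H^O_{N/2}
    return h_reordered + e_lng_tgt + e_pos                # TRILayer input

h_src = torch.randn(2, 6, 16)                             # position 5 is [P]
align_idx = torch.tensor([[2, 0, 4, 5], [1, 3, 5, 5]])
out = reorder_for_trilayer(h_src, align_idx, torch.zeros(16), torch.zeros(4, 16))
print(out.shape)  # torch.Size([2, 4, 16])
```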
To enable MLM and CdLM to train models simultaneously rather than through successive optimization, we provide a unified view of MLM and CdLM language modeling: $\mathcal{L}_{\mathrm{ULM}}(\theta_{LM}) = -\sum_{i=1}^{T_{max}} \mathbb{1}(i \in C) \log P_{\theta_{LM}}(w_i \mid S, \mathcal{A})$, where $T_{max}$ denotes the maximum sequence length for language modeling, $S$ is the input sequence, $w_i$ is the $i$-th token in the output sequence $W$, $C$ is the set of positions to be predicted, and $\mathcal{A}$ is the alignment between the input and output sequences. Both the input and output sequences are padded to the maximum sequence length $T_{max}$ during training. $\mathbb{1}(i \in C)$ represents the indicator function and equals 1 when the $i$-th position is in the set of positions to be predicted and 0 otherwise. In MLM, $S = X_{\backslash C}$, $\mathcal{A} = \{(1, 1), (2, 2), ..., (T_{max}, T_{max})\}$ is a successive (identity) alignment, and $W = X$, while in CdLM, $S = X$, $\mathcal{A} = \mathcal{A}_{X \to Y}$, and $W = Y$. Due to the unified language modeling abstractions of MLM and CdLM, the input and output forms, as well as the internal logic of the models, are the same. Therefore, models can be trained with the two objectives in the same mini-batch, which enhances the stability of transfer training. 3.2 TRIPLE-PHASE TRAINING In our TRELM framework, the whole training process is divided into three phases with different purposes but the same design goal: minimize the number of parameter updates as much as possible to speed up convergence and enhance training stability. The three phases are commonality training, transfer training, and language-specific training. In the commonality training phase, only the MLM objective is used, while in the transfer training phase, the CdLM and target-language MLM objectives are both used at the same time, and in the final language-specific training phase, target-language MLM and other secondary language modeling objectives are adopted. Commonality Training Though languages are very different on the surface, they also share a lot of underlying commonalities, often called linguistic universals or cross-linguistic generalizations. We therefore take advantage of these commonalities and jointly learn the transfer source and target languages. In this phase, the parameters of the position embedding, segment embedding, and Transformer layers are initialized with the original BERT, the TRILayer is initialized with the parameters of Transformer layer $l_{\frac{N}{2}}$, the word embedding is initialized with the output of the adversarial embedding alignment, and orthogonal weight initialization is adopted for the language embedding. For this phase, the model is trained by joint MLM with monolingual inputs from both the source and target languages. Moreover, in this training process, to make convergence fast and stable, the parameters of BERT’s backbone (Transformer) layers are fixed; only the embeddings and the TRILayer are updated by gradient-based optimization of the joint MLM loss. The final model obtained in this phase is denoted as $\theta^{ct}_{LM}$. Transfer Training Since the model is not pre-trained from scratch, making the model aware of changes in its inputs is a critical factor for a maximally rapid and accurate migration in the case of limited data. Since there is not enough monolingual data in the target language to allow the model to adapt to the new language, we use the supervisory signal from the two languages’ differences and leverage parallel corpora to directly train the model. Specifically, we split the original BERT Transformer layers into two halves.
With a parallel corpus from the source language to the target language and one from the target language to the source language, we train two corresponding models, both of which are initialized using the parameters learned in the previous phase. In the source-to-target model, only the upper half of the encoder layers is trained, and the lower half is kept fixed, while the converse is true for the target-to-source model. The TRILayer then provides cross-lingual order and length adjustment, which is similar to the behavior of a neural machine translation model. Thus, we create two reciprocal models: one whose upper half can handle the target language, and one whose lower half can handle it, which we connect via the TRILayer. Finally, the two trained models are combined as $\theta^{tt}_{LM}$. We describe the full procedure in Algorithm 1.
Algorithm 1 Transfer Training of Pre-trained Contextualized Language Models
Input: The commonality pre-trained model parameters $\theta^{ct}_{LM}$, languages $L = \{lng_X, lng_Y\}$ (with $L_0 = lng_X$, $L_1 = lng_Y$), parallel training set $P = \{(X^{L_0}_i, X^{L_1}_i)\}_{i=1}^{|P|}$, number of training steps $K$
1: for $j$ in $0, 1$ do
2: Initialize model parameters $\theta^{L_j \to L_{(1-j)}}_{LM} \leftarrow \theta^{ct}_{LM}$
3: if $j == 0$ then
4: Fix the parameters of the $L_{\leq \frac{N}{2}}$ half of $\theta^{L_j \to L_{(1-j)}}_{LM}$
5: else
6: Fix the parameters of the $L_{> \frac{N}{2}}$ half of $\theta^{L_j \to L_{(1-j)}}_{LM}$
7: end if
8: for step in $1, 2, 3, ..., K$ do
9: Sample batch $(X^{L_j}, X^{L_{(1-j)}})$ from $P$
10: Alignment information: $\mathcal{A}_{L_j \to L_{(1-j)}} \leftarrow \mathrm{ALIGN}(X^{L_j}, X^{L_{(1-j)}})$
11: CdLM loss: $\mathcal{L}_{\mathrm{CdLM}} \leftarrow -\sum \log P_{\theta^{L_j \to L_{(1-j)}}_{LM}}(X^{L_{(1-j)}} \mid X^{L_j}, \mathcal{A}_{L_j \to L_{(1-j)}})$
12: Masked version of $X^{L_1}$: $X^{L_1}_{\backslash\mathcal{M}} \leftarrow \mathrm{MASK}(X^{L_1})$
13: MLM loss: $\mathcal{L}_{\mathrm{MLM}} \leftarrow -\sum \log P_{\theta^{L_j \to L_{(1-j)}}_{LM}}(X^{L_1}_{\mathcal{M}} \mid X^{L_1}_{\backslash\mathcal{M}})$
14: CdLM+MLM update: $\theta^{L_j \to L_{(1-j)}}_{LM} \leftarrow \mathrm{optimizer\_update}(\theta^{L_j \to L_{(1-j)}}_{LM}, \mathcal{L}_{\mathrm{CdLM}}, \mathcal{L}_{\mathrm{MLM}})$
15: end for
16: end for
17: Combine the two obtained models into $\theta^{tt}_{LM}$ by taking the $L_{> \frac{N}{2}}$ half parameters from model $\theta^{L_0 \to L_1}_{LM}$ and the $L_{\leq \frac{N}{2}}$ half parameters from model $\theta^{L_1 \to L_0}_{LM}$, and averaging the other parameters (such as the embedding and TRILayer parameters) of the two models
Output: Learned model $\theta^{tt}_{LM}$
Language-specific Training During the language-specific training phase, we only use the monolingual corpus of the target language and further strengthen the target-language features of the model obtained in the transfer training phase. We accomplish this by using the MLM objective and other secondary objectives such as Next Sentence Prediction (NSP). 4 EXPERIMENTS In this section, we discuss the details of the experiments undertaken for this work. We conduct experiments based on English PrLMs1. We transfer in the English-to-Chinese and English-to-Indonesian directions for the purpose of comparing with recent previous work. We describe the training details and parameters in Appendix A.5. From English to Chinese and English to Indonesian, we transfer two pre-trained contextualized language models: BERT and RoBERTa. Our performance evaluation of the migrated models is mainly conducted on two types of downstream tasks: language understanding and language structure parsing. Please refer to Appendix A.6 for introductions of the tasks and baselines and Appendix A.7 for an ablation study. We note that the comparisons between models trained using TRELM and the monolingual and multilingual PrLMs trained from scratch on the target language (see Table 1) are only for illustrating the relative performance loss of the model produced by TRELM. These models are not directly comparable, as we intentionally use less data to train models when using TRELM. 1 Our code is available at https://github.com/agcbi2017/TreLM.
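Returning to the combination step in line 17 of Algorithm 1, the following sketch shows how it could be carried out at the state-dict level (hypothetical parameter naming of the form "encoder.layer.{i}."; real names depend on the implementation).

```python
# Sketch: combining the two half-trained models (Algorithm 1, line 17).
def combine_half_models(sd_src2tgt, sd_tgt2src, num_layers):
    combined = {}
    for name, tensor in sd_src2tgt.items():
        if name.startswith("encoder.layer."):
            layer_idx = int(name.split(".")[2])
            # lower half (layers 0 .. N/2-1) comes from the target-to-source
            # model, upper half (layers N/2 .. N-1) from the source-to-target one
            donor = sd_tgt2src if layer_idx < num_layers // 2 else sd_src2tgt
            combined[name] = donor[name]
        else:
            # embeddings, TRILayer, and heads: average the two models' parameters
            other = sd_tgt2src[name]
            combined[name] = ((tensor + other) / 2
                              if tensor.is_floating_point() else tensor)
    return combined
```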
Continuing to pre-train the PrLMs on the target language would also obviously further improve their performance, but this is not our main focus. Language Understanding We first compare the PrLMs transferred by TRELM alongside the results of the existing monolingual pre-trained BERT-base-chinese and the multilingual pre-trained BERT-base-multilingual in Table 1 using the CLUE benchmark. When comparing within the same model architecture, taking BERT as an example, our model TRI-BERT-base exceeds m-BERT-base and BERT-small and is slightly weaker than the original BERT-base. Compared with BERT-small, which is trained from scratch for a longer time, our TRI-BERT-base generally achieves better results on these NLU tasks. This demonstrates that, because of the commonalities between languages, models for languages with relatively few resources can benefit from language models pre-trained on languages with richer resources, which confirms our cross-lingual transfer learning framework’s effectiveness. m-BERT is another potential language model migration scheme and has the advantage of supporting multiple languages at the same time; however, in order to be compatible with multiple languages, the unique characteristics of each language are neglected. Our TRI-BERT, which is built on top of BERT-base, instead focuses on and highlights language differences during the transfer learning process, which leads to an increase in performance compared to m-BERT. When TRI-BERT and TRI-RoBERTa have the same model size, TRI-RoBERTa outperforms TRI-BERT, which is consistent with the performance differences between the original RoBERTa and BERT, indicating that our migration approach maintains the performance advantages of PrLMs.
Table 3: Dependency SRL results on the CoNLL-2009 Chinese benchmark.
Models | P | R | F1
(Cai et al., 2018) | 84.7 | 84.0 | 84.3
+BERT-base | 86.86 | 87.48 | 87.17
+m-BERT-base | 85.17 | 85.53 | 85.34
+TRI-BERT-base | 86.15 | 85.58 | 85.86
+TRI-RoBERTa-base | 87.08 | 86.99 | 87.03
+TRI-RoBERTa-base (w/o CdLM) | 85.77 | 85.62 | 85.69
[Figure 2: Language modeling effects vs. parallel data size on the evaluation set. Axes: parallel data size (0 to 1M), BPW, and Sem-F1.]
Language Structure Parsing We report results on dependency parsing for Chinese and Indonesian in Table 2. As shown in the results, the baseline model is greatly improved by adding a PrLM. In Chinese, the performance of BERT-base is far superior to m-BERT-base, which highlights the importance of the unique nature of the language for downstream tasks, especially for refined structural analysis tasks. In Indonesian, IndoBERT (Wilie et al., 2020) performs worse than m-BERT, which we suspect is due to IndoBERT’s insufficient pre-training. We also compare TRI-BERT-base and IndoBERT-base on Indonesian, whose ready-to-use language resources are relatively small compared to English. We find that although pre-training PrLMs on the available corpora is possible, because of the size of the language resources, engineering implementation, etc., our migrated model is more effective than the model pre-trained from scratch. This shows that migrating from ready-made language models, produced from large-scale language training and extensively validated by the community, is more effective than pre-training on relatively small and limited language resources. In addition, we also conduct experiments on Chinese SRL for these pre-trained and migrated models.
mPrLMs are another important and competitive approach that can adapt to cross-lingual PrLM applications, so we also include several mPrLMs in our comparison on dependency parsing. Specifically, we used XLM, a monolingual and multilingual PrLM pre-training framework, as our basis. For TRELM, we used XLM-en-2048, officially provided by Conneau & Lample (2019), as the source model. The data amount used and the number of training steps are consistent with TRI-BERT/TRIRoBERTa. In mPrLM, we combined EN, ID, and ZH sentences (including monolingual and parallel sentences) together (10M sentences in total) to train an EN-ID-ZH mPrLM with MLM and TLM objectives. The performance comparison of these three PrLMs on the dependency parsing task is shown in the lower part of Table 2. From the results, we see mPrLMs pre-trained from scratch have no special performance advantage over TRELM when corpus size is constant, and especially when not using the cross-lingual transfer learning objective TLM, which models parallel sentences. In fact, our TRI-XLM-en-2048 solidly outperforms its two multilingual XLM counterparts. Monolingual PrLMs generally outperform mPrLMs, which likely leads to the performance advantages shown with monolingual migration. Additionally, like our TRELM, mPrLMs can also finetune on only the target language to improve performance, and leveraging TRELM to transfer an mPrLM leads to even further gains, as seen in Table 9 in the appendix. While the two approaches can compete with each other, they have their own advantages in general. In particular, TRELM is more suitable for transferring additional languages that were not considered in the initial pre-training phase and for low-resource scenarios, while mPrLMs have the advantage of being able to train and adapt to multiple languages at once. In Table 3, we compared a model migrated without CdLM to the full one. To compensate for the removal of CdLM, we added a monolingual corpus with the same size as the parallel corpora and trained the model with an extra 80K steps, but despite using more target monolingual data and training steps, the performance was still much better when CdLM was included. 5 DISCUSSION Effects of Parallel Data Scale Since the proposed TRELM framework relies on parallel corpora to learn the language differences explicitly, the sizes of the parallel corpora used are also of concern. We explored the influence of different parallel corpus sizes on the performance of the models transferred with the TRI-RoBERTa-base architecture. The variation curve of BPW score with the size of parallel data is shown in Figure 2. We see that with increasingly more parallel data, BPW gradually decreases, but this decrease slows as the data grows. The effect of the parallel corpora for cross-lingual transfer therefore has a upper bound because when the parallel corpora reaches a certain size, the errors from the alignment extraction tools cannot be ignored, and additionally, due to how lightweight the TRILayer structure is, TRILayers can only contain so much cross-lingual transfer information, which further restricts the growth of the migration performance. Pre-training Cost vs. Migration Training Cost The training cost is an important factor for choosing whether to pre-train from scratch or to migrate from an existing PrLM. We listed the training data size, model parameters, training hardware, and training time of several public PrLM models and compared them with our models. The comparisons are shown in Table 4. 
Although the training hardware and engineering implementations of various PrLM models differ, this can still be used as a general reference. When model size is the same, our proposed transfer learning is much faster than pre-training from scratch, and less data is used in the transfer learning process. In addition, the total training time of our large-model migration is less than that of even base-model pre-training when the hardware is kept the same. Therefore, the framework we propose can be used as a good supplementary scheme for PrLMs in situations where time or computing resources are restricted.
[Table 4: Pre-training cost comparison. Columns: Model, Data, BSZ, Steps, Params, Hardware, Train Time (GPU/TPU days).]
6 CONCLUSION AND FUTURE WORK In this work, we present an effective method of transferring knowledge from a given language’s pre-trained contextualized language model to a model in another language. This is an important accomplishment because it allows more languages to benefit from the massive improvements arising from these models, which have been primarily concentrated in English. As a further plus, this method also enables more efficient model training, as languages have commonalities, and models in the target language can exploit these commonalities and quickly adopt these common features rather than learning them from scratch. In future work, we plan to use our framework to transfer other models such as ALBERT and models for more languages. We also aim to develop an unsupervised cross-lingual transfer learning objective to remove the reliance on parallel sentences. A APPENDIX A.1 ADVERSARIAL EMBEDDING ALIGNING Since the symbol sets of different languages are different, the first step in the cross-lingual migration of PrLMs is to supplement or even replace their vocabularies. In our proposed framework, to make the best use of the commonalities between languages, we choose to use a shared vocabulary covering multiple languages rather than replace the original language vocabulary with one for the new language. In addition, current PrLMs generally adopt a subword vocabulary in order to better mitigate out-of-vocabulary (OOV) problems caused by limited vocabulary size. To accommodate the introduction of a shared vocabulary, it is necessary to jointly re-train the subword model to ensure that common words in different languages are segmented consistently, which leads to the problem that some tokens in the newly acquired subword vocabulary differ from those in the original subword vocabulary, even though they belong to the same language. To address this issue, we consider the most complicated case, in which the vocabulary is completely replaced by a new one. Consequently, we assume that there are two embedding spaces: one is the embedding of the original vocabulary, which is well-trained in the language pre-training process, and the other is the embedding of the new vocabulary, yet to be trained. When considering raw embeddings and non-contextualized embeddings (e.g., word2vec), it is easy to see that their training objectives are similar in theory. The only differences are the addition of context and the change in model structure to accommodate language prediction. Despite these differences, non-contextualized embeddings can be used to simulate the raw embeddings in a PrLM that we aim to replace (refer to Appendix A.2 for a detailed explanation).
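The joint re-training of the subword model described above can be done, for example, with the HuggingFace tokenizers library; the sketch below (hypothetical file names; the vocabulary and alphabet sizes follow the English-to-Chinese setting in Appendix A.5) trains a single shared WordPiece vocabulary on the joint language data.

```python
# Sketch: training a shared WordPiece vocabulary on joint English+Chinese data.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=False)
tokenizer.train(
    files=["en_wiki.txt", "zh_wiki.txt", "zh_news.txt"],  # hypothetical corpora
    vocab_size=80000,        # 80K vocabulary, English-to-Chinese setting
    limit_alphabet=30000,    # cap on the character alphabet (30K)
    min_frequency=2,
)
tokenizer.save_model(".")    # writes the shared vocab file to the current dir
```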
Although the two embedding spaces we consider are similar in structure, they may be at different positions in the whole real embedding space, so an extra alignment process is required. Moreover, although common tokens may exist, due to the inconsistent token granularity from using byte-level byte-pair encoding (BBPE) (Radford et al., 2019), a token matching across the two embedding spaces cannot be utilized for embedding space alignment, as it is likely to represent different meanings. Therefore, inspired by Lample et al. (2018), we present an adversarial approach for aligning the word2vec embedding space to the PrLM’s raw embedding space without supervision. With this approach, we aim to minimize the differences between the two embedding spaces brought about by their different forms of similarity. We define $U = \{u_1, u_2, ..., u_m\}$ and $V = \{v_1, v_2, ..., v_n\}$ as the two embedding spaces of $m$ and $n$ tokens from the PrLM and from word2vec training, respectively. In the adversarial training approach, a linear mapping $W$ is trained to make the spaces $WV = \{Wv_1, Wv_2, ..., Wv_n\}$ and $U$ as close as possible, while a discriminator $D$ is employed to discriminate between tokens randomly sampled from the spaces $WV$ and $U$. Let $\theta_{adv}$ denote the parameters of the adversarial training model, and let the probabilities $P_{\theta_{adv}}(1(z) \mid z)$ and $P_{\theta_{adv}}(0(z) \mid z)$ indicate whether or not the sampling-source prediction is the same as the real space of a vector $z$. The discrimination training loss $\mathcal{L}_D(\theta_D \mid W)$ and the mapping training loss $\mathcal{L}_W(W \mid \theta_D)$ are then defined as: $\mathcal{L}_D(\theta_D \mid W) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_{adv}}(1(Wv_i) \mid Wv_i) - \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_{adv}}(1(u_i) \mid u_i)$ and $\mathcal{L}_W(W \mid \theta_D) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_{adv}}(0(Wv_i) \mid Wv_i) - \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_{adv}}(0(u_i) \mid u_i)$, where $\theta_D$ are the parameters of the discriminator $D$, which is implemented as a multilayer perceptron (MLP) with two hidden layers and Leaky-ReLU as the activation function. During the adversarial training, the discriminator parameters $\theta_D$ and the mapping $W$ are optimized alternately with the discrimination training loss and the mapping training loss, respectively. To enhance the effect of embedding space alignment, we adopted the same iterative refinement and cross-domain similarity local scaling techniques as Lample et al. (2018). While both embedding spaces in Lample et al. (2018) can be updated by gradients, we consider $U$ the goal spatial structure and hence fix $U$ throughout the training process, updating only $W$ to better align $V$. A.2 ANALYZING NON-CONTEXTUALIZED EMBEDDINGS AND PrLMS’ RAW EMBEDDINGS Bidirectional PrLMs such as BERT (Devlin et al., 2019) use Masked Language Modeling (MLM) as the training objective, in which the model is required to predict a masked part of the sentence. This training paradigm has no essential difference from word2vec (Mikolov et al., 2013). Word2vec employed a simple single-layer perceptron network and restricted the context for the masked part to a sliding window, while recent mainstream PrLMs adopt the self-attention-based Transformer as the context encoder, which can utilize the whole sentence as context. Because of this, we speculate that BERT’s raw embeddings and word2vec embeddings have a similar nature, and that we can simulate BERT’s raw embeddings with word2vec embeddings through some special designs. To verify this, we studied an important relational property of the embeddings. Specifically, we chose BERT-base-cased’s raw embeddings and the word2vec-based fastText cc.en.300d embeddings (Grave et al., 2018) and evaluated the cosine similarity of single terms against all other terms in their vocabularies.
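Before continuing with the embedding analysis, here is a minimal PyTorch sketch of the adversarial alignment procedure from A.1 (hypothetical sizes, random placeholder embeddings, and a plain SGD schedule). Since U is fixed, only the term involving W matters for the mapping update; the real procedure additionally applies iterative refinement and CSLS as noted above.

```python
# Sketch: adversarial alignment of word2vec embeddings (V) to PrLM raw
# embeddings (U) with a linear mapping W and an MLP discriminator.
import torch
import torch.nn as nn

d = 768
W = nn.Linear(d, d, bias=False)                        # mapping applied to V
disc = nn.Sequential(nn.Linear(d, 2048), nn.LeakyReLU(0.2),
                     nn.Linear(2048, 2048), nn.LeakyReLU(0.2),
                     nn.Linear(2048, 1))                # two hidden layers + logit
opt_w = torch.optim.SGD(W.parameters(), lr=0.1)
opt_d = torch.optim.SGD(disc.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

U = torch.randn(30000, d)      # fixed PrLM raw embeddings (goal structure)
V = torch.randn(30000, d)      # word2vec subword embeddings to be mapped

for step in range(5000):
    u = U[torch.randint(0, U.size(0), (128,))]
    v = V[torch.randint(0, V.size(0), (128,))]
    # discriminator step: recognize the true source of each sampled vector
    logits = disc(torch.cat([W(v).detach(), u]))
    labels = torch.cat([torch.zeros(128, 1), torch.ones(128, 1)])
    opt_d.zero_grad(); bce(logits, labels).backward(); opt_d.step()
    # mapping step: update W so that mapped V fools the discriminator
    opt_w.zero_grad(); bce(disc(W(v)), torch.ones(128, 1)).backward(); opt_w.step()
```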
An example histogram for the term “genes” is shown in Figure 3. Examining the two types of embeddings, we found that the learned vectors, regardless of the type of similarity (semantic/syntactic/inflections/spelling/etc.) they capture, have a very similar distribution shape. This showed us that the two embedding spaces are similar, and words within them may just have different relations to each other. Thus, our work focuses on aligning the new word2vec embedding space by learning a mapping to the original embedding space to simulate the original embedding allow for a cross-lingual migration of the PrLM. To illustrate the necessity of embedding alignment, we also took out the top-50 terms closest to the term “genes” in the two embedding spaces, used principal component analysis (PCA) to reduce the vector dimension to 2, and presented it in a two-dimensional figure, as shown in Figure 4. As can be seen from the figure, due to the different language modeling architectures and contexts in FastText and BERT, corresponding points are distributed at different locations in the embedding space. This is why compatibility problems exist when we use the original non-contextualized embeddings to simulate the new embedding and hence why we need to align the embeddings. A.3 MODEL ARCHITECTURE IN TRELM A.4 MLM, TLM, BRLM, AND CdLM As stated in the original MLM objective, the model can only learn from monolingual data. Though a joint MLM training can be performed across languages, there is still a lack of explicit language cues for guiding the model in distinguishing language differences. Conneau & Lample (2019) proposed a Translation Language Modeling (TLM) objective as an extension of the MLM objective. The TLM objective leverages bilingual parallel sentences by concatenating them into single sequences as in the original BERT and predicts the tokens masked in the concatenated sequence. This encourages the model to predict the masked part in a bilingual context. Ji et al. (2020) further proposed a BRidge Language Modeling (BRLM) built on the TLM, benefiting from explicit alignment information or additional attention layers that encourage word representation alignment across different languages. These MLM variants drive models to learn explicit or implicit token alignment information across languages and have been shown effective in machine translation compared to the original MLM, but for the cross-lingual transfer learning of PrLMs, modeling the order difference and semantic equivalence in different languages is still not enough. Since both contexts in MLM variants have been exposed to the model, whether the prediction of the masked part depends on the cross-lingual context or the context of its own language is unknown, as it lacks explicit clues for cross-lingual training. In our proposed CdLM, we use sentence alignment information for explicit ordering. The model is exposed to both the transfer source and transfer target languages at the same time, during which the input is a sequence of the source language, and the prediction goal is a sequence of the target language. Thus, we convert translation into a cross-language modeling objective, which gives a clear supervision signal for cross-lingual transfer learning. A.5 TRAINING DETAILS The initial weights for the migration are BERT-base-cased, BERT-large-cased, RoBERTa-base, and RoBERTa-large, which are taken from their official sources. 
A.5 TRAINING DETAILS The initial weights for the migration are BERT-base-cased, BERT-large-cased, RoBERTa-base, and RoBERTa-large, which are taken from their official sources. We use English Wikipedia, Chinese Wikipedia, Chinese News, and Indonesian CommonCrawl corpora for the monolingual pre-training data. For all models migrated in the same direction, regardless of their original vocabulary, we used the same single vocabulary, trained on the joint language data using the WordPiece subword scheme (Schuster & Nakajima, 2012). In English-to-Chinese, the vocabulary size is set to 80K and the alphabet size is limited to 30K, while in English-to-Indonesian, the vocabulary size is set to 50K and the alphabet size is limited to 1K. With the WordPiece vocabulary, we tokenized the monolingual corpora to train non-contextualized word2vec embeddings of subwords. Using the fastText tool (Bojanowski et al., 2017) in skip-gram mode, three embedding sizes (128, 768, and 1024) were trained to be compatible with the respective pre-trained language models. In the “commonality” training phase, we sampled 1M sentences of English Wikipedia and either 1M sentences of Chinese Wikipedia or 1M sentences of Indonesian CommonCrawl for the English-to-Chinese and English-to-Indonesian models, respectively. We trained the model for 20K update steps with a total batch size of 128 and set the peak learning rate to 3e-5. For the “transfer” training phase, we sampled 1M parallel sentences from the UN Corpus (Ziemski et al., 2016) for English-to-Chinese and 1M parallel sentences from the OpenSubtitles Corpus (Lison & Tiedemann, 2016) for English-to-Indonesian. We use the fast_align toolkit (Dyer et al., 2013) to extract the tokenized subword alignments for CdLM. The two half-models are optimized over 20K update steps, and the batch size and peak learning rate are set to 128 and 3e-5, respectively. In the final phase, “language-specific” training, 2M Chinese and Indonesian sentences were sampled to update their respective models, training for 80K steps with a total batch size of 128 and an initial learning rate of 2e-5. In all the above training phases, the maximum sequence length was set to 512, weight decay was 0.01, and we used Adam (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999. In addition to our migrated pre-trained models, we also pre-trained a BERT-small2 model from scratch with data of the same size as our migration process to compare the performance differences between migration and training from scratch. For the BERT-small model, we started with the BERT-base hyper-parameters and vocabulary but shortened the maximum sequence length from 512 to 128, reduced the model’s hidden and token embedding dimension size from 768 to 256, set the batch size to 256, and extended the training to 240K steps. Our TRI-BERT-* and TRI-RoBERTa-* models all used the same amount of training data (2M target-language monolingual sentences, 1M source-language monolingual sentences, and 1M parallel sentences). BERT-small was pre-trained from scratch on only the target language, using 5M target-language sentences so that the amount of training data was the same. Compared with the original model, the TRI-* models only have an extra TRILayer added and some changes in the embedding layer. The BERT-base-chinese and m-BERT-base models were downloaded from the official repository; they were trained with 25M sentences (much more than our 5M sentences) and more training steps. 2The performance of BERT-base pre-trained from scratch with this limited data is inferior to that of BERT-small, so we do not compare it with our migrated models.
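For reference, the three training phases above can be summarized in a small configuration sketch; the values are taken directly from the text, while the surrounding training loop is left unspecified.

```python
# Hyperparameters of the three migration phases, as stated in the text.
PHASES = [
    {"name": "commonality",       "data": "1M EN + 1M ZH/ID monolingual sentences",
     "steps": 20_000, "batch_size": 128, "peak_lr": 3e-5},
    {"name": "transfer (CdLM)",   "data": "1M parallel sentences (UN / OpenSubtitles)",
     "steps": 20_000, "batch_size": 128, "peak_lr": 3e-5},
    {"name": "language-specific", "data": "2M target-language sentences",
     "steps": 80_000, "batch_size": 128, "peak_lr": 2e-5},
]

# Settings shared across all phases.
COMMON = {"max_seq_len": 512, "weight_decay": 0.01,
          "optimizer": "Adam", "betas": (0.9, 0.999)}
```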
A.6 DOWNSTREAM TASKS Following previous contextualized language model pre-training work, we evaluated the English-to-Chinese migrated language models on the CLUE benchmark. The Chinese Language Understanding Evaluation (CLUE) benchmark (Xu et al., 2020) consists of six natural language understanding tasks: Ant Financial Question Matching (AFQMC), TouTiao Text Classification for News Titles (TNEWS), IFLYTEK (CO, 2019), Chinese-translated Multi-Genre Natural Language Inference (CMNLI), Chinese Winograd Schema Challenge (WSC), and Chinese Scientific Literature (CSL), and three machine reading comprehension tasks: Chinese Machine Reading Comprehension (CMRC) 2018 (Cui et al., 2019), Chinese IDiom cloze test (CHID) (Zheng et al., 2019), and Chinese multiple-Choice machine reading Comprehension (C3) (Sun et al., 2019). We built baselines for the natural language understanding tasks by adding a linear classifier on top of the “[CLS]” token to predict label probabilities. For the extractive question answering task, CMRC, we packed the question and passage tokens together with special tokens to form the input “[CLS] Question [SEP] Passage [SEP]” and employed two linear output layers to predict the probability of each token being the start or end position of the answer span, following the practice for BERT (Devlin et al., 2019). Finally, in the multi-choice reading comprehension tasks, CHID and C3, we concatenated the passage, question, and each candidate answer (“[CLS] Question || Answer [SEP] Passage [SEP]”), input this to the models, and predicted the probability of each answer from the representation of the “[CLS]” token, following prior works (Yang et al., 2019; Liu et al., 2019b). In addition to these language understanding tasks, language structure analysis tasks are also a very important part of natural language processing. Therefore, we also evaluated the PrLMs on syntactic dependency parsing and semantic role labeling, a type of semantic parsing. The baselines we selected for dependency parsing and semantic role labeling are from Dozat & Manning (2016) and Cai et al. (2018), respectively. These two baseline models are strong and efficient, relying only on their model structures to obtain advanced parsing performance. To integrate a PrLM with the two baselines, we replace the BiLSTM encoder in each baseline with the encoder of the PrLM. We take the first subword or character representation of a word as the representation of that word, which resolves the PrLM’s inconsistent-granularity issue that would otherwise impede parsing. For the English-to-Indonesian migrated language models, since language understanding tasks in Indonesian are very limited, we chose the Universal Dependencies (UD) parsing task (v2.3, Zeman et al., 2018), whose treebanks for the world’s languages were built by an international cooperative project, as the downstream task for evaluation.
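As an illustration of the extractive-QA setup described above (two linear layers over per-token hidden states predicting answer start and end), a minimal PyTorch sketch is shown below; the encoder is assumed to be any PrLM returning per-token representations, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class SpanHead(nn.Module):
    """Predict start/end positions of the answer span from per-token hidden states."""
    def __init__(self, hidden_size):
        super().__init__()
        self.start = nn.Linear(hidden_size, 1)
        self.end = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states):                          # (batch, seq_len, hidden)
        start_logits = self.start(hidden_states).squeeze(-1)   # (batch, seq_len)
        end_logits = self.end(hidden_states).squeeze(-1)
        return start_logits, end_logits

# Training uses cross-entropy against the gold start/end token indices, with the
# input formatted as "[CLS] Question [SEP] Passage [SEP]".
head = SpanHead(hidden_size=768)
h = torch.randn(2, 384, 768)                                   # dummy encoder output
start_logits, end_logits = head(h)
loss = nn.CrossEntropyLoss()(start_logits, torch.tensor([5, 17])) + \
       nn.CrossEntropyLoss()(end_logits, torch.tensor([9, 20]))
```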
A.7 ABLATIONS Effects of Different Embedding Initialization To show the effectiveness of non-contextualized simulation and adversarial embedding space alignment, we compare the TRI-RoBERTa-base models obtained in the commonality training phase of our framework under four different embedding initialization configurations: random, random + adversarial align, fastText pre-trained, and fastText pre-trained + adversarial align. In addition, to lessen the influence of different amounts of training under the different initializations, we trained for an additional 40K update steps in the commonality training phase. We selected newstest2020-enzh.ref.zh from the WMT-20 news translation task as the evaluation set, with a total of 1,418 sentences, to avoid potential overlap with the training set. The subword-level bits-per-word (BPW) was used as the evaluation metric for the models’ MLM performance3. The BPW results on the evaluation set are presented in Table 5. The non-contextualized fastText embedding simulation with adversarial embedding alignment achieves better BPW scores than the other configurations, which shows the effectiveness of our proposed approach. In addition, comparing the random + adversarial align and fastText pre-trained initializations shows that pre-training non-contextualized embeddings on language data is more effective than direct embedding space alignment. Comparing 20K versus 40K training steps, longer training leads to lower BPW, but the performance gains are smaller than what our method brings. 3We do this because the models in this comparison use the same vocabulary, and the masked parts of the evaluation set are identical, making the BPW scores comparable. Effects of Cross-lingual Transfer Learning in TRELM We conduct further ablation studies to analyze the cross-lingual transfer learning design choices of our proposed TRELM framework, including the novel training objective, CdLM, and the TRILayer structure. The translation performance evaluation results are shown in Table 6. Using the newstest2020 en-zh and zh-en test sets, we evaluate the TRI-RoBERTa-base and TRI-RoBERTa-large models at the end of their transfer training phases. Since no alignment information is available during the evaluation phase, we use the same successive alignment that MLM uses. For the sequence generated by the model, continuous repetitions were removed and the [SEP] token was taken as the stop mark to obtain the final translation sequence. In the EN→ZH translation direction, we report character-level BLEU, while in ZH→EN, we report word-level BLEU. The Transformer-base NMT models for comparison are from Tiedemann & Thottingal (2020) and were trained on the OPUS corpora (Tiedemann, 2012). As seen from the results, our TRI-RoBERTa-base and TRI-RoBERTa-large with CdLM obtain very good BLEU-1 scores, indicating that the mapping between the transfer source language and the target language is explicitly captured by the model. When CdLM is removed and we only use the traditional joint MLM and TLM for training on parallel data of the same size, the BLEU-1 score decreases significantly, demonstrating that joint MLM and TLM do not learn explicit alignment information. The BLEU-1 score is lower than that of the Transformer-base NMT model, but this is because the Transformer-base model uses more parallel corpora and a more complex model design than our non-autoregressive translation pattern and lightweight TRILayer structure. In addition, the comparison with BLEU-2/3/4 shows that although Transformer-base can accurately translate some tokens, many tokens are not translated or are translated in the wrong order due to the lack of word ordering information and the differing sequence lengths, which results in a very low score. This also shows that word order is a very important factor in translation.
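The subword-level BPW metric used throughout these ablations is not formally defined in the text; our reading is the total negative log2-likelihood of the masked subwords divided by the number of words, which could be computed roughly as follows (an interpretation, not the authors' evaluation script).

```python
import math

def bits_per_word(masked_log_probs, num_words):
    """masked_log_probs: natural-log probabilities the model assigns to each masked
    subword on the evaluation set; num_words: number of whole words in that set.
    Assumed definition of subword-level bits-per-word (BPW)."""
    total_bits = -sum(lp / math.log(2) for lp in masked_log_probs)
    return total_bits / num_words

# Example: 3 masked subwords over a 2-word evaluation segment.
print(bits_per_word([math.log(0.25), math.log(0.5), math.log(0.125)], num_words=2))  # 3.0
```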
Since the TRELM framework is evaluated using existing pre-trained models, our migrated models are always larger than the original ones. Additional parameters arise in two places: the embedding layer grows due to the larger vocabulary and the language embeddings, and the TRILayer structure adds parameters. The embedding layer growth is necessary, but the TRILayer structure is optional, as it is only used for cross-lingual transfer training. Therefore, for this ablation, we test removing the TRILayer structure for a fairer comparison4 and show the results in Table 7. Comparing the evaluation-set BPW scores of the final models obtained from RoBERTa-base under different migration methods, we found that our TRELM framework is stronger in cross-lingual transfer learning than jointly using MLM and TLM, and that it does not simply rely on the extra parameters of the TRILayer. Furthermore, applying these pre-trained language models to a downstream task, dependency parsing on the CTB 5.1 treebank, yields downstream effects that correspond to the BPW results, which shows that the BPW score does describe the performance of PrLMs and that pre-training performance greatly affects performance in downstream tasks. Comparison of Different Cross-lingual Transfer Learning Objectives As discussed in Appendix A.4, CdLM, TLM, and TLM variants such as BRLM are typical objectives of cross-lingual transfer learning, in which parallel sentences are utilized for cross-lingual optimization. To compare these objectives empirically, we conducted a comparative experiment on TRI-RoBERTa-base. For this experiment, instead of using the transfer learning objective CdLM in the second stage of training as in our other models, we use TLM or BRLM instead. In addition, we follow Artetxe et al. (2020) in examining the effects of a joint vocabulary versus separate vocabularies in cross-lingual transfer learning, and we include a model, CdLM∗, with a separate vocabulary in this comparison as well. Specifically, for this model, we forgo language embeddings and adopt independent token embeddings for the different languages. CdLM and MLM alternately optimize the model. The empirical comparison of these objectives is listed in Table 8. The migration target language is Chinese, and the BPW score is used to compare the performance of the migrated models. We also show the dependency parsing performance on the CTB 5.1 dataset for the obtained models. Looking at CdLM and CdLM∗, in our TRELM framework, using a joint vocabulary leads to better performance than using a separate-vocabulary strategy, which is not consistent with Artetxe et al. (2020)’s conclusion. We attribute this difference to the fact that Artetxe et al. (2020)’s model uses joint MLM pre-training of multiple languages to achieve implicit transfer learning, so maintaining independent embeddings is important for distinguishing the languages. In TRELM, because two half-models are trained, the explicit conversion signal guides the model’s migration training in discerning the language. When separate vocabularies are used, some common information (such as punctuation, loanwords, etc.) is ignored, lessening the impact of CdLM. Second, comparing TLM, BRLM, and CdLM, we note that CdLM takes the source and target language sequences as input and output, respectively, which cooperates much better with the TRILayer and half-model training strategy, whereas TLM and BRLM combine the source and target sentences as input and predict a masked sentence as in MLM, which is much less conducive to the half-model training strategy.
Because the source and target language sentences are separate in CdLM, the model is much better able to differentiate the two languages, which makes CdLM a stronger cross-lingual transfer learning objective. Comparison with Cross-lingual Transfer Learning Related Works on mPrLMs Although we propose our method as an alternative to mPrLMs for cross-lingual transfer, it can also be applied to transfer mPrLMs. When transferring mPrLMs, the vocabulary replacement and embedding re-initialization are no longer needed, which makes our framework simpler. 4In this setting, we train the model with the same number of update steps using joint MLM and TLM when leveraging parallel sentences. We examine four main related approaches in the line of cross-lingual transfer learning based on PrLMs. The first approach is trivial: using data from the target language and MLM to finetune an mPrLM. This helps specialize the mPrLM as a PrLM for the target language. The second is ROSITAWORD (Mulcaire et al., 2019). In this method, the contextualized embeddings of the mPrLM are concatenated with non-contextualized multilingual word embeddings. This representation is then aligned across languages in a supervised manner using a parallel corpus, biasing the model toward cross-lingual feature sharing. The third, proposed by Liu et al. (2019a), makes use of MIM (Meeting-In-the-Middle) (Doval et al., 2018), which uses a linear mapping to refine the embedding alignment and is somewhat similar to the adversarial embedding alignment in our first step; however, because Liu et al. (2019a) only migrate the contextualized embeddings of an mPrLM, it is not a true migration of the model. Specifically, their post-processing linear mapping, trained on top of the contextualized embeddings of the mPrLM, is completely different from our new initialization of the raw embeddings of the PrLM. The fourth approach, Word-Alignment Finetune, is similar in motivation to our CdLM in that it uses the alignment information of parallel corpora to finetune the model (whereas ROSITAWORD and MIM focus on language-specific post-processing of the mPrLM’s contextualized embeddings). The difference is that Word-Alignment Finetune uses a contextualized embedding similarity measurement over alignments to calculate the loss, whereas our method is inspired by machine translation and uses language-to-language sequence translation for cross-lingual language modeling. We evaluate the effectiveness of these methods on dependency parsing, as shown in Table 9. We chose the widely used m-BERT-base as the base mPrLM and Chinese as the target language for these experiments. The resulting models were evaluated on the CTB 5.1 data of the dependency parsing task. For ROSITAWORD, we used word-level embeddings trained by FastText and aligned by MUSE, as done in the original paper. For MIM, the number of training steps for the linear mapping is kept the same as in our first stage’s adversarial embedding alignment training, and both train for 5 epochs. Target-Language Finetune and Word-Alignment Finetune use the same data as our main experiments and the same 120K update steps as well. We also list a model migrated from a monolingual PrLM (TRI-BERT) to compare the performance differences between transfer learning from monolingual and multilingual PrLMs.
Since migrating an mPrLM is simpler (it does not need to re-initialize or train embeddings) and converges faster, we train the migrated PrLM models for more steps (400K total training steps) to compare them more fairly. Comparing our TRELM with similar methods, the concatenation of cross-lingually aligned word-level embeddings in ROSITAWORD seems to have limited effect. MIM, which uses a mapping for post-processing, leads to some improvement, but compared to Target-Language Finetune and Word-Alignment Finetune, it is clearly a weaker option. The results of TRI-m-BERT-base, Word-Alignment Finetune, and Target-Language Finetune suggest that using explicit alignment signals is advantageous compared to using target-language monolingual data when finetuning for a limited number of update steps, though when data is sufficient and training time is long enough, the performance of cross-lingually transferred models will approach that of monolingually pre-trained models regardless of the transfer method. Thus, the methods primarily differ in how they perform with limited data, computing resources, or time. Our TRI-m-BERT-base outperforms +Word-Alignment Finetune, which shows that our CdLM, a language sequence modeling method inspired by machine translation, is more effective than solely deriving a loss from embedding space alignment. The results of TRI-BERT-base and TRI-m-BERT-base demonstrate that the simpler migration for m-BERT-base provides an initial performance boost when both models are trained for 120K steps, due to its faster convergence, but when they are trained for a longer 400K steps, TRI-BERT-base actually shows better performance than TRI-m-BERT-base. More Languages for a More Comprehensive Evaluation To demonstrate the generalization ability of the cross-lingual transfer learning of the proposed TRELM framework, we also migrate to German (DE) and Japanese (JA) in addition to Chinese and Indonesian, and we experiment with these languages on the Universal Dependencies parsing task. The migrated German and Japanese TRI-BERT-base and TRI-RoBERTa-base models use the same corpus size and training steps as their respective Chinese and Indonesian counterparts. We show the results for German, Indonesian, and Japanese on UD in Table 10. Since there are no official BERT-base models for these three languages, we use third-party pre-trained models: Deepset BERT-base-german5, IndoBERT-base (Wilie et al., 2020), CL-TOHOKU BERT-base-japanese6, and NICT BERT-base-japanese7. First, according to the results in the table, our TRI-BERT-base achieves performance quite similar to the third-party BERT-base models and even exceeds them in some instances. This demonstrates that TRELM is a general cross-lingual transfer learning framework. Second, comparing the third-party pre-trained BERT-base models with the official m-BERT-base, we found that some third-party BERTs are even less effective than m-BERT (generally speaking, m-BERT is not as good as monolingual BERT when the data and training time are sufficient). This shows that in some scenarios, pre-training from scratch is not a very good choice, potentially due to insufficient data, unsatisfactory pre-training resource quality, and/or insufficient pre-training time. Compared with well-trained monolingual BERT models, our migrated models are very competitive and can exceed PrLMs that suffer from poor pre-training.
In addition, for DE and JA, we also observed that TRI-RoBERTa was stronger than TRI-BERT, indicating that our migration process maintains the performance advantage of the original model. 5https://deepset.ai/german-bert 6https://github.com/cl-tohoku/bert-japanese 7https://alaginrc.nict.go.jp/nict-bert/index.html
1. What is the focus of the paper regarding language model transfer to new languages? 2. What are the strengths and weaknesses of the proposed approach, particularly in its evaluation and comparison to other methods? 3. Do you have any concerns about the benefit of the proposed approach and its limitations regarding its adaptability to other languages? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or suggestions regarding the organization and presentation of the paper?
Review
Review This paper proposes a method to transfer a language model pre-trained in English to a new language. The approach consists of learning a mapping to the target language via the adversarial approach of Lample et al. (2018), a new intermediate layer where hidden representations of the input tokens are reordered based on alignment information from an external aligner tool, and a multi-step training process. The model is evaluated on Chinese classification tasks from the CLUE benchmark and Chinese and Indonesian dependency parsing datasets, where it mostly performs competitively with a monolingual BERT Base and slightly outperforms multilingual BERT. Pros: The paper tackles an important setting, the transfer of models to languages other than English. The proposed method performs competitively with a monolingual BERT model while being more efficient to train. The proposed intermediate layer that explicitly incorporates alignment information is novel. Cons: Lack of evaluation on other languages. Despite proposing a general framework for cross-lingual transfer, the paper only evaluates on two fairly high-resource languages, Chinese and Indonesian. Given that the datasets used by the authors for evaluation such as the CoNLL-09 and UD datasets cover additional languages and in light of the introduction of recent massively multilingual benchmarks such as XTREME (Hu et al., 2020) or XGLUE (Liang et al., 2020), evaluation on multiple typologically diverse languages is necessary to demonstrate the generality of the proposed method. Missing baselines. The paper mentions two papers, Artetxe et al. (2020) and Tran (2020), that propose methods for the same setting—transferring an English model to other languages—in the related work section but does not compare to them in the experiments. The authors should also compare to a state-of-the-art multilingual model, such as XLM-R (Conneau et al., 2020). It is unclear where the benefit of the proposed approach comes from. The multi-step training process employed by the authors trains only on the monolingual corpus of the target language in the final phase of training. Recent work (Pfeiffer et al., 2020; https://arxiv.org/abs/2005.00052) has shown that such target language adaptation improves performance significantly over a pre-trained multilingual baseline. I was not able to find an ablation regarding the effect of this training phase, so the improved performance of the model could be due to fine-tuning only on the target language. At the very least, the authors should compare to a multilingual baseline that was adapted in the same way. This phase also makes the approach less general as the model is specifically tuned for the target language and less transferable to other languages. Lack of references to related work. The intro does not provide any references for related work on cross-lingual modeling and the rest of the paper is generally sparse with references to relevant work. For instance, given that the authors employ an algorithm that has been previously used for mapping cross-lingual word embeddings to map contextual embeddings, mentioning relevant work on this topic such as (Schuster et al., 2019; https://www.aclweb.org/anthology/N19-1162/) would be useful. Some passages in the text are unclear or should be supported with additional evidence. For instance, I found the claim that "symbol sets, symbol order, and sequence length [...] 
are the three challenges of machine translation" hard to justify without additional evidence given the presence of many other big challenges such as understanding semantics and context, resolving ambiguity, dealing with coreferences, etc. Given the page limit of 8 pages, the organization of the paper overall could be improved. Many important pieces of information such as the entire description of triple-phase training and the entire ablation analysis have been moved to the appendix, with no description of the key takeaways in the main body of the paper. Instead, Section 3.1 for instance, which mostly discusses an existing method could be shortened considerably.
ICLR
Title Binary Paragraph Vectors Abstract Recently Le & Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection. 1 INTRODUCTION One of the significant challenges in contemporary information processing is the sheer volume of available data. Gantz & Reinsel (2012), for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing (Indyk & Motwani, 1998), relies on hashing data into short, locality-preserving binary codes (Wang et al., 2014). The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items. In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by Salakhutdinov & Hinton (2009). Their semantic hashing leverages autoencoders with a sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words (BOW) representation. Salakhutdinov & Hinton demonstrated that semantic hashing codes used as an initial document filter can improve the precision of TF-IDF-based retrieval. Learning from BOW, however, has its disadvantages. First, the word-count representation, and in turn the learned codes, are not in themselves stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to the 2000 most frequent words. Recently several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. Mikolov et al. (2013) proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by Le & Mikolov (2014) to learn distributed representations of documents. Specifically, they proposed the Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words.
During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le & Mikolov also studied a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards the context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in an information retrieval task, while using only a few hundred dimensions. These models are also amenable to learning and inference over large vocabularies. The original CBOW network used a hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation (Gutmann & Hyvärinen, 2010) or importance sampling (Cho et al., 2015) to approximate the gradients with respect to the softmax logits. An alternative approach to learning representations of sentences has recently been described by Kiros et al. (2015). The networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level. In this work we present Binary Paragraph Vector models, extensions of PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from recent work by Lin et al. (2015) on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While Lin et al. (2015) employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents. 2 BINARY PARAGRAPH VECTOR MODELS The basic idea in binary paragraph vector models is to introduce a sigmoid nonlinearity before the softmax that models the conditional probability of words given the context. If we then enforce binary or near-binary activations in this nonlinearity, the probability distribution over words will be conditioned on a bit vector context, rather than a real-valued representation. Inference in the model proceeds as in Paragraph Vector, except that the document code is constructed from the sigmoid activations. After rounding, this code can be seen as a distributed binary representation of the document. In the simplest Binary PV-DBOW model (Figure 1) the dimensionality of the real-valued document embeddings is equal to the length of the binary codes. Despite this low dimensional representation – a useful binary hash will typically have 128 or fewer bits – this model performed surprisingly well in our experiments.
Note that we cannot simply increase the embedding dimensionality in Binary PV-DBOW in order to learn better codes: binary vectors learned in this way would be too long to be useful in document hashing. The retrieval performance can, however, be improved by using binary codes for initial filtering of documents, and then using a representation with higher capacity to rank the remaining documents by their similarity to the query. Salakhutdinov & Hinton (2009), for example, used semantic hashing codes for initial filtering and TF-IDF for ranking. A similar document retrieval strategy can be realized with binary paragraph vectors. Furthermore, we can extend the Binary PV-DBOW model to simultaneously learn short binary codes and higher-dimensional real-valued representations. Specifically, in the Real-Binary PV-DBOW model (Figure 2) we introduce a linear projection between the document embedding matrix and the sigmoid nonlinearity. During training, we learn the softmax parameters and the projection matrix. During inference, softmax weights and the projection matrix are fixed. This way, we simultaneously obtain a high-capacity representation of a document in the embedding matrix, e.g. a 300-dimensional real-valued vector, and a short binary representation from the sigmoid activations. One advantage of using the Real-Binary PV-DBOW model over two separate networks is that we need to store only one set of softmax parameters (and a small projection matrix) in memory, instead of two large weight matrices. Additionally, only one model needs to be trained, rather than two distinct networks. Binary document codes can also be learned by extending distributed memory models. Le & Mikolov (2014) suggest that in PV-DM, a context of the central word can be constructed by either concatenating or averaging the document vector and the embeddings of the surrounding words. However, in Binary PV-DM (Figure 3) we always construct the context by concatenating the relevant vectors before applying the sigmoid nonlinearity. This way, the length of binary codes is not tied to the dimensionality of word embeddings. Softmax layers in the models described above should be trained to predict words in documents given binary context vectors. Training should therefore encourage binary activations in the preceding sigmoid layers. This can be done in several ways. In semantic hashing autoencoders, Salakhutdinov & Hinton (2009) added noise to the sigmoid coding layer. Error backpropagation then countered the noise by forcing the activations to be close to 0 or 1. Another approach was used by Krizhevsky & Hinton (2011) in autoencoders that learned binary codes for small images. During the forward pass, activations in the coding layer were rounded to 0 or 1. Original (i.e. not rounded) activations were used when backpropagating errors. Alternatively, one could model the document codes with stochastic binary neurons. Learning in this case can still proceed with error backpropagation, provided that a suitable gradient estimator is used alongside stochastic activations. We experimented with the methods used in semantic hashing and Krizhevsky’s autoencoders, as well as with the two biased gradient estimators for stochastic binary neurons discussed by Bengio et al. (2013). We also investigated the slope annealing trick (Chung et al., 2016) when training networks with stochastic binary activations. From our experience, binary paragraph vector models with rounded activations are easy to train and learn better codes than models with noise-based binarization or stochastic neurons. We therefore use Krizhevsky’s binarization in our models.
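A minimal sketch of a Binary PV-DBOW-style coding layer with this rounding scheme (forward pass uses rounded sigmoid activations, gradients flow through the un-rounded values) is shown below; the model sizes are illustrative, and a full implementation would replace the plain softmax with a sampled or otherwise approximated one for large vocabularies.

```python
import torch
import torch.nn as nn

class BinaryPVDBOW(nn.Module):
    def __init__(self, num_docs, vocab_size, code_bits=128):
        super().__init__()
        self.doc_embedding = nn.Embedding(num_docs, code_bits)
        self.word_classifier = nn.Linear(code_bits, vocab_size)

    def forward(self, doc_ids):
        a = torch.sigmoid(self.doc_embedding(doc_ids))
        # Krizhevsky-style binarization: round in the forward pass, but let the
        # gradient pass through the original sigmoid activations (straight-through).
        code = a + (a.round() - a).detach()
        return self.word_classifier(code)        # logits over the vocabulary

# Toy usage: predict one sampled word for each of two documents.
model = BinaryPVDBOW(num_docs=1000, vocab_size=50_000)
logits = model(torch.tensor([3, 7]))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([42, 99]))
loss.backward()
```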
3 EXPERIMENTS To assess the performance of binary paragraph vectors, we carried out experiments on two datasets frequently used to evaluate document retrieval methods, namely 20 Newsgroups1 and a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1)2. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by Li et al. (2015) indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousand elements. The 20 Newsgroups dataset comes with reference train/test sets. In the case of RCV1 we used half of the documents for training and the other half for evaluation. We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10) (Järvelin & Kekäläinen, 2002). The results depend, of course, on the chosen document relevancy measure. The relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. However, in RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by Salakhutdinov & Hinton (2009). That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures follows Salakhutdinov & Hinton (2009), enabling comparison with semantic hashing codes. We use AdaGrad (Duchi et al., 2011) for training and inference in all experiments reported in this work. During training we employ dropout (Srivastava et al., 2014) in the embedding layer. To facilitate models with large vocabularies, we approximate the gradients with respect to the softmax logits using the method described by Cho et al. (2015). Binary PV-DM networks use the same number of dimensions for document codes and word embeddings. Performance of 128- and 32-bit binary paragraph vector codes is reported in Table 1 and in Figure 4. For comparison we also report the performance of real-valued paragraph vectors. Note that the binary codes perform very well, despite their far lower capacity: on both test sets the 128-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors.
Furthermore, Binary PV-DBOW with bigrams outperforms semantic hashing codes: comparison of precision-recall curves from Figure 4 with Salakhutdinov & Hinton (2009, Figures 6 & 7) shows that on both test sets 128-bit codes learned with this model outperform 128-bit semantic hashing codes. Moreover, the 32-bit codes from this model outperform 128-bit semantic hashing codes on the RCV1 dataset, and on the 20 Newsgroups dataset give similar precision up to approximately 3% recall and better precision for higher recall levels. Note that the difference in this case lies not only in retrieval precision: the short 32-bit Binary PV-DBOW codes are more efficient for indexing than long 128-bit semantic hashing codes. 1Available at http://qwone.com/~jason/20Newsgroups 2Available at http://trec.nist.gov/data/reuters/reuters.html We also compared binary paragraph vectors against codes constructed by first inferring short, real-valued paragraph vectors and then using another unsupervised model or hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of the binary codes, the number of network parameters in this approach is similar to that of Binary PV models. We experimented with an autoencoder with a sigmoid coding layer and Krizhevsky’s binarization, with a Gaussian-Bernoulli Restricted Boltzmann Machine (Welling et al., 2004), and with two standard hashing algorithms, namely random hyperplane projection (Charikar, 2002) and iterative quantization (Gong & Lazebnik, 2011). Paragraph vectors in these experiments were inferred using PV-DBOW with bigrams. Results reported in Table 2 show no benefit from using a separate algorithm for binarization. On the 20 Newsgroups dataset an autoencoder with Krizhevsky’s binarization achieved MAP equal to Binary PV-DBOW, while the other three approaches yielded lower MAP. On the larger RCV1 dataset end-to-end training of Binary PV-DBOW yielded higher MAP than the baseline approaches. Some gain in precision of top hits can be observed for iterative quantization and an autoencoder with Krizhevsky’s binarization. However, it does not translate to an improved MAP, and decreases when models are trained on a larger corpus (RCV1). Li et al. (2015) argue that PV-DBOW outperforms PV-DM on a sentiment classification task, and demonstrate that the performance of PV-DBOW can be improved by including bigrams in the vocabulary. We observed similar results with Binary PV models. That is, including bigrams in the vocabulary usually improved retrieval precision. Also, codes learned with Binary PV-DBOW provided higher retrieval precision than Binary PV-DM codes. Furthermore, to choose the context size for the Binary PV-DM models, we evaluated several networks on validation sets taken out of the training data. The best results were obtained with a minimal one-word, one-sided context window. This is the distributed memory architecture most similar to the Binary PV-DBOW model.
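To make the retrieval protocol concrete, the following sketch ranks documents by Hamming distance over packed binary codes and computes average precision for one query; it follows our reading of the protocol above rather than the authors' evaluation code.

```python
import numpy as np

def hamming_rank(query_code, codes):
    """query_code: (bits,) 0/1 array; codes: (num_docs, bits) 0/1 array.
    Returns document indices sorted by increasing Hamming distance to the query."""
    q = np.packbits(query_code.astype(np.uint8))
    c = np.packbits(codes.astype(np.uint8), axis=1)
    dists = np.unpackbits(np.bitwise_xor(c, q), axis=1).sum(axis=1)
    return np.argsort(dists, kind="stable")

def average_precision(ranked_relevance):
    """ranked_relevance: 0/1 relevance of documents in ranked order."""
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

# Toy usage: the query document itself ranks first (distance 0).
codes = np.random.randint(0, 2, size=(5, 128))
print(hamming_rank(codes[0], codes))
```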
3.1 TRANSFER LEARNING In the experiments presented thus far we had at our disposal training sets with documents similar to those for which we inferred binary codes. One could ask whether binary paragraph vectors can be used without collecting a domain-specific training set. For example, what if we needed to hash documents that are not associated with any available domain-specific corpus? One solution could be to train the model on a big generic text corpus that covers a wide variety of domains. It is not obvious, however, whether such a model would capture language semantics meaningful for unrelated documents. To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. We used words and bigrams with at least 100 occurrences in the English Wikipedia. The results are presented in Table 3 and in Figure 5. The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning.
Table 3: Information retrieval results for the Binary PV-DBOW model trained on an unrelated text corpus. Results are reported for 128-bit codes.
                MAP    NDCG@10
20 Newsgroups   0.24   0.51
RCV1            0.18   0.66
Figure 5: Precision-recall curves for the baseline Binary PV-DBOW models and a Binary PV-DBOW model trained on an unrelated text corpus, on (a) 20 Newsgroups and (b) RCV1. Results are reported for 128-bit codes.
3.2 RETRIEVAL WITH REAL-BINARY MODELS As pointed out by Salakhutdinov & Hinton (2009), when working with large text collections one can use short binary codes for indexing and a representation with more capacity for ranking. Following this idea, we proposed the Real-Binary PV-DBOW model (Section 2) that can simultaneously learn short binary codes and high-dimensional real-valued representations. We begin the evaluation of this model by comparing the retrieval precision of the real-valued and binary representations it learns. To this end, we trained a Real-Binary PV-DBOW model with 28-bit binary codes and 300-dimensional real-valued representations on the 20 Newsgroups and RCV1 datasets. Results are reported in Figure 6. The real-valued representations learned with this model give lower precision than PV-DBOW vectors but, importantly, improve precision over binary codes for top ranked documents. This justifies their use alongside binary codes. Using short binary codes for initial filtering of documents comes with a tradeoff between the retrieval performance and the recall level. For example, one can select a small subset of similar documents by using 28–32 bit codes and retrieving documents within a small Hamming distance to the query. This will improve retrieval performance, and possibly also precision, at the cost of recall. Conversely, short codes provide a less fine-grained hashing and can be used to index documents within a larger Hamming distance to the query. They can therefore be used to improve recall at the cost of retrieval performance, and possibly also precision. For these reasons, we evaluated Real-Binary PV-DBOW models with different code sizes and under different limits on the Hamming distance to the query. In general, we cannot expect these models to achieve 100% recall under the test settings. Furthermore, recall will vary on a query-by-query basis. We therefore decided to focus on the NDCG@10 metric in this evaluation, as it is suited for measuring model performance when a short list of relevant documents is sought and the recall level is not known. MAP and precision-recall curves are not applicable in these settings.
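A sketch of the filter-then-rank retrieval described above (binary codes within a Hamming radius select candidates, which are then ordered by cosine similarity of the real-valued vectors) could look like the following; the radius and array sizes are illustrative assumptions.

```python
import numpy as np

def filter_then_rank(query_code, query_vec, codes, vecs, max_hamming=2):
    """Select documents whose binary code lies within max_hamming of the query code,
    then rank the candidates by cosine similarity of the real-valued representations."""
    dists = (codes != query_code).sum(axis=1)            # Hamming distance on 0/1 codes
    candidates = np.flatnonzero(dists <= max_hamming)
    if candidates.size == 0:
        return candidates
    v = vecs[candidates]
    sims = v @ query_vec / (np.linalg.norm(v, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    return candidates[np.argsort(-sims)]

# Toy usage with 28-bit codes and 300-dimensional real-valued vectors.
codes = np.random.randint(0, 2, size=(1000, 28))
vecs = np.random.randn(1000, 300)
top = filter_then_rank(codes[0], vecs[0], codes, vecs, max_hamming=2)
```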
Information retrieval results for Real-Binary PV-DBOW are summarized in Table 4. The model gives higher NDCG@10 than 32-bit Binary PV-DBOW codes (Table 1). The difference is large when the initial filtering is restrictive, e.g. when using 28-bit codes and a 2-bit Hamming distance limit. Real-Binary PV-DBOW can therefore be useful when one needs to quickly find a short list of relevant documents in a large text collection, and the recall level is not of primary importance. If needed, precision can be further improved by using plain Binary PV-DBOW codes for filtering and the standard DBOW representation for ranking (Table 4, column C). Note, however, that the PV-DBOW model would then use approximately 10 times more parameters than Real-Binary PV-DBOW. 4 CONCLUSION In this article we presented simple neural networks that learn short binary codes for text documents. Our networks extend Paragraph Vector by introducing a sigmoid nonlinearity before the softmax that predicts words in documents. Binary codes inferred with the proposed networks achieve higher retrieval precision than semantic hashing codes on two popular information retrieval benchmarks. They also retain a lot of their precision when trained on an unrelated text corpus. Finally, we presented a network that simultaneously learns short binary codes and longer, real-valued representations. The best codes in our experiments were inferred with Binary PV-DBOW networks. The Binary PV-DM model did not perform as well. Li et al. (2015) made similar observations for Paragraph Vector models, and argue that in the distributed memory model the word context takes a lot of the burden of predicting the central word from the document code. An interesting line of future research could, therefore, focus on models that account for word order while learning good binary codes. It is also worth noting that Le & Mikolov (2014) constructed paragraph vectors by combining DM and DBOW representations. This strategy may prove useful also with binary codes, when employed with hashing algorithms designed for longer codes, e.g. with multi-index hashing (Norouzi et al., 2012). ACKNOWLEDGMENTS This research is supported by the Polish National Science Centre grant no. DEC-2013/09/B/ST6/01549 “Interactive Visual Text Analytics (IVTA): Development of novel, user-driven text mining and visualization methods for large text corpora exploration.” This research was carried out with the support of the “HPC Infrastructure for Grand Challenges of Science and Engineering” project, co-financed by the European Regional Development Fund under the Innovative Economy Operational Programme. This research was supported in part by PL-Grid Infrastructure.
1. What is the main contribution of the paper, and how does it improve upon previous methods? 2. What are the strengths and weaknesses of the proposed method, particularly in terms of its ability to encode documents efficiently and compare them effectively? 3. How does the reviewer assess the clarity and adequacy of the paper's explanations and illustrations? 4. Are there any concerns regarding the experimental setup and comparisons with other works in the field? 5. What are some potential avenues for future research related to this paper's topic?
Review
Review The method in this paper introduces a binary encoding level in the PV-DBOW and PV-DM document embedding methods (from Le & Mikolov'14). The binary encoding consists in a sigmoid with trained parameters that is inserted after the standard training stage of the embedding. For a document to encode, the binary vector is obtained by forcing the sigmoid to output a binary output for each of the embedding vector components. The binary vector can then be used for compact storage and fast comparison of documents. Pros: - the binary representation outperforms the Semantic hashing method from Salakhutdinov & Hinton '09 - the experimental approach sound: they compare on the same experimental setup as Salakhutdinov & Hinton '09, but since in the meantime document representations improved (Le & Mikolov'14), they also combine this new representation with an RBM to show the benefit of their binary PV-DBOW/PV-DM Cons: - the insertion of the sigmoid to produce binary codes (from Lin & al. '15) in the training process is incremental - the explanation is too abstract and difficult to follow for a non-expert (see details below) - a comparison with efficient indexing methods used in image retrieval is missing. For large-scale indexing of embedding vectors, derivations of the Inverted multi-index are probably more interesting than binary codes. See eg. Babenko & Lempitsky, Efficient Indexing of Billion-Scale Datasets of Deep Descriptors, CVPR'16 Detailed comments: Section 1: the motivation for producing binary codes is not given. Also, the experimental section could give some timings and mem usage numbers to show the benefit of binary embeddings figure 1, 2, 3: there is enough space to include more information on the representation of the model: model parameters + training objective + characteristic sizes + dropout. In particular, in fig 2, it is not clear why "embedding lookup" and "linear projection" cannot be merged in a single smaller lookup table (presumably because there is an intermediate training objective that prevents this). p2: "This way, the length of binary codes is not tied to the dimensionality of word embeddings." -> why not? section 3: This is the experimental setup of Salakhutdinov & Hinton 2009. Specify this and whether there is any difference between the setups. "similarity of the inferred codes": say here that codes are compared using Hamming distances. "binary codes perform very well, despite their far lower capacity" -> do you mean smaller size than real vectors? fig 5: these plots could be dropped if space is needed. section 3.1: one could argue that "transferring" from Wikipedia to anything else cannot be called transferring, since Wikipedia's purpose is to include all topics and lexical domains section 3.2: specify how the 300D real vectors are compared. L2 distance? inner product? fig4: specify what the raw performance of the large embedding vectors is (without pre-filtering with binary codes), or equivalently, the perf of (code-size, Hamming dis) = (28, 28), (24, 24), etc.
ICLR
Title Binary Paragraph Vectors Abstract Recently Le & Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection. 1 INTRODUCTION One of the significant challenges in contemporary information processing is the sheer volume of available data. Gantz & Reinsel (2012), for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing (Indyk & Motwani, 1998), relies on hashing data into short, localitypreserving binary codes (Wang et al., 2014). The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items. In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by Salakhutdinov & Hinton (2009). Their semantic hashing leverages autoencoders with sigmoid bottleneck layer to learn binary codes from a word-count bag-ofwords (BOW) representation. Salakhutdinov & Hinton demonstrated that semantic hashing codes used as an initial document filter can improve precision of TF-IDF-based retrieval. Learning from BOW, however, has its disadvantages. First, word-count representation, and in turn the learned codes, are not in itself stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fullyconnected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to 2000 most frequent words. Recently several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. Mikolov et al. (2013) proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by Le & Mikolov (2014) to learn distributed representations of documents. Specifically, they proposed Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. 
During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le & Mikolov studied also a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bagof-bigrams in information retrieval task, while using only few hundreds of dimensions. These models are also amendable to learning and inference over large vocabularies. Original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation (Gutmann & Hyvärinen, 2010) or importance sampling (Cho et al., 2015) to approximate the gradients with respect to the softmax logits. An alternative approach to learning representation of sentences has been recently described by Kiros et al. (2015). Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level. In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by Lin et al. (2015) on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While Lin et al. (2015) employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents. 2 BINARY PARAGRAPH VECTOR MODELS The basic idea in binary paragraph vector models is to introduce a sigmoid nonlinearity before the softmax that models the conditional probability of words given the context. If we then enforce binary or near-binary activations in this nonlinearity, the probability distribution over words will be conditioned on a bit vector context, rather than real-valued representation. The inference in the model proceeds like in Paragraph Vector, except the document code is constructed from the sigmoid activations. After rounding, this code can be seen as a distributed binary representation of the document. In the simplest Binary PV-DBOW model (Figure 1) the dimensionality of the real-valued document embeddings is equal to the length of the binary codes. Despite this low dimensional representation – a useful binary hash will typically have 128 or fewer bits – this model performed surprisingly well in our experiments. 
Note that we cannot simply increase the embedding dimensionality in Binary PV-DBOW in order to learn better codes: binary vectors learned in this way would be too long to be useful in document hashing. The retrieval performance can, however, be improved by using binary codes for initial filtering of documents, and then using a representation with higher capacity to rank the remaining documents by their similarity to the query. Salakhutdinov & Hinton (2009), for example, used semantic hashing codes for initial filtering and TF-IDF for ranking. A similar document retrieval strategy can be realized with binary paragraph vectors. Furthermore, we can extend the Binary PV-DBOW model to simultaneously learn short binary codes and higher-dimensional realvalued representations. Specifically, in the Real-Binary PV-DBOW model (Figure 2) we introduce a linear projection between the document embedding matrix and the sigmoid nonlinearity. During training, we learn the softmax parameters and the projection matrix. During inference, softmax weights and the projection matrix are fixed. This way, we simultaneously obtain a high-capacity representation of a document in the embedding matrix, e.g. 300-dimensional real-valued vector, and a short binary representation from the sigmoid activations. One advantage of using the Real-Binary PV-DBOW model over two separate networks is that we need to store only one set of softmax parameters (and a small projection matrix) in the memory, instead of two large weight matrices. Additionally, only one model needs to be trained, rather than two distinct networks. Binary document codes can also be learned by extending distributed memory models. Le & Mikolov (2014) suggest that in PV-DM, a context of the central word can be constructed by either concatenating or averaging the document vector and the embeddings of the surrounding words. However, in Binary PV-DM (Figure 3) we always construct the context by concatenating the relevant vectors before applying the sigmoid nonlinearity. This way, the length of binary codes is not tied to the dimensionality of word embeddings. Softmax layers in the models described above should be trained to predict words in documents given binary context vectors. Training should therefore encourage binary activations in the preceding sigmoid layers. This can be done in several ways. In semantic hashing autoencoders Salakhutdinov & Hinton (2009) added noise to the sigmoid coding layer. Error backpropagation then countered the noise, by forcing the activations to be close to 0 or 1. Another approach was used by Krizhevsky & Hinton (2011) in autoencoders that learned binary codes for small images. During the forward pass, activations in the coding layer were rounded to 0 or 1. Original (i.e. not rounded) activations were used when backpropagating errors. Alternatively, one could model the document codes with stochastic binary neurons. Learning in this case can still proceed with error backpropagation, pro- vided that a suitable gradient estimator is used alongside stochastic activations. We experimented with the methods used in semantic hashing and Krizhevsky’s autoencoders, as well as with the two biased gradient estimators for stochastic binary neurons discussed by Bengio et al. (2013). We also investigated the slope annealing trick (Chung et al., 2016) when training networks with stochastic binary activations. 
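The rounding scheme of Krizhevsky & Hinton (2011) can be implemented as a straight-through operation: activations are rounded in the forward pass, while gradients are propagated as if the rounding were the identity. A minimal sketch (ours, assuming PyTorch) is:

```python
# Sketch of rounding with a straight-through gradient, assuming PyTorch.
# Forward: activations are rounded to {0, 1}. Backward: the rounding is
# treated as the identity, so gradients reach the underlying sigmoid.
import torch

def round_straight_through(probs: torch.Tensor) -> torch.Tensor:
    hard = (probs > 0.5).float()
    # (hard - probs).detach() contributes no gradient, so d(out)/d(probs) = 1.
    return probs + (hard - probs).detach()

# Usage inside a coding layer:
#   codes = round_straight_through(torch.sigmoid(pre_activations))
# The noise-based variant of Salakhutdinov & Hinton would instead add noise to
# pre_activations before the sigmoid and skip the rounding.
```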
From our experience, binary paragraph vector models with rounded activations are easy to train and learn better codes than models with noise-based binarization or stochastic neurons. We therefore use Krizhevsky’s binarization in our models. 3 EXPERIMENTS To assess the performance of binary paragraph vectors, we carried out experiments on two datasets frequently used to evaluate document retrieval methods, namely 20 Newsgroups1 and a cleansed version (also called v2) of Reuters Corpus Volume 12 (RCV1). As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by Li et al. (2015) indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. The 20 Newsgroups dataset comes with reference train/test sets. In case of RCV1 we used half of the documents for training and the other half for evaluation. We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10) (Järvelin & Kekäläinen, 2002). The results depend, of course, on the chosen document relevancy measure. Relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. However, in RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by Salakhutdinov & Hinton (2009). That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures follows Salakhutdinov & Hinton (2009), enabling comparison with semantic hashing codes. We use AdaGrad (Duchi et al., 2011) for training and inference in all experiments reported in this work. During training we employ dropout (Srivastava et al., 2014) in the embedding layer. To facilitate models with large vocabularies, we approximate the gradients with respect to the softmax logits using the method described by Cho et al. (2015). Binary PV-DM networks use the same number of dimensions for document codes and word embeddings. Performance of 128- and 32-bit binary paragraph vector codes is reported in Table 1 and in Figure 4. For comparison we also report performance of real-valued paragraph vectors. Note that the binary codes perform very well, despite their far lower capacity: on both test sets the 128-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors. 
Furthermore, Binary PV-DBOW with bigrams outperforms semantic hashing codes: comparison of precision-recall curves from Figure 4 with Salakhutdinov & Hinton (2009, Figures 6 & 7) shows that on both test sets 128-bit codes learned with this model outperform 128-bit semantic hashing codes. Moreover, the 32-bit codes from this model outperform 128-bit semantic hashing codes on the RCV1 dataset, and on the 20 Newsgroups dataset give similar precision up to approximately 3% recall and better precision for higher recall levels. Note that the difference in this case lies not only in retrieval precision: the short 32-bit Binary PV-DBOW codes are more efficient for indexing than long 128-bit semantic hashing codes. 1Available at http://qwone.com/˜jason/20Newsgroups 2Available at http://trec.nist.gov/data/reuters/reuters.html We also compared binary paragraph vectors against codes constructed by first inferring short, realvalued paragraph vectors and then using another unsupervised model or hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of binary codes, the number of network parameters in this approach is similar to that of Binary PV models. We experimented with an autoencoder with sigmoid coding layer and Krizhevsky’s binarization, with a Gaussian-Bernoulli Restricted Boltzmann Machine (Welling et al., 2004), and with two standard hashing algorithms, namely random hyperplane projection (Charikar, 2002) and iterative quantization (Gong & Lazebnik, 2011). Paragraph vectors in these experiments were inferred using PVDBOW with bigrams. Results reported in Table 2 shows no benefit from using a separate algorithm for binarization. On the 20 Newsgroups dataset an autoencoder with Krizhevsky’s binarization achieved MAP equal to Binary PV-DBOW, while the other three approaches yielded lower MAP. On the larger RCV1 dataset an end-to-end training of Binary PV-DBOW yielded higher MAP than the baseline approaches. Some gain in precision of top hits can be observed for iterative quantization and an autoencoder with Krizhevsky’s binarization. However, it does not translate to an improved MAP, and decreases when models are trained on a larger corpus (RCV1). Li et al. (2015) argue that PV-DBOW outperforms PV-DM on a sentiment classification task, and demonstrate that the performance of PV-DBOW can be improved by including bigrams in the vocabulary. We observed similar results with Binary PV models. That is, including bigrams in the vocabulary usually improved retrieval precision. Also, codes learned with Binary PV-DBOW provided higher retrieval precision than Binary PV-DM codes. Furthermore, to choose the context size for the Binary PV-DM models, we evaluated several networks on validation sets taken out of the training data. The best results were obtained with a minimal one-word, one-sided context window. This is the distributed memory architecture most similar to the Binary PV-DBOW model. 3.1 TRANSFER LEARNING In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask a question, whether binary paragraph vectors could be used without collecting a domain-specific training set? For example, what if we needed to hash documents that are not associated with any available domain-specific corpus? One solution could be to train the model with a big generic text corpus, that covers a wide variety of domains. 
It is not obvious, however, whether such a model would capture language semantics meaningful for unrelated documents. To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. We used words and bigrams with at least 100 occurrences in the English Wikipedia. The results are presented in Table 3 and in Figure 5. The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning.

Table 3: Information retrieval results for the Binary PV-DBOW model trained on an unrelated text corpus. Results are reported for 128-bit codes.
                 MAP    NDCG@10
  20 Newsgroups  0.24   0.51
  RCV1           0.18   0.66

[Figure 5: Precision-recall curves for the baseline Binary PV-DBOW models and a Binary PV-DBOW model trained on an unrelated text corpus; panel (a) 20 Newsgroups, panel (b) RCV1. Results are reported for 128-bit codes.]

3.2 RETRIEVAL WITH REAL-BINARY MODELS As pointed out by Salakhutdinov & Hinton (2009), when working with large text collections one can use short binary codes for indexing and a representation with more capacity for ranking. Following this idea, we proposed the Real-Binary PV-DBOW model (Section 2) that can simultaneously learn short binary codes and high-dimensional real-valued representations. We begin evaluation of this model by comparing retrieval precision of real-valued and binary representations learned by it. To this end, we trained a Real-Binary PV-DBOW model with 28-bit binary codes and 300-dimensional real-valued representations on the 20 Newsgroups and RCV1 datasets. Results are reported in Figure 6. The real-valued representations learned with this model give lower precision than PV-DBOW vectors but, importantly, improve precision over binary codes for top ranked documents. This justifies their use alongside binary codes. Using short binary codes for initial filtering of documents comes with a tradeoff between the retrieval performance and the recall level. For example, one can select a small subset of similar documents by using 28–32 bit codes and retrieving documents within a small Hamming distance to the query. This will improve retrieval performance, and possibly also precision, at the cost of recall. Conversely, short codes provide a less fine-grained hashing and can be used to index documents within a larger Hamming distance to the query. They can therefore be used to improve recall at the cost of retrieval performance, and possibly also precision. For these reasons, we evaluated Real-Binary PV-DBOW models with different code sizes and under different limits on the Hamming distance to the query. In general, we cannot expect these models to achieve 100% recall under the test settings. Furthermore, recall will vary on a query-by-query basis. A sketch of this filter-then-rank procedure is given below.
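The filter-then-rank strategy can be sketched as follows (our illustration, assuming NumPy; function and variable names are ours): short binary codes select a candidate set within a Hamming-distance limit, and the higher-capacity real-valued vectors rank the candidates.

```python
# Sketch of retrieval with Real-Binary PV-DBOW representations, assuming NumPy.
# binary_codes: (N, B) array of 0/1 ints; real_vecs: (N, D) float array.
import numpy as np

def filter_then_rank(query_code, query_vec, binary_codes, real_vecs,
                     max_hamming=2, top_k=10):
    # Stage 1: filter candidates by Hamming distance on the short binary codes.
    hamming = np.count_nonzero(binary_codes != query_code, axis=1)
    candidates = np.flatnonzero(hamming <= max_hamming)
    # Stage 2: rank surviving candidates by cosine similarity of the
    # higher-capacity real-valued representations.
    cand_vecs = real_vecs[candidates]
    sims = cand_vecs @ query_vec / (
        np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    order = np.argsort(-sims)[:top_k]
    return candidates[order]
```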
We therefore decided to focus on the NDCG@10 metric in this evaluation, as it is suited for measuring model performance when a short list of relevant documents is sought, and the recall level is not known. MAP and precision-recall curves are not applicable in these settings. Information retrieval results for Real-Binary PV-DBOW are summarized in Table 4. The model gives higher NDCG@10 than 32-bit Binary PV-DBOW codes (Table 1). The difference is large when the initial filtering is restrictive, e.g., when using 28-bit codes and a 2-bit Hamming distance limit. Real-Binary PV-DBOW can therefore be useful when one needs to quickly find a short list of relevant documents in a large text collection, and the recall level is not of primary importance. If needed, precision can be further improved by using plain Binary PV-DBOW codes for filtering and the standard DBOW representation for ranking (Table 4, column C). Note, however, that the PV-DBOW model would then use approximately 10 times more parameters than Real-Binary PV-DBOW. 4 CONCLUSION In this article we presented simple neural networks that learn short binary codes for text documents. Our networks extend Paragraph Vector by introducing a sigmoid nonlinearity before the softmax that predicts words in documents. Binary codes inferred with the proposed networks achieve higher retrieval precision than semantic hashing codes on two popular information retrieval benchmarks. They also retain a lot of their precision when trained on an unrelated text corpus. Finally, we presented a network that simultaneously learns short binary codes and longer, real-valued representations. The best codes in our experiments were inferred with Binary PV-DBOW networks. The Binary PV-DM model did not perform as well. Li et al. (2015) made similar observations for Paragraph Vector models, and argue that in the distributed memory model the word context takes a lot of the burden of predicting the central word from the document code. An interesting line of future research could, therefore, focus on models that account for word order while learning good binary codes. It is also worth noting that Le & Mikolov (2014) constructed paragraph vectors by combining DM and DBOW representations. This strategy may also prove useful with binary codes, when employed with hashing algorithms designed for longer codes, e.g., multi-index hashing (Norouzi et al., 2012). ACKNOWLEDGMENTS This research is supported by the Polish National Science Centre grant no. DEC-2013/09/B/ST6/01549 “Interactive Visual Text Analytics (IVTA): Development of novel, user-driven text mining and visualization methods for large text corpora exploration.” This research was carried out with the support of the “HPC Infrastructure for Grand Challenges of Science and Engineering” project, co-financed by the European Regional Development Fund under the Innovative Economy Operational Programme. This research was supported in part by PL-Grid Infrastructure.
1. What is the main contribution of the paper regarding text representation and similarity search?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. How does the reviewer assess the novelty and significance of the paper's content?
4. Are there any concerns or suggestions regarding the paper's experimental design and comparisons with other approaches?
5. Can you provide additional resources or references relevant to the paper's topic and methodology?
Review
This paper presents a method to represent text documents and paragraphs as short binary codes to allow fast similarity search and retrieval by using hashing techniques. The real-valued paragraph vectors by Le & Mikolov are extended by adding a stochastic binary layer on top of the neural network architecture. Two methods for binarizing the final activations are compared: (1) simply adding noise to sigmoid activations to encourage discretization; (2) binarizing the activations in the forward pass and keeping them real-valued in the backward pass (straight-through estimation). The paper presents encouraging results by using straight-through estimation on the 20 Newsgroups and RCV1 text datasets with 128- and 32-bit binary codes. On the plus side, the application presented in the paper is interesting and important. The exposition of the paper is clean and clear. However, the novelty of the approach is limited from a machine learning standpoint. The literature on binary hashing beyond semantic hashing and Krizhevsky's binary autoencoders in 2011 is not explained. An important baseline is missing where real-valued paragraph vectors are learned first, and then converted to binary codes using off-the-shelf hashing methods (e.g. random projection LSH by Charikar, BRE by Kulis & Darrell, ITQ by Gong & Lazebnik, MLH by Norouzi & Fleet, etc.). Given the lack of novelty and the missing baseline, I do not recommend this paper in its current form for publication in the ICLR conference's proceedings. Moving forward, this paper may be more suitable for NLP conferences as it is more on the applied side.
More comments:
- I believe from a practical perspective it may be easier to first learn real-valued paragraph vectors and then quantize them for indexing. That said, an end-to-end approach as proposed in this paper may perform better. I would like to see an empirical comparison between the proposed end-to-end approach and the simpler two-stage quantization method suggested here.
- See "Estimating or Propagating Gradients Through Stochastic Neurons" by Bengio et al., discussing straight-through estimation and some other alternatives.
- The paper argues that the length of binary codes cannot be longer than 32 bits because longer codes are not suitable for document hashing. This is not quite right given multi-probe hashing mechanisms; for example see "Multi-index Hashing" by Norouzi et al.
- See "Hashing for Similarity Search: A Survey" by Wang et al. for a survey of related work on binary hashing and quantization. You seem to ignore the extensive work done on binary hashing.
ICLR
1. What is the main contribution of the paper regarding learning short binary codes?
2. What are the strengths and weaknesses of the proposed approach compared to semantic hashing?
3. How does the reviewer assess the clarity and completeness of the paper's content?
4. What are the suggestions for improving the paper's comparisons and fairness in evaluations?
5. What are the minor errors or typos noticed by the reviewer in the paper?
Review
Review This work proposes a model that can learn short binary codes via paragraph vectors to allow fast retrieval of documents. The experiments show that this is superior to semantic hashing. The approach is simple and not very technically interesting. For a code size of 128, the loss compared to a continuous paragraph vector seems moderate. The paper asks the reader to refer to the Salakhutdinov and Hinton paper for the baseline numbers but I think they should be placed in the paper for easy reference. For simplicity, the paper could show the precision at 12.5%, 25% and 50% recall for the proposed model and semantic hashing. It also seems that the semantic hashing paper shows results on RCV2 and not RCV1. RCV1 is twice the size of RCV2 and is English only so it seems that these results are not comparable. It would be interesting to see how many binary bits are required to match the performance of the continuous representation. A comparison to the continuous PV-DBOW trained with bigrams would also make it a more fair comparison. Figure 7 in the paper shows a loss from using the real-binary PV-DBOW. It seems that if a user needed high quality ranking after the retrieval stage and they could afford the extra space and computation, then it would be better for them to use a standard PV-DBOW to obtain the continuous representation at that stage. Minor comments: First line after the introduction: is sheer -> is the sheer 4th line from the bottom of P1: words embeddings -> word embeddings In table 1: What does code size refer to for PV-DBOW? Is this the number of elements in the continuous vector? 5th line from the bottom of P5: W -> We 5th line after section 3.1: covers wide -> covers a wide
ICLR
Title CAREER: Transfer Learning for Economic Prediction of Labor Data Abstract Labor economists regularly analyze employment data by fitting predictive models to small, carefully constructed longitudinal survey datasets. Although modern machine learning methods offer promise for such problems, these survey datasets are too small to take advantage of them. In recent years large datasets of online resumes have also become available, providing data about the career trajectories of millions of individuals. However, standard econometric models cannot take advantage of their scale or incorporate them into the analysis of survey data. To this end we develop CAREER, a transformer-based model that uses transfer learning to learn representations of job sequences. CAREER is first fit to large, passivelycollected resume data and then fine-tuned to smaller, better-curated datasets for economic inferences. We fit CAREER to a dataset of 24 million job sequences from resumes, and fine-tune its representations on longitudinal survey datasets. We find that CAREER forms accurate predictions of job sequences, achieving state-of-the-art predictive performance on three widely-used economics datasets. We further find that CAREER can be used to form good predictions of other downstream variables; incorporating CAREER into a wage model provides better predictions than the econometric models currently in use. 1 INTRODUCTION A variety of economic analyses rely on models for predicting an individual’s future occupations. These models are crucial for estimating important economic quantities, such as gender or racial differences in unemployment (Hall, 1972; Fairlie & Sundstrom, 1999); they underpin causal analyses and decompositions that rely on simulating counterfactual occupations for individuals (Brown et al., 1980; Schubert et al., 2021); and they inform policy, by forecasting occupations with rising or declining market shares. These analyses typically involve fitting predictive models to longitudinal surveys that follow a cohort of individuals during their working career (Panel Study of Income Dynamics, 2021; Bureau of Labor Statistics, 2019a). Such surveys have been carefully collected to represent national demographics, ensuring that the economic analyses can generalize to larger populations. But these datasets are also small, usually containing only thousands of workers, because maintaining them requires regularly interviewing each individual. Consequently, economists use simple sequential models, where a worker’s next occupation depends on their history only through the most recent occupation (Hall, 1972) or a few summary statistics about the past (Blau & Riphahn, 1999). In recent years, however, much larger datasets of online resumes have also become available. In contrast to longitudinal surveys, these passively-collected datasets are not typically used directly for economic inferences because they contain noisy observations and they are missing important economic variables such as demographics and wage. However, they provide occupation sequences of millions of individuals, potentially expanding the scope of insights that can be obtained from analyses on downstream survey datasets. The simple econometric models currently in use cannot incorporate the complex patterns embedded in these larger datasets into the analysis of survey data. To this end, we develop CAREER, a neural sequence model of occupation trajectories. 
CAREER is designed to be pretrained on large-scale resume data and then fine-tuned to small and bettercurated survey data for economic prediction. Its architecture is based on the transformer language model (Vaswani et al., 2017), for which pretraining and fine-tuning has proven to be an effective paradigm for many NLP tasks (Devlin et al., 2019; Lewis et al., 2019). CAREER extends this transformer-based transfer learning approach to modeling sequences of occupations, rather than text. We will show that CAREER’s representations provide effective predictions of occupations on survey datasets used for economic analysis, and can be used as inputs to economic models for other downstream applications. To study this model empirically, we pretrain CAREER on a dataset of 24 million resumes provided by Zippia, a career planning company. We then fine-tune CAREER’s representations of job sequences to make predictions on three widely-used economic datasets: the National Longitudinal Survey of Youth 1979 (NLSY79), another cohort from the same survey (NLSY97), and the Panel Study of Income Dynamics (PSID). In contrast to resume data, these well-curated datasets are representative of the larger population. It is with these survey datasets that economists make inferences, ensuring their analyses generalize. In this study, we find that CAREER outperforms standard econometric models for predicting and forecasting occupations, achieving state-of-the-art performance on the three widely-used survey datasets. We further find that CAREER can be used to form good predictions of other downstream variables; incorporating CAREER into a wage model provides better predictions than the econometric models currently in use. We release code so that practitioners can train CAREER on their own datasets. In summary, we demonstrate that CAREER can leverage large-scale resume data to make accurate predictions on important datasets from economics. Thus CAREER ties together economic models for understanding career trajectories with transformer-based methods for transfer learning. (See Section 3 for details of related work.) A flexible predictive model like CAREER expands the scope of analyses that can be performed by economists and policy-makers. 2 CAREER Given an individual’s career history, what is the probability distribution of their occupation in the next timestep? We go over a class of models for predicting occupations before introducing CAREER, one such model based on transformers and transfer learning. 2.1 OCCUPATION MODELS Consider an individual worker. This person’s career can be defined as a series of timesteps. Here, we use a timestep of one year. At each timestep, this individual works in a job: it could be the same job as the previous timestep, or a different job. (Note we use the terms “occupation” and “job” synonymously.) We consider “unemployed” and “out-of-labor-force” to be special types of jobs. Define an occupation model to be a probability distribution over sequences of jobs. An occupation model predicts a worker’s job at each timestep as a function of all previous jobs and other observed characteristics of the worker. More formally, define an individual’s career to be a sequence (y1, . . . , yT ), where each yt ∈ {1, . . . , J} indexes one of J occupations at time t. Occupations are categorical; one example of a sequence could be (“cashier”, “salesperson”, ... , “sales manager”). At each timestep, an individual is also associated with C observed covariates xt = {xtc}Cc=1. 
Covariates are also categorical, with $x_{tc} \in \{1, \dots, N_c\}$. For example, if $c$ corresponds to the most recent educational degree, $x_{tc}$ could be “high school diploma” or “bachelors”, and $N_c$ is the number of types of educational degrees. (Some covariates may not evolve over time; we encode them as time-varying without loss of generality.) Define $\mathbf{y}_t = (y_1, \dots, y_t)$ to index all jobs that have occurred up to time $t$, with the analogous definition for $\mathbf{x}_t$. At each timestep, an occupation model predicts an individual’s job in the next timestep, $p(y_t \mid \mathbf{y}_{t-1}, \mathbf{x}_t)$. This distribution conditions on covariates from the same timestep because these are “pre-transition.” For example, an individual’s most recent educational degree is available to the model as it predicts their next job. Note that an occupation model is a predictive rather than structural model. The model does not incorporate unobserved characteristics, like skill, when making predictions. Instead, it implicitly marginalizes over these unobserved variables, incorporating them into its predictive distribution. 2.2 REPRESENTATION-BASED TWO-STAGE MODELS An occupation model’s predictions are governed by an individual’s career history; both whether an individual changes jobs and the specific job they may transition to depend on current and previous jobs and covariates. We consider a class of occupation models that make predictions by conditioning on a low-dimensional representation of work history, $h_t(\mathbf{y}_{t-1}, \mathbf{x}_t) \in \mathbb{R}^D$. This representation is assumed to be a sufficient statistic of the past; $h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$ should contain the relevant observed information for predicting the next job. Since individuals frequently stay in the same job between timesteps, we propose a class of models that make predictions in two stages. These models first predict whether an individual changes jobs, after which they predict the specific job to which an individual transitions. The representation is used in both stages. In the first stage, the career representation $h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$ is used to predict whether an individual changes jobs. Define the binary variable $s_t$ to be 1 if a worker’s job at time $t$ is different from that at time $t-1$, and 0 otherwise. The first stage is a logistic regression,

$$s_t \mid \mathbf{y}_{t-1}, \mathbf{x}_t \sim \mathrm{Bernoulli}\big(\sigma(\eta \cdot h_t(\mathbf{y}_{t-1}, \mathbf{x}_t))\big), \qquad (1)$$

where $\sigma(\cdot)$ is the logistic function and $\eta \in \mathbb{R}^D$ is a vector of coefficients. If the model predicts that an individual will transition jobs, it only considers jobs that are different from the individual’s most recent job. To formulate this prediction, it combines the career representation with a vector of occupation-specific coefficients $\beta_j \in \mathbb{R}^D$:

$$p(y_t = j \mid \mathbf{y}_{t-1}, \mathbf{x}_t, s_t = 1) = \frac{\exp\{\beta_j \cdot h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)\}}{\sum_{j' \neq y_{t-1}} \exp\{\beta_{j'} \cdot h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)\}}. \qquad (2)$$

Otherwise, the next job is deterministic:

$$p(y_t = j \mid \mathbf{y}_{t-1}, \mathbf{x}_t, s_t = 0) = \delta_{j = y_{t-1}}. \qquad (3)$$

Two-stage prediction improves the accuracy of occupation models. Moreover, many analyses of occupational mobility focus on whether workers transition jobs rather than the specific job they transition to (Kambourov & Manovskii, 2008). By separating the mechanism by which a worker either keeps or changes jobs ($\eta$) and the specific job they may transition to ($\beta_j$), two-stage models are more interpretable for studying occupational change. Equations 1 to 3 define a two-stage representation-based occupation model; a small numerical sketch of this predictive distribution is given below. In the next section, we introduce CAREER, one such model based on transformers.
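For concreteness, the two-stage predictive distribution of Equations 1 to 3 can be sketched as follows (our illustration, assuming NumPy; names are ours, and in the actual model the coefficients and the representation are learned jointly by SGD):

```python
# Sketch of the two-stage occupation distribution (Equations 1-3), assuming NumPy.
# h: career representation (D,), eta: (D,), beta: (J, D), prev_job: int in [0, J).
import numpy as np

def two_stage_softmax(h, eta, beta, prev_job):
    # Stage 1: probability that the worker changes jobs (Equation 1).
    p_move = 1.0 / (1.0 + np.exp(-eta @ h))
    # Stage 2: softmax over all jobs except the most recent one (Equation 2).
    logits = beta @ h
    logits[prev_job] = -np.inf          # exclude the current job
    logits -= logits.max()              # numerical stability
    probs_move = np.exp(logits)
    probs_move /= probs_move.sum()
    # Marginalize over the stay/move indicator s_t; Equation 3 handles "stay".
    p = p_move * probs_move
    p[prev_job] = 1.0 - p_move
    return p                            # p[j] = p(y_t = j | y_{t-1}, x_t)
```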
2.3 CAREER MODEL We develop a two-stage representation-based occupation model called CAREER. (CAREER is short for “Contextual Attention-based Representations of Employment Encoded from Resumes.”) This model uses a transformer to parameterize a representation of an individual’s history. This representation is pretrained on a large resumes dataset and fine-tuned to make predictions on small survey datasets. Transformers. A transformer is a sequence model that uses neural networks to learn representations of discrete tokens (Vaswani et al., 2017). Transformers were originally developed for natural language processing (NLP), to predict words in a sentence. Transformers are able to model complex dependencies between words, and they are a critical component of modern NLP systems including language modeling (Radford et al., 2019) and machine translation (Ott et al., 2018). CAREER is an occupation model that uses a transformer to parameterize a low-dimensional representation of careers. While transformers were developed to model sequences of words, CAREER uses a transformer to model sequences of jobs. The transformer enables the model to represent complex career trajectories. CAREER is similar to the transformers used in NLP, but with two modifications. First, as described in Section 2.2, the model makes predictions in two stages, making it better-suited to model workers who stay in the same job through consecutive timesteps. (In contrast, words seldom repeat.) Second, while language models only condition on previous words, each career is also associated with covariates $\mathbf{x}$ that may affect transition distributions (see Equation 2). We adapt the transformer to these two changes. Parameterization. CAREER’s computation graph is depicted in Figure 1. Note that in this section we provide a simplified description of the ideas underlying the transformer. Appendix E contains a full description of the model. CAREER iteratively builds a representation of career history, $h_t(\mathbf{y}_{t-1}, \mathbf{x}_t) \in \mathbb{R}^D$, using a stack of $L$ layers. Each layer applies a series of computations to the previous layer’s output to produce its own layer-specific representation. The first layer’s representation, $h^{(1)}_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$, considers only the most recent job and covariates. At each subsequent layer $\ell$, the transformer forms a representation $h^{(\ell)}_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$ by combining the representation of the most recent job with those of preceding jobs. Representations become increasingly complex at each layer, and the final layer’s representation, $h^{(L)}_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$, is used to make predictions following Equations 1 to 3. We drop the explicit dependence on $\mathbf{y}_{t-1}$ and $\mathbf{x}_t$ going forward, and instead denote each layer’s representation as $h^{(\ell)}_t$. The first layer’s representation combines the previous job, the most recent covariates, and the position of the job in the career. It first embeds each of these variables in $D$-dimensional space. Define an embedding function for occupations, $e_y : [J] \to \mathbb{R}^D$. Additionally, define a separate embedding function for each covariate, $\{e_c\}_{c=1}^{C}$, with each $e_c : [N_c] \to \mathbb{R}^D$. Finally, define $e_t : [T] \to \mathbb{R}^D$ to embed the position of the sequence, where $T$ denotes the number of possible sequence lengths. The first-layer representation $h^{(1)}_t$ sums these embeddings:

$$h^{(1)}_t = e_y(y_{t-1}) + \sum_{c} e_c(x_{tc}) + e_t(t). \qquad (4)$$
For each subsequent layer ℓ, the transformer combines representations of the most recent job with those of the preceding jobs and passes them through a neural network:

π_{t,t′}^{(ℓ)} ∝ exp{(h_t^{(ℓ)})ᵀ W^{(ℓ)} h_{t′}^{(ℓ)}}   for all t′ ≤ t   (5)

h̃_t^{(ℓ)} = h_t^{(ℓ)} + ∑_{t′=1}^{t} π_{t,t′}^{(ℓ)} h_{t′}^{(ℓ)}   (6)

h_t^{(ℓ+1)} = FFN^{(ℓ)}(h̃_t^{(ℓ)}),   (7)

where W^{(ℓ)} ∈ R^{D×D} is a model parameter and FFN^{(ℓ)} is a two-layer feedforward neural network specific to layer ℓ, with FFN^{(ℓ)} : R^D → R^D. The weights {π_{t,t′}^{(ℓ)}} are referred to as attention weights, and they are determined by the career representations and W^{(ℓ)}. The attention weights are non-negative and normalized to sum to 1. The matrix W^{(ℓ)} can be interpreted as a similarity matrix; if W^{(ℓ)} is the identity matrix, occupations t and t′ that have similar representations will have large attention weights, and thus t′ would contribute more to the weighted average in Equation 6. Conversely, if W^{(ℓ)} is the negative identity matrix, occupations that have differing representations will have large attention weights.³ The final computation of each layer involves passing the intermediate representation h̃_t^{(ℓ)} through a neural network, which ensures that representations capture complex nonlinear interactions.

³In practice, transformers use multiple attention weights to perform multi-headed attention (Appendix E).

The computations in Equations 5 to 7 are repeated for each of the L layers. The last layer's representation is used to predict the next job:

p(y_t | y_{t−1}, x_t) = two-stage-softmax(h_t^{(L)}; η, β),   (8)

where "two-stage-softmax" refers to the operation in Equations 1 to 3, parameterized by η and β. All of CAREER's parameters – including the embedding functions, similarity matrices, feedforward neural networks, and regression coefficients η and β – are estimated by maximizing the likelihood in Equation 8 with stochastic gradient descent (SGD), marginalizing out the variable s_t.

Transfer learning. Economists apply occupation models to survey datasets that have been carefully collected to represent national demographics. In the United States, these datasets contain a small number of individuals. While transformers have been successfully applied to large NLP datasets, they are prone to overfitting on small datasets (Kaplan et al., 2020; Dosovitskiy et al., 2021; Variš & Bojar, 2021). As such, CAREER may not learn useful representations solely from small survey datasets.

In recent years, however, much larger datasets of online resumes have also become available. Although these passively-collected datasets provide job sequences of many more individuals, they are not used for economic estimation for a few reasons. The occupation sequences from resumes are imputed from short textual descriptions, a process that inevitably introduces more noise and errors than collecting data from detailed questionnaires. Additionally, individuals may not accurately list their work experiences on resumes (Wexler, 2006), and important economic variables relating to demographics and wage are not available. Finally, these datasets are not constructed to ensure that they are representative of the general population.

Between these two types of data is a tension. On the one hand, resume data is large-scale and contains valuable information about employment patterns. On the other hand, survey datasets are carefully collected, designed to help make economic inferences that are robust and generalizable. Thus CAREER incorporates the patterns embedded in large-scale resume data into the analysis of survey datasets.
It does this through transfer learning: CAREER is first pretrained on a large dataset of resumes to learn an initial representation of careers. When CAREER is then fit to a small survey dataset, parameters are not initialized randomly; instead, they are initialized with the representations learned from resumes. After initialization, all parameters are fine-tuned on the small dataset by optimizing the likelihood. Because the objective function is non-convex, learned representations depend on their initial values. Initializing with the pretrained representations ensures that the model does not need to re-learn representations on the small dataset. Instead, it only adjusts representations to account for dataset differences.

This transfer learning approach takes inspiration from similar methods in NLP, such as BERT and the GPT family of models (Devlin et al., 2019; Radford et al., 2018). These methods pretrain transformers on large corpora, such as unpublished books or Wikipedia, and fine-tune them to make predictions on small datasets such as movie reviews. Our approach is analogous. Although the resumes dataset may not be representative or carefully curated, it contains many more job sequences than most survey datasets. This volume enables CAREER to learn representations that transfer to survey datasets.

3 RELATED WORK

Many economic analyses use log-linear models to predict jobs in survey datasets (Boskin, 1974; Schmidt & Strauss, 1975). These models typically use small state spaces consisting of only a few occupation categories. For example, some studies categorize occupations into broad skill groups (Keane & Wolpin, 1997; Cortes, 2016); unemployment analyses only consider employment status (employed, unemployed, and out-of-labor-force) (Hall, 1972; Lauerova & Terrell, 2007); and researchers studying occupational mobility only consider occupational change, a binary variable indicating whether an individual changes jobs (Kambourov & Manovskii, 2008; Guvenen et al., 2020). Although transitions between occupations may depend richly on history, many of these models condition on only the most recent job and a few manually constructed summary statistics about history to make predictions (Hall, 1972; Blau & Riphahn, 1999). In contrast to these methods, CAREER is nonlinear and conditions on every job in an individual's history. The model learns complex representations of careers without relying on manually constructed features. Moreover, CAREER can effectively predict from among hundreds of occupations.

Recently, the proliferation of business networking platforms has resulted in the availability of large resume datasets. Schubert et al. (2021) use a large resume dataset to construct a first-order Markov model of job transitions; CAREER, which conditions on all jobs in a history, makes more accurate predictions than a Markov model. Models developed in the data mining community rely on resume-specific features such as stock prices (Xu et al., 2018), worker skill (Ghosh et al., 2020), network information (Meng et al., 2019; Zhang et al., 2021), and textual descriptions (He et al., 2021), and are not applicable to survey datasets, which are the focus of this paper (other models reduce to a first-order Markov model without these features (Dave et al., 2018; Zhang et al., 2020)).
The most suitable model for survey datasets from this line of work is NEMO, an LSTM-based model that is trained on large resume datasets (Li et al., 2017). Our experiments demonstrate that CAREER outperforms NEMO when it is adapted to model survey datasets. Recent works in econometrics have applied machine learning methods to sequences of jobs and other discrete data. Ruiz et al. (2020) develop a matrix factorization method called SHOPPER to model supermarket basket data. We consider a baseline “bag-of-jobs” model similar to SHOPPER. Like the transformer-based model, the bag-of-jobs model conditions on every job in an individual’s history, but it uses relatively simple representations of careers. Our empirical studies demonstrate that CAREER learns complex representations that are better at modeling job sequences. Rajkumar et al. (2021) build on SHOPPER and propose a Bayesian factorization method for predicting job transitions. Similar to CAREER, they predict jobs in two stages. However, their method is focused on modeling individual transitions, so it only conditions on the most recent job in an individual’s history. In our empirical studies, we show that models like CAREER that condition on every job in an individual’s history form more accurate predictions than Markov models. CAREER is based on a transformer, a successful model for representing sequences of words in natural language processing (NLP). In econometrics, transformers have been applied to the text of job descriptions to predict their salaries (Bana, 2021) or authenticity (Naudé et al., 2022); rather than modeling text, we use transformers to model sequences of occupations. Transformers have also been applied successfully to sequences other than text: images (Dosovitskiy et al., 2021), music (Huang et al., 2019), and molecular chemistry (Schwaller et al., 2019). Inspired by their success in modeling a variety of complex discrete sequential distributions, this paper adapts transformers to modeling sequences of jobs. Transformers are especially adept at learning transferrable representations of text from large corpora (Radford et al., 2018; Devlin et al., 2019). We show that CAREER learns representations of job sequences that can be transferred from noisy resume datasets to smaller, wellcurated administrative datasets. 4 EMPIRICAL STUDIES We assess CAREER’s ability to predict jobs and provide useful representations of careers. We pretrain CAREER on a large dataset of resumes, and transfer these representations to small, commonly used survey datasets. With the transferred representations, the model is better than econometric baselines at both held-out prediction and forecasting. Additionally, we demonstrate that CAREER’s representations can be incorporated into standard wage prediction models to make better predictions. Resume pretraining. We pretrain CAREER on a large dataset of resumes provided by Zippia Inc., a career planning company. This dataset contains resumes from 23.7 million working Americans. Each job is encoded into one of 330 occupational codes, using the coding scheme of Autor & Dorn (2013). We transform resumes into sequences of jobs by including an occupation’s code for each year in the resume. For years with multiple jobs, we take the job the individual spent the most time in. We include three covariates: the year each job in an individual’s career took place, along with the individual’s state of residence and most recent educational degree. We denote missing covariates with a special token. 
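To illustrate the preprocessing described above, here is a rough sketch of how a resume's dated job spells could be turned into one occupation code per year, keeping the job with the most recorded time in years with several jobs. The record format and function name are hypothetical; the actual pipeline (including ties and missing years) is described in Appendix H.

```python
from collections import defaultdict

def resume_to_sequence(spells):
    """spells: list of (occ_code, start_year, end_year, months_worked) tuples.
    Returns a list of (year, occ_code), one entry per observed year of the career."""
    time_per_year = defaultdict(dict)  # year -> {occ_code: total months}
    for occ, start, end, months in spells:
        for year in range(start, end + 1):
            time_per_year[year][occ] = time_per_year[year].get(occ, 0) + months

    sequence = []
    for year in sorted(time_per_year):
        jobs = time_per_year[year]
        # Keep the occupation with the most recorded time in that year.
        sequence.append((year, max(jobs, key=jobs.get)))
    return sequence
```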
See Appendix F for an exploratory data analysis of the resume data. CAREER uses a 12-layer transformer with 5.6 million parameters. Pretraining CAREER on the resumes data takes 18 hours on a single GPU. Although our focus is on fine-tuning CAREER to model survey datasets rather than resumes, CAREER also outperforms standard econometric baselines for modeling resumes; see Appendix B for more details. Survey datasets. We transfer CAREER to three widely-used survey datasets: two cohorts from the National Longitudinal Survey of Youth (NLSY79 and NLSY97) and the Panel Study of Income Dynamics (PSID). These datasets have been carefully constructed to be representative of the general population, and they are widely used by economists for estimating important quantities. NLSY79 is a longitudinal panel survey following a cohort of Americans who were between 14 and 22 when the survey began in 1979, while NLSY97 follows a different cohort of individuals who were between 12 and 17 when the survey began in 1997. PSID is a longitudinal survey following a sample of American families, with individuals added over the years. Compared to the resumes dataset, these survey datasets are small: we use slices of NLSY79, NLSY97, and PSID that contain 12 thousand, 9 thousand, and 12 thousand individuals, respectively. The distribution of job sequences in resumes differs in meaningful ways from those in the survey datasets; for example, manual laborers are under-represented and college graduates are overrepresented in resume data (see Appendix F for more details). We pretrain CAREER on the large resumes dataset and fine-tune on the smaller survey datasets. The fine-tuning process is efficient; although CAREER has 5.6 million parameters, fine-tuning on one GPU takes 13 minutes on NLSY79, 7 minutes on NLSY97, and 23 minutes on PSID. We compare CAREER to several baseline models: a second-order linear regression with covariates and hand-constructed summary statistics about past employment (a common econometric model used to analyze these survey datasets – see Section 3); a bag-of-jobs model inspired by SHOPPER (Ruiz et al., 2020) that conditions on all jobs and covariates in a history but combines representations linearly; and several baselines developed in the data-mining community for modeling worker profiles: NEMO (Li et al., 2017), job representation learning (Dave et al., 2018), and Job2Vec (Zhang et al., 2020). As described in Section 3, the baselines developed in the data-mining community for modeling worker profiles cannot be applied directly to economic survey datasets and thus require modifications, described in detail in Appendix I. We also compare to two additional versions of CAREER — one without pretraining or two-stage prediction, the other only without two-stage prediction — to assess the sources of CAREER’s improvements. All models use the covariates we included for resume pretraining, in addition to demographic covariates (which are recorded for the survey datasets but are unavailable for resumes). We divide all survey datasets into 70/10/20 train/validation/test splits, and train all models by optimizing the log-likelihood with Adam (Kingma & Ba, 2015). We evaluate the predictive performance of each model by computing held-out perplexity, a common metric in NLP for evaluating probabilistic sequence models. The perplexity of a sequence model p on a sequence y1, . . . 
, y_T is exp{−(1/T) ∑_{t=1}^{T} log p(y_t | y_{t−1}, x_t)}.

Figure 2: Prediction results on longitudinal survey datasets and scaling law. (a) Test perplexity on survey datasets. Results are averaged over three random seeds. CAREER (vanilla) includes covariates but not two-stage prediction or pretraining; CAREER (two-stage) adds two-stage prediction. (b) CAREER's scaling law on NLSY79 as a function of pretraining data volume.
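As a concrete reading of the perplexity formula above, here is a minimal sketch of how held-out perplexity could be computed from a model's per-transition log-probabilities; the function is our own illustration, not part of CAREER's code.

```python
import numpy as np

def perplexity(log_probs):
    """log_probs: array of log p(y_t | y_{t-1}, x_t) over all held-out transitions.
    A model that always assigns probability 1/K to the true job has perplexity K."""
    return float(np.exp(-np.mean(log_probs)))
```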
Perplexity is a monotonic transformation of log-likelihood; better predictive models have lower perplexities. We train all models to convergence and use the checkpoint with the best validation perplexity. See Appendix I for more experimental details.

Figure 2a compares the test-set perplexity of each model. With the transferred representations, CAREER makes the best predictions on all survey datasets, achieving state-of-the-art performance. The baselines developed in the data mining literature, which were designed to model large resume datasets while relying on resume-specific features, struggle to make good predictions on these small survey datasets, performing on par with standard econometric baselines. Pretraining is the biggest source of CAREER's improvements. Although the resume data is noisy and differs in many ways from the survey datasets used for economic prediction, CAREER learns useful representations of work experiences that aid its predictive performance. In Appendix G we show that modifying the baselines to incorporate two-stage prediction (Equations 1 to 3) improves their performance, although CAREER still makes the best predictions across datasets. We include qualitative analysis of CAREER's predictions in Appendix D.

To assess how the volume of resumes used for pretraining affects CAREER's predictions on survey datasets, we downsample the resume dataset and transfer to survey datasets. The scaling law for NLSY79 is depicted in Figure 2b. When there are fewer than 20,000 examples in the resume dataset, pretraining CAREER does not offer any improvement. The relationship between pretraining volume and fine-tuned perplexity follows a power law, similar to scaling laws in NLP (Kaplan et al., 2020).

We also assess CAREER's ability to forecast future career trajectories. In contrast to predicting held-out sequences, forecasting involves training models on all sequences before a specific year. To predict future jobs for an individual, the fitted model is used to estimate job probabilities six years into the future by sampling multi-year trajectories. This setting is useful for assessing a model's ability to make long-term predictions, especially as occupational trends change over time. We evaluate CAREER's forecasting abilities on NLSY97 and PSID. (These datasets are more valuable for forecasting than NLSY79, which follows a cohort that is near or past retirement age.) We train models on all sequences (holding out 10% as a validation set), without including any observations after 2014. When pretraining CAREER on resumes, we also make sure to only include examples up to 2014. Table 1 compares the forecasting performance of all models. CAREER makes the best overall forecasts. CAREER has a significant advantage over baselines at making long-term forecasts, yielding a 17% advantage over the best baseline for 6-year forecasts on NLSY97. Again, the baselines developed for resume data mining, which were built to model much larger corpora, struggle to make good predictions on these smaller survey datasets.

Downstream applications. In addition to forming job predictions, CAREER learns low-dimensional representations of job histories. Although these representations were formed to predict jobs in a sequence, they can also be used as inputs to economic models for downstream applications. As an example of how CAREER's representations can be incorporated into other economic models, we use CAREER to predict wages.
Economists build wage prediction models in order to estimate important economic quantities, such as the adjusted gender wage gap. For example, to estimate this wage gap, Blau & Kahn (2017a) regress an individual’s log-wage on observable characteristics such as education, demographics, and current occupation for six different years on PSID. Rather than including the full, high-dimensional job-history, the model summarizes an individual’s career with summary statistics such as full-time and part-time years of experience (and their squares). We incorporate CAREER’s representation into the wage regression by adding the fitted representation for an individual’s job history, ĥi. For log-wage wi and observed covariates xi, we regress wi ∼ α+ θ>xi + γ>ĥi, (9) where α, θ, and γ are regression coefficients. We pretrain CAREER to predict jobs on resumes, and for each year we fine-tune on job sequences of the cohort up to that year. For example, in the 1999 wage regression, we fine-tune CAREER only on the sequences of jobs until 1999 and plug in the fixed representation to the wage regression. We do not include any covariates (except year) when training CAREER. We run each wage regression on 80% of the training data and evaluate mean-square error on the remaining 20% (averaging over 10 random splits). Table 2 shows that adding CAREER’s representations improves wage predictions for each year. Although these representations are fine-tuned to predict jobs on a small dataset (each year contains less than 5,000 workers) and are not adjusted to account for wage, they contain information that is predictive of wage. By summarizing complex career histories with a low-dimensional representation, CAREER provides representations that can improve downstream economic models, resulting in more accurate estimates of important economic quantities. 5 CONCLUSION We introduced CAREER, a method for representing job sequences from large-scale resume data and fine-tuning them on smaller datasets of interest. We took inspiration from modern language modeling to develop a transformer-based occupation model. We transferred the model from a large dataset of resumes to smaller survey datasets in economics, where it achieved state-of-the-art performance for predicting and forecasting career outcomes. We demonstrated that CAREER’s representations can be incorporated into wage prediction models, outperforming standard econometric models. One direction of future research is to incorporate CAREER’s representations of job history into methods for estimating adjusted quantities, like wage gaps. Underlying these methods are models that predict economic outcomes as a function of observed covariates. However, if relevant variables are omitted, the adjusted estimates may be affected; e.g., excluding work experience from wage prediction may change the magnitude of the estimated gap. In practice, economists include handdesigned summary statistics to overcome this problem, such as in Blau & Kahn (2017a). CAREER provides a data-driven way to incorporate such variables—its representations of job history could be incorporated into downstream prediction models and lead to more accurate adjustments of economic quantities. Ethics statement. As discussed, passively-collected resume datasets are not curated to represent national demographics. Pretraining CAREER on these datasets may result in representations that are affected by sampling bias. 
Although these representations are fine-tuned on survey datasets that are carefully constructed to represent national demographics, the biases from pretraining may propagate through fine-tuning (Ravfogel et al., 2020; Jin et al., 2021). Moreover, even in representative datasets, models may form more accurate predictions for majority groups due to data volume (Dwork et al., 2018). Thus, we encourage practitioners to audit noisy resume data, re-weight samples as necessary (Kalton, 1983), and review accuracy within demographics before using the model for downstream economic analysis. Although resume datasets may contain personally identifiable information, all personally identifiable information had been removed before we were given access to the resume dataset we used for pretraining. Additionally, none of the longitudinal survey datasets contain personally identifiable information. Reproducibility statement. The supplementary material contains code for reproducing the experimental results in this paper, with the README containing detailed instructions for reproducing specific experiments. Our data-use agreement prohibits us from releasing the dataset of resumes used for pretraining. However, similar (private) resume datasets have become increasingly common in applied economics analyses (Azar et al., 2020; Schubert et al., 2021), and we include pretraining code so practitioners can reproduce our results with resume datasets they have access to. Additionally, all longitudinal survey datasets are available publicly online (Bureau of Labor Statistics, 2019a;b; Panel Study of Income Dynamics, 2021). A ECONOMETRIC BASELINES In this section, we describe baseline occupation models that economists have used to model jobs and other discrete sequences. Markov models and regression. A first-order Markov model assumes the job at each timestep depends on only the previous job (Hall, 1972; Poterba & Summers, 1986). Without covariates, a Markov model takes the form p(yt = j|yt−1) = p(yt = j|yt−1). The optimal transition probabilities reflect the overall frequencies of individuals transitioning from occupation yt−1 to occupation j. In a second-order Markov model, the next job depends on the previous two. A multinomial logistic regression can be used to incorporate covariates: p(yt = j|yt−1,xt) ∝ exp { β (0) j + β (1) j · yt−1 + ∑ c β (c) j · xtc } , (10) where β(0)j is an occupation-specific intercept and yt−1 and xtc denote J- and Nc-dimensional indicator vectors, respectively. Equation 10 depends on history only through the most recent job, although the covariates can also include hand-crafted summary statistics about the past, such as the duration of the most recent job (McCall, 1990). This model is fit by maximizing the likelihood with gradient-based methods. Bag-of-jobs. A weakness of the first-order Markov model is that it only uses the most recent job to make predictions. However, one’s working history beyond the last job may inform future transitions (Blau & Riphahn, 1999; Neal, 1999). Another baseline we consider is a bag-of-jobs model, inspired by SHOPPER, a probabilistic model of consumer choice (Ruiz et al., 2020). Unlike the Markov and regression models, the bag-of-jobs model conditions on every job in an individual’s history. It does so by learning a low-dimensional representation of an individual’s history. 
This model learns a unique embedding for each occupation, similar to a word embedding (Bengio et al., 2003; Mikolov et al., 2013); unlike CAREER, which learns complicated nonlinear interactions between jobs in a history, the bag-of-jobs model combines jobs into a single representation by averaging their embeddings. The bag-of-jobs model assumes that job transitions depend on two terms: a term that captures the effect of the most recent job, and a term that captures the effect of all prior jobs. Accordingly, the model learns two types of representations: an embedding αj ∈ RD of the most recent job j, and an embedding ρj′ ∈ RD for prior jobs j′. To combine the representations for all prior jobs into a single term, the model averages embeddings: p(yt = j|yt−1) ∝ exp { β (1) j · αyt−1 + β (2) j · ( 1 t−2 ∑t−2 t′=1 ρyt′ )} . (11) Covariates can be added to the model analogously; for a single covariate, its most recent value is embedded and summed with the average embeddings for its prior values. All parameters are estimated by maximizing the likelihood in Equation 11 with SGD. B RESUME PREDICTIONS Although our focus is on modeling survey datasets, we also compare CAREER to several econometric baselines for predicting job sequences in resumes. We consider a series of models without covariates: a first- and second-order Markov model, a bag-of-jobs model (Equation 11), and a transformer with the same architecture as CAREER except without covariates. We also compare to econometric models that use covariates: a second-order linear regression with covariates and hand-constructed features (such as how long an individual has worked in their current job), and a bag-of-jobs model with covariates (Appendix I has more details). We randomly divide the resumes dataset into a training set of 23.6 million sequences, and a validation and test set of 23 thousand sequences each. Table 3 compares the test-set predictive performance of all models. CAREER is the best at predicting held-out sequences. To understand the types of transitions contributing to CAREER’s predictive advantage, we decompose predictions into three categories: consecutive repeats (when the next job is the same as the previous year’s), nonconsecutive repeats (when the next job is different from the previous year’s, but is the same as one of the prior jobs in the career), and new jobs. CAREER has a clear advantage over the baselines in all three categories, but the biggest improvement comes when predicting jobs that have been repeated non-consecutively. The transformer model is at an advantage over the Markov models for these kinds of predictions because it is able to condition on an individual’s entire working history, while a Markov model is constrained to use only the most recent job (or two). The bag-of-jobs model, which can condition on all jobs in a worker’s history but cannot learn complex interactions between them, outperforms the Markov models but still falls short of CAREER, which can recognize and represent complex career trajectories. In Appendix C, we demonstrate that CAREER is well-equipped at forecasting future trajectories as well. C FORECASTING RESUMES We also perform the forecasting experiment on the large dataset of resumes. Each model is trained on resumes before 2015. To predict occupations for individuals after 2015, a model samples 1,000 trajectories for each individual, and averages probabilities to form a single prediction for each year. For more experimental details, see Appendix I. 
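The forecasting procedure above can be summarized with a short sketch: sample many multi-year trajectories from the fitted model and average the per-year occupation probabilities. The `model.predict_proba` interface and the argument names are hypothetical stand-ins for whatever the fitted occupation model exposes.

```python
import numpy as np

def forecast(model, history, covariates, horizon, num_samples=1000, num_jobs=330):
    """Estimate p(y_{t+k}) for k = 1..horizon by averaging over sampled trajectories."""
    rng = np.random.default_rng(0)
    totals = np.zeros((horizon, num_jobs))
    for _ in range(num_samples):
        traj = list(history)
        for k in range(horizon):
            # Hypothetical model interface: p(next job | past jobs, covariates).
            probs = model.predict_proba(traj, covariates)   # shape (num_jobs,)
            totals[k] += probs
            traj.append(rng.choice(num_jobs, p=probs))      # sample the next job
    return totals / num_samples                             # one distribution per future year
```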
Table 4 depicts the forecasting results for the resumes dataset. Each fitted model is used to forecast occupation probabilities for three years into the future. CAREER makes the best forecasts, both overall and for each individual year. D QUALITATIVE ANALYSIS Rationalizing predictions. Figure 3 shows an example of a held-out career sequence from PSID. CAREER is much likelier than a regression and bag-of-jobs baseline to predict this individual’s next job, biological technician. To understand CAREER’s prediction, we show the model’s rationale, or the jobs in this individual’s history that are sufficient for explaining the model’s prediction. (We adapt the greedy rationalization method from Vafa et al. (2021); refer to Appendix I for more details.) In this example, CAREER only needs three previous jobs to predict biological technician: animal caretaker, engineering technician, and student. The model can combine latent attributes of each job to predict the individual’s next job. Representation similarity. To demonstrate the quality of the learned representations, we use CAREER’s fine-tuned representations on NLSY97 to find pairs of individuals with the most similar career trajectories. Specifically, we compute CAREEER’s representation ht(yt−1,xt) for each individual in NLSY97 who has worked for four years. We then measure the similarity between all pairs by computing the cosine similarity between representations. In order to depict meaningful matches, we only consider pairs of individuals with no overlapping jobs in their histories (otherwise the model would find individuals with the exact same career trajectories). Figure 4 depicts the career histories with the most similar CAREER representations. Although none of these pairs have overlapping jobs, the model learns representations that can identify similar careers. E TRANSFORMER DETAILS In this section, we expand on the simplified description of transformers in Section 2.3 and describe CAREER in full detail. Recall that the model estimates representations in L layers, h (1) t (yt−1,xt), . . . , h (L) t (yt−1,xt), with each representation h (`) t ∈ RD. The final representation h (L) t (yt−1,xt) is used to represent careers. We drop the explicit dependence on yt−1 and xt, and instead denote each representation as h(`)t . The first transformer layer combines the previous occupation, the most recent covariates, and the position of the occupation in the career. It first embeds each of these variables in D-dimensional space. Define an embedding function for occupations, ey : [J ] → RD. Additionally, define a separate embedding function for each covariate, {ec}Cc=1, with each ec : [Nc] → RD. Finally, define et : [T ] → RD to embed the position of the sequence, where T denotes the number of possible sequence lengths. The first-layer representation h(1)t sums these embeddings: h (1) t = ey(yt−1) + ∑ c ec(xtc) + et(t). (12) The occupation- and covariate-specific embeddings, ey and {ec}, are model parameters; the positional embeddings, et, are set in advance to follow a sinusoidal pattern (Vaswani et al., 2017). While these embeddings could also be parameterized, in practice the performance is similar, and using sinusoidal embeddings allows the model to generalize to career sequence lengths unseen in the training data. At each subsequent layer, the transformer combines the representations of all occupations in a history. 
It combines representations by performing multi-headed attention, which is similar to the process described in Section 2.3 albeit with multiple attention weights per layer. Specifically, it uses A specific attention weights, or heads, per layer. The number of heads A should be less than the representation dimension D. (Using A = 1 attention head reduces to the process described in Equations 5 and 6.) The representation dimension D should be divisible by A; denote K = D/A. First, A different sets of attention weights are computed: z (`) a,t,t′ = ( h (`) t )> W (`)a h (`) t′ for t ′ ≤ t πa,t,t′ = exp{za,t,t′}∑ k exp{za,t,k} , (13) where W (`)a ∈ RD×D is a model parameter, specific to attention head a and layer l.4 Each attention head forms a convex combination with all previous representations; to differentiate between attention heads, each representation is transformed by a linear transformation V (`)a ∈ RK×D unique to an attention head, forming b(`)a,t ∈ RK : b (`) a,t = ∑t t′=1 π (`) a,t,t′ ( V (`) a h (`) t′ ) . (14) All attention heads are combined into a single representation by concatenating them into a single vector g(`)t ∈ RD: g (`) t = ( b (`) 1,t, b (`) 2,t, . . . , b (`) A,t ) . (15) To complete the multi-head attention step and form the intermediate representation h̃(`)t , the concatenated representations g(`)t undergo a linear transformation and are summed with the pre-attention representation h(`)t : h̃ (`) t = h (`) t +M (`)g (`) t , (16) with M (`) ∈ RD×D. The intermediate representations h̃(`)t ∈ RD combine the representation at timestep t with those preceding timestep t. Each layer of the transformer concludes by taking a non-linear transformation of the intermediate representations. This non-linear transformation does not depend on any previous representation; it only transforms h̃(`)t . Specifically, h̃ (`) t is passed through a neural network: h (`+1) t = h̃ (`) t + FFN (`) ( h̃ (`) t ) , (17) where FFN(`) denotes a two-layer feedforward neural network with N hidden units, with FFN(`) : RD → RD. We repeat the multi-head attention and feedforward neural network updates above for L layers, using parameters unique to each layer. We represent careers with the last-layer representation, ht(yt−1,xt) = h (L) t (yt−1,xt). For our experiments, we use model specifications similar to the generative pretrained transformer (GPT) architecture (Radford et al., 2018). In particular, we use L = 12 layers, a representation dimension of D = 192, A = 3 attention heads, and N = 768 hidden units and the GELU nonlinearity (Hendrycks & Gimpel, 2016) for all feedforward neural networks. In total, this results in 5.6 million parameters. This model includes a few extra modifications to improve training: we use 0.1 dropout (Srivastava et al., 2014) for the feedforward neural network weights, and 0.1 dropout for the attention weights. Finally, we use layer normalization (Ba et al., 2016) before the updates in Equation 13, after the update in Equation 16, and after the final layer’s neural network update in Equation 17. 4For computational reasons, W (`)a is decomposed into two matrices and scaled by a constant, W (`) a = Q (`) a ( K (`) a )> √ K , with Q(`)a , K (`) a ∈ RD×K . F EXPLORATORY DATA ANALYSIS Table 5 depicts summary statistics of the resume dataset provided by Zippia that is used for pretraining CAREER. Table 6 compares this resume dataset with the longitudinal survey datasets of interest. 
G ONE-STAGE VS TWO-STAGE PREDICTION Table 7 compares the predictive performance of occupation models when they are modified to make predictions in two stages, following Equations 1 to 3. Incorporating two-stage prediction improves the performance of these models compared to Figure 2a; however, CAREER still makes the best predictions on all survey datasets. H DATA PREPROCESSING In this section, we go over the data preprocessing steps we took for each dataset. Resumes. We were given access to a large dataset of resumes of American workers by Zippia, a career planning company. This dataset coded each occupation into one of 1,073 O*NET 2010 Standard Occupational Classification (SOC) categories based on the provided job titles and descriptions in resumes. We dropped all examples with missing SOC codes. Each resume in the dataset we were given contained covariates that had been imputed based off other data in the resume. We considered three covariates: year, most recent educational degree, and location. Education degrees had been encoded into one of eight categories: high school diploma, associate, bachelors, masters, doctorate, certificate, license, and diploma. Location had been encoded into one of 50 states plus Puerto Rico, Washington D.C., and unknown, for when location could not be imputed. Some covariates also had missing entries. When an occupation’s year was missing, we had to drop it from the dataset, because we could not position it in an individual’s career. Whenever another covariate was missing, we replaced it with a special “missing” token. All personally identifiable information had been removed from the dataset. We transformed each resume in the dataset into a sequence of occupations. We included an entry for each year starting from the first year an individual worked to their last year. We included a special “beginning of sequence” token to indicate when each individual’s sequence started. For each year between an individual’s first and last year, we added the occupation they worked in during that year. If an individual worked in multiple occupations in a year, we took the one where the individual spent more time in that year; if they were both the same amount of time in the particular year, we broke ties by adding the occupation that had started earlier in the career. For the experiments predicting future jobs directly on resumes, we added a “no-observed-occupation” token for years where the resume did not list any occupations (we dropped this token when pretraining). Each occupation was associated with the individual’s most recent educational degree, which we treated as a dynamic covariate. The year an occupation took place was also considered a dynamic categorical covariate. We treated location as static. In total, this preprocessing left us with a dataset of 23.7 million resumes, and 245 million individual occupations. In order to transfer representations, we had to slightly modify the resumes dataset for pretraining to encode occupations and covariates into a format compatible with the survey datasets. The survey datasets we used were encoded with the “occ1990dd” occupation code (Autor & Dorn, 2013) rather than with O*NET’s SOC codes, so we converted the SOC codes to occ1990dd codes using a crosswalk posted online by Destin Royer. Even after we manually added a few missing entries to the crosswalks, there were some SOC codes that did not have corresponding occ1990dd’s. 
We gave these tokens special codes that were not used when fine-tuning on the survey datasets (because they did not correspond to occ1990dd occupations). When an individual did not work for a given year, the survey datasets differentiated between three possible states: unemployed, out-of-labor-force, and in-school. The resumes dataset did not have these categories. Thus, we initialized parameters for these three new occupational states randomly. Additionally, we did not include the “no-observedoccupation” token when pretraining, and instead dropped missing years from the sequence. Since we did not use gender and race/ethnicity covariates when pretraining, we also initialized these covariatespecific parameters randomly for fine-tuning. Because we used a version of the survey datasets that encoded each individual’s location as a geographic region rather than as a state, we converted each state in the resumes data to be in one of four regions for pretraining: northeast, northcentral, south, or west. We also added a fifth “other” region for Puerto Rico and for when a state was missing in the original dataset. We also converted educational degrees to levels of experience: we converted associate’s degree to represent some college experience and bachelor’s degree to represent fouryear college experience; we combined masters and doctorate to represent a single “graduate degree” category; and we left the other categories as they were. NLSY79. The National Longitudinal Survey of Youth 1979 (NLSY79) is a survey following individuals born in the United States between 1957-1964. The survey included individuals who were between 14 and 22 years old when they began collecting data in 1979; they interviewed individuals annually until 1994, and biennially thereafter. Each individual in the survey is associated with an ID, allowing us to track their careers over time. We converted occupations, which were initially encoded as OCC codes, into “occ1990dd” codes using a crosswalk (Autor & Dorn, 2013). We use a version of the survey that has entries up to 2014. Unlike the resumes dataset, NLSY79 includes three states corresponding to individuals who are not currently employed: unemployed, out-of-labor-force, and in-school. We include special tokens for these states in our sequences. We drop examples with missing occupation states. We also drop sequences for which the individual is out of the labor force for their whole careers. We use the following covariates: years, educational experience, location, race/ethnicity, and gender. We drop individuals with less than 9 years of education experience. We convert years of educational experience into discrete categories: no high school degree, high school degree, some college, college, and graduate degree. We convert geographic location to one of four regions: northeast, northcentral, south, and west. We treat location as a static variable, using each individual’s first location. We use the following race/ethnicities: white, African American, Asian, Latino, Native American, and other. We treat year and education as dynamic covariates whose values can change over time, and we consider the other covariates as static. This preprocessing leaves us with a dataset consisting of 12,270 individuals and 239,545 total observations. NLSY97. The National Longitudinal Survey of Youth 1997 (NLSY97) is a survey following individuals who were between 12 and 17 when the survey began in 1997. Individuals were interviewed annually until 2011, and biennially thereafter. 
Our preprocessing of this dataset is similar to that of NLSY79. We convert occupations from OCC codes into “occ1990dd” codes. We use a version of the survey that follows individuals up to 2019. We include tokens for unemployed, out-of-labor-force, and in-school occupational states. We only consider individuals who are over 18 and drop military-related occupations. We use the same covariates as NLSY79. We use the following race/ethnicities: white, African-aAmerican, Latino, and other/unknown. We convert years of educational experience into discrete categories: no high school degree, high school degree, some college degree, college degree, graduate degree, and a special token when the education status isn’t known. We use the same regions as NLSY79. We drop sequences for which the individual is out of the labor force for their whole careers. This preprocessing leaves us with a dataset consisting of 8,770 individuals and 114,141 total observations. PSID. The Panel Study of Income Dynamics (PSID) is a longitudinal panel survey following a sample of American families. It was collected annually between 1968 and 1997, and biennially afterwards. The dataset tracks families over time, but it only includes occupation information for the household head and their spouse, so we only include these observations. Occupations are encoded with OCC codes, which we convert to “occ1990dd” using a crosswalk (Autor & Dorn, 2013). Like the NLSY surveys, PSID also includes three states corresponding to individuals who are not currently employed: unemployed, out-of-labor-force, and in-school. We include special tokens for these states in our sequences. We drop other examples with missing or invalid occupation codes. We also drop sequences for which the individual is out of the labor force for their whole careers. We consider five covariates: year, education, location, gender, and race. We include observations for individuals who were added to the dataset after 1995 and include observations up to 2019. We exclude observations for individuals with less than 9 years of education experience. We convert years of education to discrete states: no high school, high school diploma, some college, college, and graduate degree. We convert geographic location to one of four regions: northeast, northcentral, south, and west. We treat location as a static variable, using each individual’s first location. We use the following races: white, Black, and other. We treat year and education as dynamic covariates whose values can change over time, and we consider the other covariates as static. This preprocessing leaves us with a dataset consisting of 12,338 individuals and 62,665 total observations. I EXPERIMENTAL DETAILS Baselines. We consider a first-order Markov model and a second-order Markov model (both without covariates) as baselines. These models are estimated by averaging observed transition counts. We smooth the first-order Markov model by taking a weighted average between the empirical transitions in the training set and the empirical distribution of individual jobs. We perform this smoothing to account for the fact that some feasible transitions may never occur in the training set due to the high-dimensionality of feasible transitions. We assign 0.99 weight to the empirical distributions of transitions and 0.01 to the empirical distribution of individual jobs. We smooth the secondorder model by assigning 0.5 weight to the empirical second-order transitions and 0.5 weight to the smoothed first-order Markov model. 
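A brief sketch of the smoothed first-order Markov baseline described above, with the stated weights (0.99 on empirical transitions, 0.01 on the marginal distribution of jobs); the function name is our own, and the second-order smoothing is omitted.

```python
import numpy as np

def smoothed_markov(sequences, num_jobs, weight=0.99):
    """Estimate p(y_t = j | y_{t-1} = i) from observed transition counts."""
    counts = np.zeros((num_jobs, num_jobs))
    marginal = np.zeros(num_jobs)
    for seq in sequences:
        for job in seq:
            marginal[job] += 1
        for prev, nxt in zip(seq[:-1], seq[1:]):
            counts[prev, nxt] += 1
    marginal = marginal / max(marginal.sum(), 1)

    row_sums = counts.sum(axis=1)
    transitions = np.empty_like(counts)
    for i in range(num_jobs):
        # Rows with no observed transitions fall back to the marginal distribution.
        transitions[i] = counts[i] / row_sums[i] if row_sums[i] > 0 else marginal

    # Mixing with the marginal gives unseen but feasible transitions nonzero mass.
    return weight * transitions + (1 - weight) * marginal
```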
When we add covariates to the Markov linear baseline, we also include manually constructed features about history to improve its performance. In total, we include the following categorical variables: the most recent job, the prior job, the year, a dummy indicating whether there has been more than one year since the most recent observed job, the education status, a dummy indicating whether the education status has changed, and state (for the experiments on NLSY79 and PSID, we also include an individual’s gender and race/ethnicity). We also add additive effects for the following continuous variables: the number of years an individual has been in the current job and the total number of years for which an individual has been in the dataset. In addition, we include an intercept term. For the bag-of-jobs model, we vary the representation dimensionD between 256-2048, and find that the predictive performance is not sensitive to the representation dimension, so we use D = 1024 for all experiments. For the LSTM model, we use 3 layers with 436 embedding dimensions so that the model size is comparable to the transformer baseline: the LSTM has 5.8 million parameters, the same number as the transformer. We also compare to NEMO (Li et al., 2017), an LSTM-based method developed for modeling job sequences in resumes. We adapted NEMO to model survey data. In its original setting, NEMO took as input static covariates (such as individual skill) and used these to predict both an individual’s next job title and their company. Survey datasets differ from this original setting in a few ways: covariates are time-varying, important covariates for predicting jobs on resumes (like skill) are missing, and an individual’s company name is unavailable. Therefore, we made several modifications to NEMO. We incorporated the available covariates from survey datasets by embedding them and adding them to the job embeddings passed into the LSTM, similar to the method CAREER uses to incorporate covariates. We removed the company-prediction objective, and instead only used the model to predict an individual’s job in the next timestep. We considered two sizes of NEMO: an architecture using the same number of parameters as CAREER, and the smaller architecture proposed in the original paper. We found the smaller architecture performed better on the survey datasets, so we used this for the experiments. This model contains 2 decoder layers and a hidden dimension of 200. We compare to two additional baselines developed in the data mining literature: job representation learning (Dave et al., 2018) and Job2Vec (Zhang et al., 2020). These methods require resumespecific features such as skills and textual descriptions of jobs and employers, which are not available for the economic longitudinal survey datasets we model. Thus, we adapt these baselines to be suitable for modeling economic survey data. Job representation learning (Dave et al., 2018) is based on developing two graphs, one for job transitions and one for skill transitions. Since worker skills are not available for longitudinal survey data, we adapt the model to only use job transitions by only including the terms in the objective that depend on job transitions. We make a few additional modifications, which we found to improve the performance of this model on our data. Rather than sampling 3-tuples from the directed graph of job transitions, we include all 2-tuple job transitions present in the data, identical to the other models we consider. 
Additionally, rather than using the contrastive objective in Equation 4 of Dave et al. (2018), we optimize the log-likelihood directly — this is more computationally intensive but leads to better results. Finally, we include survey-specific covariates (e.g. education, demographics, etc.) by adding them to wx, embedding the covariate of each most recent job to the same space as wx. We make similar modifications to Job2Vec (Zhang et al., 2020). Job2Vec requires job titles and descriptions of job keywords, which are unavailable for economic longitudinal survey datasets. Instead, we modify Equation 1 in Zhang et al. (2020) to model occupation codes rather than titles or keywords and optimize this log-likelihood as our objective. We also incorporate survey-specific covariates by embedding each covariate to the same space as ei and adding it ei before computing Equation 2 from Zhang et al. (2020), which we also found to improve performance. We follow Dave et al. (2018) and use 50 embedding dimensions for each model, and optimize with Adam using a maximum learning rate of 0.005, following the minibatch and warmup strategy described below. When we compared the transferred version of CAREER to a version of CAREER without pretrained representations, we tried various architectures for the non-pretrained version of CAREER. We found that, without pretraining, the large architecture we used for CAREER was prone to overfitting on the smaller survey datasets. So we performed an ablation of the non-pretrained CAREER with various architectures: we considered 4 and 12 layers, 64 and 192 embedding dimensions, 256 and 768 hidden units for the feedforward neural networks, and 2 or 3 attention heads (using 2 heads for D = 64 and 3 heads for D = 192 so that D was divisible by the number of heads). We tried all 8 combinations of these parameters on NLSY79, and found that the model with the best validation performance had 4 layers, D = 64 embedding dimensions, 256 hidden units, and 2 attention heads. We used this architecture for the non-pretrained version of CAREER on all survey datasets. Training. We randomly divide the resumes dataset into a training set of 23.6 million sequences, and a validation and test set of 23 thousand sequences each. We randomly divide the survey datasets into 70/10/20 train/test/validation splits. The first- and second-order Markov models without covariates are estimated from empirical transitions counts. We optimize all other models with stochastic gradient descent with minibatches. In total, we use 16,000 total tokens per minibatch, varying the batch size depending on the largest sequence length in the batch. We use the Adam learning rate scheduler (Kingma & Ba, 2015). All experiments on the resumes data warm up the learning rate from 10−7 to 0.0005 over 4,000 steps, after which the inverse square root schedule is used (Vaswani et al., 2017). For the survey datasets, we also used the inverse square root scheduler, but experimented with various learning rates and warmup updates, using the one we found to work best for each model. For CAREER with pretrained representations, we used a learning rate of 0.0001 and 500 warmup updates; for CAREER without pretraining, we used a learning rate of 0.0005 and 500 warmup updates; for the bag of jobs model, we used a learning rate of 0.0005 and 5,000 warmup updates; for the regression model, we used a learning rate of 0.0005 and 4,000 warmup updates. 
We use a learning rate of 0.005 for job representation learning and Job2Vec, with 5,000 warmup updates. All models besides were also trained with 0.01 weight decay. All models were trained using Fairseq (Ott et al., 2019). When training on resumes, we trained for 85,000 steps, using the checkpoint with the best validation performance. When fine-tuning on the survey datasets, we trained all models until they overfit to the validation set, again using the checkpoint with the best validation performance. We used half precision for training all models, with the exception of the following models (which were only stable with full precision): the bag of jobs model with covariates on the resumes data, and the regression models for all survey dataset experiments. The tables in Section 4 report results averaged over multiple random seeds. For the results in Figure 2a, the randomness includes parameter initialization and minibatch ordering. For CAREER, we use the same pretrained model for all settings. For the forecasting results in Table 1, the randomness is with respect to the Monte-Carlo sampling used to sample multi-year trajectories for individuals. For the wage prediction experiment in Table 2, the randomness is with respect to train/test splits. Forecasting. For the forecasting experiments, occupations that took place after a certain year are dropped from the train and validation sets. When we forecast on the resumes dataset, we use the same train/test/validation split but drop examples that took place after 2014. When we pretrain CAREER on the resumes dataset to make forecasts for PSID and NLSY97, we use a cutoff year of 2014 as well. We incorporate two-stage prediction into the baseline models because we find that this improves their predictions. Although we do not include any examples after the cutoff during training, all models require estimating year-specific terms. We use the fitted values from the last observed year to estimate these terms. For example, CAREER requires embedding each year. When the cutoff year is 2014, there do not exist embeddings for years after 2014, so we substitute the 2014 embedding. We report forecasting results on a split of the dataset containing examples before and after the cutoff year. To make predictions for an individual, we condition on all observations before the cutoff year, and sample 1,000 trajectories through the last forecasting year. We never condition on any occupations after the cutoff year, although we include updated values of dynamic covariates like education. For forecasting on the resumes dataset, we set the cutoff for 2014 and forecast occupations for 2015, 2016, and 2017. We restrict our test set to individuals in the original test set whose first observed occupation was before 2015 and who were observed to have worked until 2017. PSID and NLSY97 are biennial, so we forecast for 2015, 2017, and 2019. We only make forecasts for individuals who have observations before the cutoff year and through the last year of forecasting, resulting in a total of 16,430 observations for PSID and 18,743 for NLSY97. Wage prediction. For the wage prediction experiment, we use replication data provided by Blau & Kahn (2017b). We add individual’s job histories to this dataset by matching interview and person numbers. We drop individuals that could not be matched, about 3% of the data. When we apply CAREER to this data to learn a representation of job history, we do not use any covariates besides the year a job took place. 
We pretrain a version of CAREER containing 4 layers, 64 dimensions for the representations, 256 hidden units in the feedforward neural networks, and 2 attention heads. We pretrain on resumes for 50,000 steps. We fine-tune to predict jobs on PSID using the job histories of individuals up to the year of interest; for example, for the 2011 experiment, we only fine-tune on jobs that took place before 2011. We update parameters every 6 batches when fine-tuning. After fine-tuning CAREER's representations to predict jobs, we plug the learned representations into the wage regression in Equation 9. Notably, we do not alter CAREER's representations to predict wage; we only estimate regression coefficients. We perform an unweighted linear regression. Our model without CAREER uses the same covariates as the wage regression in Blau & Kahn (2017a), including full- and part-time years of experience (and their squares), education, region, race/ethnicity, union status, current occupation, and current industry. We do not include whether an individual is a government worker because it results in instability for unweighted regression. Rather than estimate two separate models for males and females, we use a single model and include gender as an observed covariate. When we incorporate CAREER's representations into the model, we use the same base model and add CAREER's representations.

Rationalization. Figure 3 shows an example of CAREER's rationale on PSID. To simplify the example, this is the rationale for a model trained on no covariates except year. In order to conceal individual behavior patterns, the example in Figure 3 is a slightly altered version of a real sequence. For this example, the transformer used for CAREER follows the architecture described in Radford et al. (2018). We find the rationale using the greedy rationalization method described in Vafa et al. (2021). Greedy rationalization requires fine-tuning the model for compatibility; we do this by fine-tuning with "job dropout", where with 50% probability, we drop out a uniformly random number of observations in the history. When making predictions, the model has to implicitly marginalize over the missing observations. (We pretrain on the resumes dataset without any job dropout.) We find that training converges quickly when fine-tuning with job dropout, and the model's performance when conditioning on the full history is similar. Greedy rationalization typically adds observations to a history one at a time in the order that maximizes the model's likelihood of its top prediction. For occupations, the model's top prediction is almost always identical to the previous year's occupation, so we modify greedy rationalization to add the occupation that maximizes the likelihood of its second-largest prediction. This can be interpreted as equivalent to greedy rationalization, albeit conditioning on switching occupations. Thus, the greedy rationalization procedure stops when the model's second-largest prediction from the target rationale is equivalent to the model's second-largest prediction when conditioning on the full history.
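To make the modified procedure concrete, here is a schematic sketch of greedy rationalization conditioned on switching occupations. It is a simplification of the method in Vafa et al. (2021): predict_probs stands in for the fine-tuned model (the toy version at the bottom exists only so the example runs), and the real procedure also handles covariates and batching.

```python
import numpy as np

def greedy_rationale(history, predict_probs, prev_job):
    """Schematic greedy rationalization for occupation models.

    history: the observed jobs in an individual's past (covariates omitted here).
    predict_probs: maps any subset of the history to a distribution over next jobs;
        it stands in for the fine-tuned model, which must handle missing observations.
    prev_job: the most recent occupation; its repeat is excluded, as described above.
    """
    def second_best(probs):
        probs = np.array(probs, dtype=float)
        probs[prev_job] = -np.inf            # condition on switching occupations
        return int(np.argmax(probs))

    target = second_best(predict_probs(history))   # prediction from the full history
    rationale, remaining = [], list(range(len(history)))
    while remaining:
        # Add the single observation that most increases the target job's probability.
        gains = []
        for i in remaining:
            subset = [history[k] for k in sorted(rationale + [i])]
            gains.append(predict_probs(subset)[target])
        best = remaining[int(np.argmax(gains))]
        rationale.append(best)
        remaining.remove(best)
        current = [history[k] for k in sorted(rationale)]
        if second_best(predict_probs(current)) == target:
            break    # the rationale now yields the same prediction as the full history
    return sorted(rationale)

# Toy stand-in model: probability mass proportional to job frequency in the subset.
def toy_predict_probs(subset, n_jobs=4):
    counts = np.ones(n_jobs)
    for job in subset:
        counts[job] += 1
    return counts / counts.sum()

print(greedy_rationale([0, 2, 2, 1], toy_predict_probs, prev_job=2))
```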
1. What is the focus and contribution of the paper on transformer-based models for career sequence prediction?
2. What are the strengths of the proposed approach, particularly in terms of its capabilities and representation learning?
3. What are the weaknesses of the paper, especially regarding its claims and demonstrations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper proposes and develops a clear and detailed transformer-based model called CAREER that uses transfer learning to learn representations of job sequences. The CAREER system was pretrained on a dataset of 24 million resumes, and it is capable of outperforming standard econometric models for predicting and forecasting occupations.

Strengths And Weaknesses
Strengths: The paper is well written, and a very detailed description of both the development and operation of the model is provided. The appendix material is provided to address reproducibility concerns.
Weaknesses: The deployment of this model was not presented. The claimed incorporation of the model into the wage prediction models was not demonstrated.

Clarity, Quality, Novelty And Reproducibility
The quality of the work is OK. The mathematical models are clearly explained.
ICLR
Title CAREER: Transfer Learning for Economic Prediction of Labor Data Abstract Labor economists regularly analyze employment data by fitting predictive models to small, carefully constructed longitudinal survey datasets. Although modern machine learning methods offer promise for such problems, these survey datasets are too small to take advantage of them. In recent years large datasets of online resumes have also become available, providing data about the career trajectories of millions of individuals. However, standard econometric models cannot take advantage of their scale or incorporate them into the analysis of survey data. To this end we develop CAREER, a transformer-based model that uses transfer learning to learn representations of job sequences. CAREER is first fit to large, passivelycollected resume data and then fine-tuned to smaller, better-curated datasets for economic inferences. We fit CAREER to a dataset of 24 million job sequences from resumes, and fine-tune its representations on longitudinal survey datasets. We find that CAREER forms accurate predictions of job sequences, achieving state-of-the-art predictive performance on three widely-used economics datasets. We further find that CAREER can be used to form good predictions of other downstream variables; incorporating CAREER into a wage model provides better predictions than the econometric models currently in use. 1 INTRODUCTION A variety of economic analyses rely on models for predicting an individual’s future occupations. These models are crucial for estimating important economic quantities, such as gender or racial differences in unemployment (Hall, 1972; Fairlie & Sundstrom, 1999); they underpin causal analyses and decompositions that rely on simulating counterfactual occupations for individuals (Brown et al., 1980; Schubert et al., 2021); and they inform policy, by forecasting occupations with rising or declining market shares. These analyses typically involve fitting predictive models to longitudinal surveys that follow a cohort of individuals during their working career (Panel Study of Income Dynamics, 2021; Bureau of Labor Statistics, 2019a). Such surveys have been carefully collected to represent national demographics, ensuring that the economic analyses can generalize to larger populations. But these datasets are also small, usually containing only thousands of workers, because maintaining them requires regularly interviewing each individual. Consequently, economists use simple sequential models, where a worker’s next occupation depends on their history only through the most recent occupation (Hall, 1972) or a few summary statistics about the past (Blau & Riphahn, 1999). In recent years, however, much larger datasets of online resumes have also become available. In contrast to longitudinal surveys, these passively-collected datasets are not typically used directly for economic inferences because they contain noisy observations and they are missing important economic variables such as demographics and wage. However, they provide occupation sequences of millions of individuals, potentially expanding the scope of insights that can be obtained from analyses on downstream survey datasets. The simple econometric models currently in use cannot incorporate the complex patterns embedded in these larger datasets into the analysis of survey data. To this end, we develop CAREER, a neural sequence model of occupation trajectories. 
CAREER is designed to be pretrained on large-scale resume data and then fine-tuned to small and bettercurated survey data for economic prediction. Its architecture is based on the transformer language model (Vaswani et al., 2017), for which pretraining and fine-tuning has proven to be an effective paradigm for many NLP tasks (Devlin et al., 2019; Lewis et al., 2019). CAREER extends this transformer-based transfer learning approach to modeling sequences of occupations, rather than text. We will show that CAREER’s representations provide effective predictions of occupations on survey datasets used for economic analysis, and can be used as inputs to economic models for other downstream applications. To study this model empirically, we pretrain CAREER on a dataset of 24 million resumes provided by Zippia, a career planning company. We then fine-tune CAREER’s representations of job sequences to make predictions on three widely-used economic datasets: the National Longitudinal Survey of Youth 1979 (NLSY79), another cohort from the same survey (NLSY97), and the Panel Study of Income Dynamics (PSID). In contrast to resume data, these well-curated datasets are representative of the larger population. It is with these survey datasets that economists make inferences, ensuring their analyses generalize. In this study, we find that CAREER outperforms standard econometric models for predicting and forecasting occupations, achieving state-of-the-art performance on the three widely-used survey datasets. We further find that CAREER can be used to form good predictions of other downstream variables; incorporating CAREER into a wage model provides better predictions than the econometric models currently in use. We release code so that practitioners can train CAREER on their own datasets. In summary, we demonstrate that CAREER can leverage large-scale resume data to make accurate predictions on important datasets from economics. Thus CAREER ties together economic models for understanding career trajectories with transformer-based methods for transfer learning. (See Section 3 for details of related work.) A flexible predictive model like CAREER expands the scope of analyses that can be performed by economists and policy-makers. 2 CAREER Given an individual’s career history, what is the probability distribution of their occupation in the next timestep? We go over a class of models for predicting occupations before introducing CAREER, one such model based on transformers and transfer learning. 2.1 OCCUPATION MODELS Consider an individual worker. This person’s career can be defined as a series of timesteps. Here, we use a timestep of one year. At each timestep, this individual works in a job: it could be the same job as the previous timestep, or a different job. (Note we use the terms “occupation” and “job” synonymously.) We consider “unemployed” and “out-of-labor-force” to be special types of jobs. Define an occupation model to be a probability distribution over sequences of jobs. An occupation model predicts a worker’s job at each timestep as a function of all previous jobs and other observed characteristics of the worker. More formally, define an individual’s career to be a sequence (y1, . . . , yT ), where each yt ∈ {1, . . . , J} indexes one of J occupations at time t. Occupations are categorical; one example of a sequence could be (“cashier”, “salesperson”, ... , “sales manager”). At each timestep, an individual is also associated with C observed covariates xt = {xtc}Cc=1. 
Covariates are also categorical, with $x_{tc} \in \{1, \dots, N_c\}$. For example, if $c$ corresponds to the most recent educational degree, $x_{tc}$ could be "high school diploma" or "bachelors", and $N_c$ is the number of types of educational degrees. (Some covariates may not evolve over time; we encode them as time-varying without loss of generality.) Define $\mathbf{y}_t = (y_1, \dots, y_t)$ to index all jobs that have occurred up to time $t$, with the analogous definition for $\mathbf{x}_t$. At each timestep, an occupation model predicts an individual's job in the next timestep, $p(y_t \mid \mathbf{y}_{t-1}, \mathbf{x}_t)$. This distribution conditions on covariates from the same timestep because these are "pre-transition." For example, an individual's most recent educational degree is available to the model as it predicts their next job. Note that an occupation model is a predictive rather than structural model. The model does not incorporate unobserved characteristics, like skill, when making predictions. Instead, it implicitly marginalizes over these unobserved variables, incorporating them into its predictive distribution.

2.2 REPRESENTATION-BASED TWO-STAGE MODELS

An occupation model's predictions are governed by an individual's career history; both whether an individual changes jobs and the specific job they may transition to depend on current and previous jobs and covariates. We consider a class of occupation models that make predictions by conditioning on a low-dimensional representation of work history, $h_t(\mathbf{y}_{t-1}, \mathbf{x}_t) \in \mathbb{R}^D$. This representation is assumed to be a sufficient statistic of the past; $h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$ should contain the relevant observed information for predicting the next job. Since individuals frequently stay in the same job between timesteps, we propose a class of models that make predictions in two stages. These models first predict whether an individual changes jobs, after which they predict the specific job to which an individual transitions. The representation is used in both stages.

In the first stage, the career representation $h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$ is used to predict whether an individual changes jobs. Define the binary variable $s_t$ to be 1 if a worker's job at time $t$ is different from that at time $t-1$, and 0 otherwise. The first stage is a logistic regression,

$$s_t \mid \mathbf{y}_{t-1}, \mathbf{x}_t \sim \text{Bernoulli}\big(\sigma(\eta \cdot h_t(\mathbf{y}_{t-1}, \mathbf{x}_t))\big), \qquad (1)$$

where $\sigma(\cdot)$ is the logistic function and $\eta \in \mathbb{R}^D$ is a vector of coefficients. If the model predicts that an individual will transition jobs, it only considers jobs that are different from the individual's most recent job. To formulate this prediction, it combines the career representation with a vector of occupation-specific coefficients $\beta_j \in \mathbb{R}^D$:

$$p(y_t = j \mid \mathbf{y}_{t-1}, \mathbf{x}_t, s_t = 1) = \frac{\exp\{\beta_j \cdot h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)\}}{\sum_{j' \neq y_{t-1}} \exp\{\beta_{j'} \cdot h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)\}}. \qquad (2)$$

Otherwise, the next job is deterministic:

$$p(y_t = j \mid \mathbf{y}_{t-1}, \mathbf{x}_t, s_t = 0) = \delta_{j = y_{t-1}}. \qquad (3)$$

Two-stage prediction improves the accuracy of occupation models. Moreover, many analyses of occupational mobility focus on whether workers transition jobs rather than the specific job they transition to (Kambourov & Manovskii, 2008). By separating the mechanism by which a worker either keeps or changes jobs ($\eta$) and the specific job they may transition to ($\beta_j$), two-stage models are more interpretable for studying occupational change. Equations 1 to 3 define a two-stage representation-based occupation model; a minimal code sketch of this two-stage computation is given below. In the next section, we introduce CAREER, one such model based on transformers.
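As a rough illustration (not the training code), the following sketch shows how Equations 1 to 3 combine into a single predictive distribution over occupations; the parameters here are random placeholders standing in for learned values of $\eta$ and $\beta$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def two_stage_probs(h, eta, beta, prev_job):
    """Combine Equations 1-3 into one predictive distribution over jobs.

    h: (D,) career representation; eta: (D,); beta: (J, D); prev_job: int.
    """
    p_move = sigmoid(eta @ h)                  # Eq. 1: probability of changing jobs
    logits = beta @ h                          # Eq. 2 numerator (log scale)
    logits[prev_job] = -np.inf                 # transitions exclude the current job
    probs_if_move = np.exp(logits - logits.max())
    probs_if_move /= probs_if_move.sum()
    probs = p_move * probs_if_move             # marginalize over s_t
    probs[prev_job] = 1.0 - p_move             # Eq. 3: staying in the same job
    return probs

# Toy example with random parameters.
rng = np.random.default_rng(0)
D, J = 8, 5
p = two_stage_probs(rng.normal(size=D), rng.normal(size=D),
                    rng.normal(size=(J, D)), prev_job=2)
print(p, p.sum())   # a valid distribution over the 5 occupations
```

When the model is trained, the likelihood marginalizes over $s_t$ in the same way.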
2.3 CAREER MODEL

We develop a two-stage representation-based occupation model called CAREER (short for "Contextual Attention-based Representations of Employment Encoded from Resumes"). This model uses a transformer to parameterize a representation of an individual's history. This representation is pretrained on a large resumes dataset and fine-tuned to make predictions on small survey datasets.

Transformers. A transformer is a sequence model that uses neural networks to learn representations of discrete tokens (Vaswani et al., 2017). Transformers were originally developed for natural language processing (NLP), to predict words in a sentence. Transformers are able to model complex dependencies between words, and they are a critical component of modern NLP systems including language modeling (Radford et al., 2019) and machine translation (Ott et al., 2018). CAREER is an occupation model that uses a transformer to parameterize a low-dimensional representation of careers. While transformers were developed to model sequences of words, CAREER uses a transformer to model sequences of jobs. The transformer enables the model to represent complex career trajectories. CAREER is similar to the transformers used in NLP, but with two modifications. First, as described in Section 2.2, the model makes predictions in two stages, making it better-suited to model workers who stay in the same job through consecutive timesteps. (In contrast, words seldom repeat.) Second, while language models only condition on previous words, each career is also associated with covariates $\mathbf{x}$ that may affect transition distributions (see Equation 2). We adapt the transformer to these two changes.

Parameterization. CAREER's computation graph is depicted in Figure 1. Note that in this section we provide a simplified description of the ideas underlying the transformer. Appendix E contains a full description of the model. CAREER iteratively builds a representation of career history, $h_t(\mathbf{y}_{t-1}, \mathbf{x}_t) \in \mathbb{R}^D$, using a stack of $L$ layers. Each layer applies a series of computations to the previous layer's output to produce its own layer-specific representation. The first layer's representation, $h^{(1)}_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$, considers only the most recent job and covariates. At each subsequent layer $\ell$, the transformer forms a representation $h^{(\ell)}_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$ by combining the representation of the most recent job with those of preceding jobs. Representations become increasingly complex at each layer, and the final layer's representation, $h^{(L)}_t(\mathbf{y}_{t-1}, \mathbf{x}_t)$, is used to make predictions following Equations 1 to 3. We drop the explicit dependence on $\mathbf{y}_{t-1}$ and $\mathbf{x}_t$ going forward, and instead denote each layer's representation as $h^{(\ell)}_t$.

The first layer's representation combines the previous job, the most recent covariates, and the position of the job in the career. It first embeds each of these variables in $D$-dimensional space. Define an embedding function for occupations, $e_y : [J] \to \mathbb{R}^D$. Additionally, define a separate embedding function for each covariate, $\{e_c\}_{c=1}^{C}$, with each $e_c : [N_c] \to \mathbb{R}^D$. Finally, define $e_t : [T] \to \mathbb{R}^D$ to embed the position of the sequence, where $T$ denotes the number of possible sequence lengths. The first-layer representation $h^{(1)}_t$ sums these embeddings:

$$h^{(1)}_t = e_y(y_{t-1}) + \sum_c e_c(x_{tc}) + e_t(t). \qquad (4)$$
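Equation 4 is just a sum of embedding-table lookups. A minimal sketch, with randomly initialized tables, a single covariate, and sizes chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
D, J, N_edu, T = 8, 330, 6, 40     # illustrative sizes, not the paper's settings

# Embedding tables e_y, e_c, e_t; in CAREER the first two are learned parameters
# and the positional embeddings follow a sinusoidal pattern (Appendix E).
E_y = rng.normal(size=(J, D))
E_edu = rng.normal(size=(N_edu, D))
E_pos = rng.normal(size=(T, D))

def first_layer_representation(prev_job, education, position):
    """Equation 4 with a single covariate (education), purely for illustration."""
    return E_y[prev_job] + E_edu[education] + E_pos[position]

h1 = first_layer_representation(prev_job=17, education=3, position=5)
print(h1.shape)   # (D,)
```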
For each subsequent layer $\ell$, the transformer combines representations of the most recent job with those of the preceding jobs and passes them through a neural network:

$$\pi^{(\ell)}_{t,t'} \propto \exp\Big\{ \big(h^{(\ell)}_t\big)^\top W^{(\ell)} h^{(\ell)}_{t'} \Big\} \quad \text{for all } t' \le t, \qquad (5)$$

$$\tilde{h}^{(\ell)}_t = h^{(\ell)}_t + \sum_{t'=1}^{t} \pi^{(\ell)}_{t,t'} \, h^{(\ell)}_{t'}, \qquad (6)$$

$$h^{(\ell+1)}_t = \text{FFN}^{(\ell)}\big(\tilde{h}^{(\ell)}_t\big), \qquad (7)$$

where $W^{(\ell)} \in \mathbb{R}^{D \times D}$ is a model parameter and $\text{FFN}^{(\ell)}$ is a two-layer feedforward neural network specific to layer $\ell$, with $\text{FFN}^{(\ell)} : \mathbb{R}^D \to \mathbb{R}^D$. The weights $\{\pi^{(\ell)}_{t,t'}\}$ are referred to as attention weights, and they are determined by the career representations and $W^{(\ell)}$. The attention weights are non-negative and normalized to sum to 1. The matrix $W^{(\ell)}$ can be interpreted as a similarity matrix; if $W^{(\ell)}$ is the identity matrix, occupations $t$ and $t'$ that have similar representations will have large attention weights, and thus $t'$ would contribute more to the weighted average in Equation 6. Conversely, if $W^{(\ell)}$ is the negative identity matrix, occupations that have differing representations will have large attention weights. (In practice, transformers use multiple attention weights to perform multi-headed attention; see Appendix E.) The final computation of each layer involves passing the intermediate representation $\tilde{h}^{(\ell)}_t$ through a neural network, which ensures that representations capture complex nonlinear interactions. The computations in Equations 5 to 7 are repeated for each of the $L$ layers; a minimal sketch of this per-layer computation appears at the end of this subsection. The last layer's representation is used to predict the next job:

$$p(y_t \mid \mathbf{y}_{t-1}, \mathbf{x}_t) = \text{two-stage-softmax}\big(h^{(L)}_t; \eta, \beta\big), \qquad (8)$$

where "two-stage-softmax" refers to the operation in Equations 1 to 3, parameterized by $\eta$ and $\beta$. All of CAREER's parameters, including the embedding functions, similarity matrices, feedforward neural networks, and regression coefficients $\eta$ and $\beta$, are estimated by maximizing the likelihood in Equation 8 with stochastic gradient descent (SGD), marginalizing out the variable $s_t$.

Transfer learning. Economists apply occupation models to survey datasets that have been carefully collected to represent national demographics. In the United States, these datasets contain a small number of individuals. While transformers have been successfully applied to large NLP datasets, they are prone to overfitting on small datasets (Kaplan et al., 2020; Dosovitskiy et al., 2021; Variš & Bojar, 2021). As such, CAREER may not learn useful representations solely from small survey datasets. In recent years, however, much larger datasets of online resumes have also become available. Although these passively-collected datasets provide job sequences of many more individuals, they are not used for economic estimation for a few reasons. The occupation sequences from resumes are imputed from short textual descriptions, a process that inevitably introduces more noise and errors than collecting data from detailed questionnaires. Additionally, individuals may not accurately list their work experiences on resumes (Wexler, 2006), and important economic variables relating to demographics and wage are not available. Finally, these datasets are not constructed to ensure that they are representative of the general population. Between these two types of data is a tension. On the one hand, resume data is large-scale and contains valuable information about employment patterns. On the other hand, survey datasets are carefully collected, designed to help make economic inferences that are robust and generalizable. Thus CAREER incorporates the patterns embedded in large-scale resume data into the analysis of survey datasets.
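Returning to Equations 5 to 7, the following is a minimal single-head sketch of one layer's computation. It assumes randomly generated inputs and uses a plain ReLU feedforward network; CAREER itself uses multi-headed attention, GELU activations, dropout, and layer normalization (Appendix E), so this is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 8, 16                           # sequence length and representation size (illustrative)
h = rng.normal(size=(T, D))            # stand-in for the first-layer representations h_t

W = rng.normal(size=(D, D)) / np.sqrt(D)         # similarity matrix W of Eq. 5
W1 = rng.normal(size=(D, 4 * D)) / np.sqrt(D)    # FFN hidden-layer weights
W2 = rng.normal(size=(4 * D, D)) / np.sqrt(4 * D)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

h_next = np.zeros_like(h)
for t in range(T):
    # Equation 5: attention weights over positions t' <= t, normalized to sum to one.
    scores = np.array([h[t] @ W @ h[tp] for tp in range(t + 1)])
    pi = softmax(scores)
    # Equation 6: residual connection plus attention-weighted sum of past representations.
    h_tilde = h[t] + pi @ h[: t + 1]
    # Equation 7: a two-layer feedforward network (ReLU here, for simplicity).
    h_next[t] = np.maximum(h_tilde @ W1, 0.0) @ W2

print(h_next.shape)   # (T, D): the next layer's representations
```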
CAREER does this through transfer learning: it is first pretrained on a large dataset of resumes to learn an initial representation of careers. When CAREER is then fit to a small survey dataset, parameters are not initialized randomly; instead, they are initialized with the representations learned from resumes. After initialization, all parameters are fine-tuned on the small dataset by optimizing the likelihood. Because the objective function is non-convex, learned representations depend on their initial values. Initializing with the pretrained representations ensures that the model does not need to re-learn representations on the small dataset. Instead, it only adjusts representations to account for dataset differences. This transfer learning approach takes inspiration from similar methods in NLP, such as BERT and the GPT family of models (Devlin et al., 2019; Radford et al., 2018). These methods pretrain transformers on large corpora, such as unpublished books or Wikipedia, and fine-tune them to make predictions on small datasets such as movie reviews. Our approach is analogous. Although the resumes dataset may not be representative or carefully curated, it contains many more job sequences than most survey datasets. This volume enables CAREER to learn representations that transfer to survey datasets.

3 RELATED WORK

Many economic analyses use log-linear models to predict jobs in survey datasets (Boskin, 1974; Schmidt & Strauss, 1975). These models typically use small state spaces consisting of only a few occupation categories. For example, some studies categorize occupations into broad skill groups (Keane & Wolpin, 1997; Cortes, 2016); unemployment analyses only consider employment status (employed, unemployed, and out-of-labor-force) (Hall, 1972; Lauerova & Terrell, 2007); and researchers studying occupational mobility only consider occupational change, a binary variable indicating whether an individual changes jobs (Kambourov & Manovskii, 2008; Guvenen et al., 2020). Although transitions between occupations may depend richly on history, many of these models condition on only the most recent job and a few manually constructed summary statistics about history to make predictions (Hall, 1972; Blau & Riphahn, 1999). In contrast to these methods, CAREER is nonlinear and conditions on every job in an individual's history. The model learns complex representations of careers without relying on manually constructed features. Moreover, CAREER can effectively predict from among hundreds of occupations. Recently, the proliferation of business networking platforms has resulted in the availability of large resume datasets. Schubert et al. (2021) use a large resume dataset to construct a first-order Markov model of job transitions; CAREER, which conditions on all jobs in a history, makes more accurate predictions than a Markov model. Models developed in the data mining community rely on resume-specific features such as stock prices (Xu et al., 2018), worker skill (Ghosh et al., 2020), network information (Meng et al., 2019; Zhang et al., 2021), and textual descriptions (He et al., 2021), and so are not applicable to the survey datasets we target in this paper (without these features, other models reduce to a first-order Markov model (Dave et al., 2018; Zhang et al., 2020)).
The most suitable model for survey datasets from this line of work is NEMO, an LSTM-based model that is trained on large resume datasets (Li et al., 2017). Our experiments demonstrate that CAREER outperforms NEMO when it is adapted to model survey datasets. Recent works in econometrics have applied machine learning methods to sequences of jobs and other discrete data. Ruiz et al. (2020) develop a matrix factorization method called SHOPPER to model supermarket basket data. We consider a baseline “bag-of-jobs” model similar to SHOPPER. Like the transformer-based model, the bag-of-jobs model conditions on every job in an individual’s history, but it uses relatively simple representations of careers. Our empirical studies demonstrate that CAREER learns complex representations that are better at modeling job sequences. Rajkumar et al. (2021) build on SHOPPER and propose a Bayesian factorization method for predicting job transitions. Similar to CAREER, they predict jobs in two stages. However, their method is focused on modeling individual transitions, so it only conditions on the most recent job in an individual’s history. In our empirical studies, we show that models like CAREER that condition on every job in an individual’s history form more accurate predictions than Markov models. CAREER is based on a transformer, a successful model for representing sequences of words in natural language processing (NLP). In econometrics, transformers have been applied to the text of job descriptions to predict their salaries (Bana, 2021) or authenticity (Naudé et al., 2022); rather than modeling text, we use transformers to model sequences of occupations. Transformers have also been applied successfully to sequences other than text: images (Dosovitskiy et al., 2021), music (Huang et al., 2019), and molecular chemistry (Schwaller et al., 2019). Inspired by their success in modeling a variety of complex discrete sequential distributions, this paper adapts transformers to modeling sequences of jobs. Transformers are especially adept at learning transferrable representations of text from large corpora (Radford et al., 2018; Devlin et al., 2019). We show that CAREER learns representations of job sequences that can be transferred from noisy resume datasets to smaller, wellcurated administrative datasets. 4 EMPIRICAL STUDIES We assess CAREER’s ability to predict jobs and provide useful representations of careers. We pretrain CAREER on a large dataset of resumes, and transfer these representations to small, commonly used survey datasets. With the transferred representations, the model is better than econometric baselines at both held-out prediction and forecasting. Additionally, we demonstrate that CAREER’s representations can be incorporated into standard wage prediction models to make better predictions. Resume pretraining. We pretrain CAREER on a large dataset of resumes provided by Zippia Inc., a career planning company. This dataset contains resumes from 23.7 million working Americans. Each job is encoded into one of 330 occupational codes, using the coding scheme of Autor & Dorn (2013). We transform resumes into sequences of jobs by including an occupation’s code for each year in the resume. For years with multiple jobs, we take the job the individual spent the most time in. We include three covariates: the year each job in an individual’s career took place, along with the individual’s state of residence and most recent educational degree. We denote missing covariates with a special token. 
See Appendix F for an exploratory data analysis of the resume data. CAREER uses a 12-layer transformer with 5.6 million parameters. Pretraining CAREER on the resumes data takes 18 hours on a single GPU. Although our focus is on fine-tuning CAREER to model survey datasets rather than resumes, CAREER also outperforms standard econometric baselines for modeling resumes; see Appendix B for more details. Survey datasets. We transfer CAREER to three widely-used survey datasets: two cohorts from the National Longitudinal Survey of Youth (NLSY79 and NLSY97) and the Panel Study of Income Dynamics (PSID). These datasets have been carefully constructed to be representative of the general population, and they are widely used by economists for estimating important quantities. NLSY79 is a longitudinal panel survey following a cohort of Americans who were between 14 and 22 when the survey began in 1979, while NLSY97 follows a different cohort of individuals who were between 12 and 17 when the survey began in 1997. PSID is a longitudinal survey following a sample of American families, with individuals added over the years. Compared to the resumes dataset, these survey datasets are small: we use slices of NLSY79, NLSY97, and PSID that contain 12 thousand, 9 thousand, and 12 thousand individuals, respectively. The distribution of job sequences in resumes differs in meaningful ways from those in the survey datasets; for example, manual laborers are under-represented and college graduates are overrepresented in resume data (see Appendix F for more details). We pretrain CAREER on the large resumes dataset and fine-tune on the smaller survey datasets. The fine-tuning process is efficient; although CAREER has 5.6 million parameters, fine-tuning on one GPU takes 13 minutes on NLSY79, 7 minutes on NLSY97, and 23 minutes on PSID. We compare CAREER to several baseline models: a second-order linear regression with covariates and hand-constructed summary statistics about past employment (a common econometric model used to analyze these survey datasets – see Section 3); a bag-of-jobs model inspired by SHOPPER (Ruiz et al., 2020) that conditions on all jobs and covariates in a history but combines representations linearly; and several baselines developed in the data-mining community for modeling worker profiles: NEMO (Li et al., 2017), job representation learning (Dave et al., 2018), and Job2Vec (Zhang et al., 2020). As described in Section 3, the baselines developed in the data-mining community for modeling worker profiles cannot be applied directly to economic survey datasets and thus require modifications, described in detail in Appendix I. We also compare to two additional versions of CAREER — one without pretraining or two-stage prediction, the other only without two-stage prediction — to assess the sources of CAREER’s improvements. All models use the covariates we included for resume pretraining, in addition to demographic covariates (which are recorded for the survey datasets but are unavailable for resumes). We divide all survey datasets into 70/10/20 train/validation/test splits, and train all models by optimizing the log-likelihood with Adam (Kingma & Ba, 2015). We evaluate the predictive performance of each model by computing held-out perplexity, a common metric in NLP for evaluating probabilistic sequence models. The perplexity of a sequence model p on a sequence y1, . . . 
, $y_T$ is

$$\text{perplexity} = \exp\Big\{ -\frac{1}{T} \sum_{t=1}^{T} \log p(y_t \mid \mathbf{y}_{t-1}, \mathbf{x}_t) \Big\}.$$

[Figure 2: Prediction results on longitudinal survey datasets and scaling law. (a) Test perplexity on survey datasets; results are averaged over three random seeds. CAREER (vanilla) includes covariates but not two-stage prediction or pretraining; CAREER (two-stage) adds two-stage prediction. (b) CAREER's scaling law on NLSY79 as a function of pretraining data volume.]
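As a small sanity check of the metric, a sketch that computes perplexity from per-timestep log-likelihoods (the function and inputs are illustrative, not part of the evaluation code):

```python
import numpy as np

def perplexity(log_probs):
    """Perplexity of a model on one sequence, given log p(y_t | history) per timestep."""
    log_probs = np.asarray(log_probs, dtype=float)
    return float(np.exp(-log_probs.mean()))

# A model that assigns probability 0.5 to every observed job has perplexity ~2.
print(perplexity(np.log([0.5, 0.5, 0.5, 0.5])))
```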
Perplexity is a monotonic transformation of log-likelihood; better predictive models have lower perplexities. We train all models to convergence and use the checkpoint with the best validation perplexity. See Appendix I for more experimental details. Figure 2a compares the test-set perplexity of each model. With the transferred representations, CAREER makes the best predictions on all survey datasets, achieving state-of-the-art performance. The baselines developed in the data mining literature, which were designed to model large resume datasets while relying on resume-specific features, struggle to make good predictions on these small survey datasets, performing on par with standard econometric baselines. Pretraining is the biggest source of CAREER's improvements. Although the resume data is noisy and differs in many ways from the survey datasets used for economic prediction, CAREER learns useful representations of work experiences that aid its predictive performance. In Appendix G we show that modifying the baselines to incorporate two-stage prediction (Equations 1 to 3) improves their performance, although CAREER still makes the best predictions across datasets. We include qualitative analysis of CAREER's predictions in Appendix D. To assess how the volume of resumes used for pretraining affects CAREER's predictions on survey datasets, we downsample the resume dataset and transfer to survey datasets. The scaling law for NLSY79 is depicted in Figure 2b. When there are less than 20,000 examples in the resume dataset, pretraining CAREER does not offer any improvement. The relationship between pretraining volume and fine-tuned perplexity follows a power law, similar to scaling laws in NLP (Kaplan et al., 2020). We also assess CAREER's ability to forecast future career trajectories. In contrast to predicting held-out sequences, forecasting involves training models on all sequences before a specific year. To predict future jobs for an individual, the fitted model is used to estimate job probabilities six years into the future by sampling multi-year trajectories. This setting is useful for assessing a model's ability to make long-term predictions, especially as occupational trends change over time. We evaluate CAREER's forecasting abilities on NLSY97 and PSID. (These datasets are more valuable for forecasting than NLSY79, which follows a cohort that is near or past retirement age.) We train models on all sequences (holding out 10% as a validation set), without including any observations after 2014. When pretraining CAREER on resumes, we also make sure to only include examples up to 2014. Table 1 compares the forecasting performance of all models. CAREER makes the best overall forecasts. CAREER has a significant advantage over baselines at making long-term forecasts, yielding a 17% advantage over the best baseline for 6-year forecasts on NLSY97. Again, the baselines developed for resume data mining, which had been developed to model much larger corpora, struggle to make good predictions on these smaller survey datasets.

Downstream applications. In addition to forming job predictions, CAREER learns low-dimensional representations of job histories. Although these representations were formed to predict jobs in a sequence, they can also be used as inputs to economic models for downstream applications. As an example of how CAREER's representations can be incorporated into other economic models, we use CAREER to predict wages.
Economists build wage prediction models in order to estimate important economic quantities, such as the adjusted gender wage gap. For example, to estimate this wage gap, Blau & Kahn (2017a) regress an individual's log-wage on observable characteristics such as education, demographics, and current occupation for six different years on PSID. Rather than including the full, high-dimensional job history, the model summarizes an individual's career with summary statistics such as full-time and part-time years of experience (and their squares). We incorporate CAREER's representation into the wage regression by adding the fitted representation for an individual's job history, $\hat{h}_i$. For log-wage $w_i$ and observed covariates $x_i$, we regress

$$w_i \sim \alpha + \theta^\top x_i + \gamma^\top \hat{h}_i, \qquad (9)$$

where $\alpha$, $\theta$, and $\gamma$ are regression coefficients. We pretrain CAREER to predict jobs on resumes, and for each year we fine-tune on job sequences of the cohort up to that year. For example, in the 1999 wage regression, we fine-tune CAREER only on the sequences of jobs until 1999 and plug the fixed representation into the wage regression. We do not include any covariates (except year) when training CAREER. We run each wage regression on 80% of the training data and evaluate mean-square error on the remaining 20% (averaging over 10 random splits). Table 2 shows that adding CAREER's representations improves wage predictions for each year. Although these representations are fine-tuned to predict jobs on a small dataset (each year contains less than 5,000 workers) and are not adjusted to account for wage, they contain information that is predictive of wage. By summarizing complex career histories with a low-dimensional representation, CAREER provides representations that can improve downstream economic models, resulting in more accurate estimates of important economic quantities.

5 CONCLUSION

We introduced CAREER, a method for representing job sequences from large-scale resume data and fine-tuning them on smaller datasets of interest. We took inspiration from modern language modeling to develop a transformer-based occupation model. We transferred the model from a large dataset of resumes to smaller survey datasets in economics, where it achieved state-of-the-art performance for predicting and forecasting career outcomes. We demonstrated that CAREER's representations can be incorporated into wage prediction models, outperforming standard econometric models. One direction of future research is to incorporate CAREER's representations of job history into methods for estimating adjusted quantities, like wage gaps. Underlying these methods are models that predict economic outcomes as a function of observed covariates. However, if relevant variables are omitted, the adjusted estimates may be affected; e.g., excluding work experience from wage prediction may change the magnitude of the estimated gap. In practice, economists include hand-designed summary statistics to overcome this problem, such as in Blau & Kahn (2017a). CAREER provides a data-driven way to incorporate such variables; its representations of job history could be incorporated into downstream prediction models and lead to more accurate adjustments of economic quantities.

Ethics statement. As discussed, passively-collected resume datasets are not curated to represent national demographics. Pretraining CAREER on these datasets may result in representations that are affected by sampling bias.
Although these representations are fine-tuned on survey datasets that are carefully constructed to represent national demographics, the biases from pretraining may propagate through fine-tuning (Ravfogel et al., 2020; Jin et al., 2021). Moreover, even in representative datasets, models may form more accurate predictions for majority groups due to data volume (Dwork et al., 2018). Thus, we encourage practitioners to audit noisy resume data, re-weight samples as necessary (Kalton, 1983), and review accuracy within demographics before using the model for downstream economic analysis. Although resume datasets may contain personally identifiable information, all personally identifiable information had been removed before we were given access to the resume dataset we used for pretraining. Additionally, none of the longitudinal survey datasets contain personally identifiable information.

Reproducibility statement. The supplementary material contains code for reproducing the experimental results in this paper, with the README containing detailed instructions for reproducing specific experiments. Our data-use agreement prohibits us from releasing the dataset of resumes used for pretraining. However, similar (private) resume datasets have become increasingly common in applied economics analyses (Azar et al., 2020; Schubert et al., 2021), and we include pretraining code so practitioners can reproduce our results with resume datasets they have access to. Additionally, all longitudinal survey datasets are available publicly online (Bureau of Labor Statistics, 2019a;b; Panel Study of Income Dynamics, 2021).

A ECONOMETRIC BASELINES

In this section, we describe baseline occupation models that economists have used to model jobs and other discrete sequences.

Markov models and regression. A first-order Markov model assumes the job at each timestep depends on only the previous job (Hall, 1972; Poterba & Summers, 1986). Without covariates, a Markov model takes the form $p(y_t = j \mid \mathbf{y}_{t-1}) = p(y_t = j \mid y_{t-1})$. The optimal transition probabilities reflect the overall frequencies of individuals transitioning from occupation $y_{t-1}$ to occupation $j$. In a second-order Markov model, the next job depends on the previous two. A multinomial logistic regression can be used to incorporate covariates:

$$p(y_t = j \mid \mathbf{y}_{t-1}, \mathbf{x}_t) \propto \exp\Big\{ \beta^{(0)}_j + \beta^{(1)}_j \cdot y_{t-1} + \sum_c \beta^{(c)}_j \cdot x_{tc} \Big\}, \qquad (10)$$

where $\beta^{(0)}_j$ is an occupation-specific intercept and $y_{t-1}$ and $x_{tc}$ denote $J$- and $N_c$-dimensional indicator vectors, respectively. Equation 10 depends on history only through the most recent job, although the covariates can also include hand-crafted summary statistics about the past, such as the duration of the most recent job (McCall, 1990). This model is fit by maximizing the likelihood with gradient-based methods; a small sketch of this regression appears below.

Bag-of-jobs. A weakness of the first-order Markov model is that it only uses the most recent job to make predictions. However, one's working history beyond the last job may inform future transitions (Blau & Riphahn, 1999; Neal, 1999). Another baseline we consider is a bag-of-jobs model, inspired by SHOPPER, a probabilistic model of consumer choice (Ruiz et al., 2020). Unlike the Markov and regression models, the bag-of-jobs model conditions on every job in an individual's history. It does so by learning a low-dimensional representation of an individual's history.
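Returning to Equation 10, a minimal sketch of the regression baseline with a single covariate; the coefficients here are random placeholders rather than fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
J, N_edu = 5, 4                      # small illustrative sizes

# Coefficients of Equation 10: intercepts, previous-job effects, and one covariate effect.
beta0 = rng.normal(size=J)           # beta_j^(0)
beta_prev = rng.normal(size=(J, J))  # beta_j^(1), indexed by the previous job
beta_edu = rng.normal(size=(J, N_edu))

def transition_probs(prev_job, education):
    """Equation 10 with a single covariate (education)."""
    logits = beta0 + beta_prev[:, prev_job] + beta_edu[:, education]
    logits -= logits.max()
    probs = np.exp(logits)
    return probs / probs.sum()

print(transition_probs(prev_job=2, education=1))
```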
The bag-of-jobs model learns a unique embedding for each occupation, similar to a word embedding (Bengio et al., 2003; Mikolov et al., 2013); unlike CAREER, which learns complicated nonlinear interactions between jobs in a history, the bag-of-jobs model combines jobs into a single representation by averaging their embeddings. The bag-of-jobs model assumes that job transitions depend on two terms: a term that captures the effect of the most recent job, and a term that captures the effect of all prior jobs. Accordingly, the model learns two types of representations: an embedding $\alpha_j \in \mathbb{R}^D$ of the most recent job $j$, and an embedding $\rho_{j'} \in \mathbb{R}^D$ for prior jobs $j'$. To combine the representations for all prior jobs into a single term, the model averages embeddings:

$$p(y_t = j \mid \mathbf{y}_{t-1}) \propto \exp\Big\{ \beta^{(1)}_j \cdot \alpha_{y_{t-1}} + \beta^{(2)}_j \cdot \Big( \tfrac{1}{t-2} \sum_{t'=1}^{t-2} \rho_{y_{t'}} \Big) \Big\}. \qquad (11)$$

Covariates can be added to the model analogously; for a single covariate, its most recent value is embedded and summed with the average embeddings for its prior values. All parameters are estimated by maximizing the likelihood in Equation 11 with SGD.

B RESUME PREDICTIONS

Although our focus is on modeling survey datasets, we also compare CAREER to several econometric baselines for predicting job sequences in resumes. We consider a series of models without covariates: a first- and second-order Markov model, a bag-of-jobs model (Equation 11), and a transformer with the same architecture as CAREER except without covariates. We also compare to econometric models that use covariates: a second-order linear regression with covariates and hand-constructed features (such as how long an individual has worked in their current job), and a bag-of-jobs model with covariates (Appendix I has more details). We randomly divide the resumes dataset into a training set of 23.6 million sequences, and a validation and test set of 23 thousand sequences each. Table 3 compares the test-set predictive performance of all models. CAREER is the best at predicting held-out sequences. To understand the types of transitions contributing to CAREER's predictive advantage, we decompose predictions into three categories: consecutive repeats (when the next job is the same as the previous year's), nonconsecutive repeats (when the next job is different from the previous year's, but is the same as one of the prior jobs in the career), and new jobs. CAREER has a clear advantage over the baselines in all three categories, but the biggest improvement comes when predicting jobs that have been repeated non-consecutively. The transformer model is at an advantage over the Markov models for these kinds of predictions because it is able to condition on an individual's entire working history, while a Markov model is constrained to use only the most recent job (or two). The bag-of-jobs model, which can condition on all jobs in a worker's history but cannot learn complex interactions between them, outperforms the Markov models but still falls short of CAREER, which can recognize and represent complex career trajectories. In Appendix C, we demonstrate that CAREER is well-equipped at forecasting future trajectories as well.

C FORECASTING RESUMES

We also perform the forecasting experiment on the large dataset of resumes. Each model is trained on resumes before 2015. To predict occupations for individuals after 2015, a model samples 1,000 trajectories for each individual, and averages probabilities to form a single prediction for each year. For more experimental details, see Appendix I.
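The trajectory-sampling forecast just described can be sketched as follows; predict_probs stands in for a fitted occupation model (the toy model below exists only so the example runs), and the real experiments also update dynamic covariates along each sampled trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast(history, predict_probs, n_years=3, n_samples=1000):
    """Forecast occupation probabilities by sampling multi-year trajectories.

    predict_probs maps a job history to a next-year distribution; the forecast for
    each future year averages the predicted probabilities across sampled trajectories.
    """
    n_jobs = len(predict_probs(history))
    totals = np.zeros((n_years, n_jobs))
    for _ in range(n_samples):
        traj = list(history)
        for year in range(n_years):
            probs = predict_probs(traj)
            totals[year] += probs                     # accumulate predicted probabilities
            traj.append(rng.choice(n_jobs, p=probs))  # sample to extend the trajectory
    return totals / n_samples

# Toy stand-in model: sticky transitions that mostly repeat the most recent job.
def toy_model(history, n_jobs=4):
    probs = np.full(n_jobs, 0.1 / (n_jobs - 1))
    probs[history[-1]] = 0.9
    return probs

print(forecast([0, 0, 1], toy_model).round(3))
```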
Table 4 depicts the forecasting results for the resumes dataset. Each fitted model is used to forecast occupation probabilities for three years into the future. CAREER makes the best forecasts, both overall and for each individual year. D QUALITATIVE ANALYSIS Rationalizing predictions. Figure 3 shows an example of a held-out career sequence from PSID. CAREER is much likelier than a regression and bag-of-jobs baseline to predict this individual’s next job, biological technician. To understand CAREER’s prediction, we show the model’s rationale, or the jobs in this individual’s history that are sufficient for explaining the model’s prediction. (We adapt the greedy rationalization method from Vafa et al. (2021); refer to Appendix I for more details.) In this example, CAREER only needs three previous jobs to predict biological technician: animal caretaker, engineering technician, and student. The model can combine latent attributes of each job to predict the individual’s next job. Representation similarity. To demonstrate the quality of the learned representations, we use CAREER’s fine-tuned representations on NLSY97 to find pairs of individuals with the most similar career trajectories. Specifically, we compute CAREEER’s representation ht(yt−1,xt) for each individual in NLSY97 who has worked for four years. We then measure the similarity between all pairs by computing the cosine similarity between representations. In order to depict meaningful matches, we only consider pairs of individuals with no overlapping jobs in their histories (otherwise the model would find individuals with the exact same career trajectories). Figure 4 depicts the career histories with the most similar CAREER representations. Although none of these pairs have overlapping jobs, the model learns representations that can identify similar careers. E TRANSFORMER DETAILS In this section, we expand on the simplified description of transformers in Section 2.3 and describe CAREER in full detail. Recall that the model estimates representations in L layers, h (1) t (yt−1,xt), . . . , h (L) t (yt−1,xt), with each representation h (`) t ∈ RD. The final representation h (L) t (yt−1,xt) is used to represent careers. We drop the explicit dependence on yt−1 and xt, and instead denote each representation as h(`)t . The first transformer layer combines the previous occupation, the most recent covariates, and the position of the occupation in the career. It first embeds each of these variables in D-dimensional space. Define an embedding function for occupations, ey : [J ] → RD. Additionally, define a separate embedding function for each covariate, {ec}Cc=1, with each ec : [Nc] → RD. Finally, define et : [T ] → RD to embed the position of the sequence, where T denotes the number of possible sequence lengths. The first-layer representation h(1)t sums these embeddings: h (1) t = ey(yt−1) + ∑ c ec(xtc) + et(t). (12) The occupation- and covariate-specific embeddings, ey and {ec}, are model parameters; the positional embeddings, et, are set in advance to follow a sinusoidal pattern (Vaswani et al., 2017). While these embeddings could also be parameterized, in practice the performance is similar, and using sinusoidal embeddings allows the model to generalize to career sequence lengths unseen in the training data. At each subsequent layer, the transformer combines the representations of all occupations in a history. 
It combines representations by performing multi-headed attention, which is similar to the process described in Section 2.3 albeit with multiple attention weights per layer. Specifically, it uses A specific attention weights, or heads, per layer. The number of heads A should be less than the representation dimension D. (Using A = 1 attention head reduces to the process described in Equations 5 and 6.) The representation dimension D should be divisible by A; denote K = D/A. First, A different sets of attention weights are computed: z (`) a,t,t′ = ( h (`) t )> W (`)a h (`) t′ for t ′ ≤ t πa,t,t′ = exp{za,t,t′}∑ k exp{za,t,k} , (13) where W (`)a ∈ RD×D is a model parameter, specific to attention head a and layer l.4 Each attention head forms a convex combination with all previous representations; to differentiate between attention heads, each representation is transformed by a linear transformation V (`)a ∈ RK×D unique to an attention head, forming b(`)a,t ∈ RK : b (`) a,t = ∑t t′=1 π (`) a,t,t′ ( V (`) a h (`) t′ ) . (14) All attention heads are combined into a single representation by concatenating them into a single vector g(`)t ∈ RD: g (`) t = ( b (`) 1,t, b (`) 2,t, . . . , b (`) A,t ) . (15) To complete the multi-head attention step and form the intermediate representation h̃(`)t , the concatenated representations g(`)t undergo a linear transformation and are summed with the pre-attention representation h(`)t : h̃ (`) t = h (`) t +M (`)g (`) t , (16) with M (`) ∈ RD×D. The intermediate representations h̃(`)t ∈ RD combine the representation at timestep t with those preceding timestep t. Each layer of the transformer concludes by taking a non-linear transformation of the intermediate representations. This non-linear transformation does not depend on any previous representation; it only transforms h̃(`)t . Specifically, h̃ (`) t is passed through a neural network: h (`+1) t = h̃ (`) t + FFN (`) ( h̃ (`) t ) , (17) where FFN(`) denotes a two-layer feedforward neural network with N hidden units, with FFN(`) : RD → RD. We repeat the multi-head attention and feedforward neural network updates above for L layers, using parameters unique to each layer. We represent careers with the last-layer representation, ht(yt−1,xt) = h (L) t (yt−1,xt). For our experiments, we use model specifications similar to the generative pretrained transformer (GPT) architecture (Radford et al., 2018). In particular, we use L = 12 layers, a representation dimension of D = 192, A = 3 attention heads, and N = 768 hidden units and the GELU nonlinearity (Hendrycks & Gimpel, 2016) for all feedforward neural networks. In total, this results in 5.6 million parameters. This model includes a few extra modifications to improve training: we use 0.1 dropout (Srivastava et al., 2014) for the feedforward neural network weights, and 0.1 dropout for the attention weights. Finally, we use layer normalization (Ba et al., 2016) before the updates in Equation 13, after the update in Equation 16, and after the final layer’s neural network update in Equation 17. 4For computational reasons, W (`)a is decomposed into two matrices and scaled by a constant, W (`) a = Q (`) a ( K (`) a )> √ K , with Q(`)a , K (`) a ∈ RD×K . F EXPLORATORY DATA ANALYSIS Table 5 depicts summary statistics of the resume dataset provided by Zippia that is used for pretraining CAREER. Table 6 compares this resume dataset with the longitudinal survey datasets of interest. 
G ONE-STAGE VS TWO-STAGE PREDICTION Table 7 compares the predictive performance of occupation models when they are modified to make predictions in two stages, following Equations 1 to 3. Incorporating two-stage prediction improves the performance of these models compared to Figure 2a; however, CAREER still makes the best predictions on all survey datasets. H DATA PREPROCESSING In this section, we go over the data preprocessing steps we took for each dataset. Resumes. We were given access to a large dataset of resumes of American workers by Zippia, a career planning company. This dataset coded each occupation into one of 1,073 O*NET 2010 Standard Occupational Classification (SOC) categories based on the provided job titles and descriptions in resumes. We dropped all examples with missing SOC codes. Each resume in the dataset we were given contained covariates that had been imputed based off other data in the resume. We considered three covariates: year, most recent educational degree, and location. Education degrees had been encoded into one of eight categories: high school diploma, associate, bachelors, masters, doctorate, certificate, license, and diploma. Location had been encoded into one of 50 states plus Puerto Rico, Washington D.C., and unknown, for when location could not be imputed. Some covariates also had missing entries. When an occupation’s year was missing, we had to drop it from the dataset, because we could not position it in an individual’s career. Whenever another covariate was missing, we replaced it with a special “missing” token. All personally identifiable information had been removed from the dataset. We transformed each resume in the dataset into a sequence of occupations. We included an entry for each year starting from the first year an individual worked to their last year. We included a special “beginning of sequence” token to indicate when each individual’s sequence started. For each year between an individual’s first and last year, we added the occupation they worked in during that year. If an individual worked in multiple occupations in a year, we took the one where the individual spent more time in that year; if they were both the same amount of time in the particular year, we broke ties by adding the occupation that had started earlier in the career. For the experiments predicting future jobs directly on resumes, we added a “no-observed-occupation” token for years where the resume did not list any occupations (we dropped this token when pretraining). Each occupation was associated with the individual’s most recent educational degree, which we treated as a dynamic covariate. The year an occupation took place was also considered a dynamic categorical covariate. We treated location as static. In total, this preprocessing left us with a dataset of 23.7 million resumes, and 245 million individual occupations. In order to transfer representations, we had to slightly modify the resumes dataset for pretraining to encode occupations and covariates into a format compatible with the survey datasets. The survey datasets we used were encoded with the “occ1990dd” occupation code (Autor & Dorn, 2013) rather than with O*NET’s SOC codes, so we converted the SOC codes to occ1990dd codes using a crosswalk posted online by Destin Royer. Even after we manually added a few missing entries to the crosswalks, there were some SOC codes that did not have corresponding occ1990dd’s. 
We gave these tokens special codes that were not used when fine-tuning on the survey datasets (because they did not correspond to occ1990dd occupations). When an individual did not work for a given year, the survey datasets differentiated between three possible states: unemployed, out-of-labor-force, and in-school. The resumes dataset did not have these categories. Thus, we initialized parameters for these three new occupational states randomly. Additionally, we did not include the “no-observedoccupation” token when pretraining, and instead dropped missing years from the sequence. Since we did not use gender and race/ethnicity covariates when pretraining, we also initialized these covariatespecific parameters randomly for fine-tuning. Because we used a version of the survey datasets that encoded each individual’s location as a geographic region rather than as a state, we converted each state in the resumes data to be in one of four regions for pretraining: northeast, northcentral, south, or west. We also added a fifth “other” region for Puerto Rico and for when a state was missing in the original dataset. We also converted educational degrees to levels of experience: we converted associate’s degree to represent some college experience and bachelor’s degree to represent fouryear college experience; we combined masters and doctorate to represent a single “graduate degree” category; and we left the other categories as they were. NLSY79. The National Longitudinal Survey of Youth 1979 (NLSY79) is a survey following individuals born in the United States between 1957-1964. The survey included individuals who were between 14 and 22 years old when they began collecting data in 1979; they interviewed individuals annually until 1994, and biennially thereafter. Each individual in the survey is associated with an ID, allowing us to track their careers over time. We converted occupations, which were initially encoded as OCC codes, into “occ1990dd” codes using a crosswalk (Autor & Dorn, 2013). We use a version of the survey that has entries up to 2014. Unlike the resumes dataset, NLSY79 includes three states corresponding to individuals who are not currently employed: unemployed, out-of-labor-force, and in-school. We include special tokens for these states in our sequences. We drop examples with missing occupation states. We also drop sequences for which the individual is out of the labor force for their whole careers. We use the following covariates: years, educational experience, location, race/ethnicity, and gender. We drop individuals with less than 9 years of education experience. We convert years of educational experience into discrete categories: no high school degree, high school degree, some college, college, and graduate degree. We convert geographic location to one of four regions: northeast, northcentral, south, and west. We treat location as a static variable, using each individual’s first location. We use the following race/ethnicities: white, African American, Asian, Latino, Native American, and other. We treat year and education as dynamic covariates whose values can change over time, and we consider the other covariates as static. This preprocessing leaves us with a dataset consisting of 12,270 individuals and 239,545 total observations. NLSY97. The National Longitudinal Survey of Youth 1997 (NLSY97) is a survey following individuals who were between 12 and 17 when the survey began in 1997. Individuals were interviewed annually until 2011, and biennially thereafter. 
Our preprocessing of this dataset is similar to that of NLSY79. We convert occupations from OCC codes into “occ1990dd” codes. We use a version of the survey that follows individuals up to 2019. We include tokens for unemployed, out-of-labor-force, and in-school occupational states. We only consider individuals who are over 18 and drop military-related occupations. We use the same covariates as NLSY79. We use the following race/ethnicities: white, African-aAmerican, Latino, and other/unknown. We convert years of educational experience into discrete categories: no high school degree, high school degree, some college degree, college degree, graduate degree, and a special token when the education status isn’t known. We use the same regions as NLSY79. We drop sequences for which the individual is out of the labor force for their whole careers. This preprocessing leaves us with a dataset consisting of 8,770 individuals and 114,141 total observations. PSID. The Panel Study of Income Dynamics (PSID) is a longitudinal panel survey following a sample of American families. It was collected annually between 1968 and 1997, and biennially afterwards. The dataset tracks families over time, but it only includes occupation information for the household head and their spouse, so we only include these observations. Occupations are encoded with OCC codes, which we convert to “occ1990dd” using a crosswalk (Autor & Dorn, 2013). Like the NLSY surveys, PSID also includes three states corresponding to individuals who are not currently employed: unemployed, out-of-labor-force, and in-school. We include special tokens for these states in our sequences. We drop other examples with missing or invalid occupation codes. We also drop sequences for which the individual is out of the labor force for their whole careers. We consider five covariates: year, education, location, gender, and race. We include observations for individuals who were added to the dataset after 1995 and include observations up to 2019. We exclude observations for individuals with less than 9 years of education experience. We convert years of education to discrete states: no high school, high school diploma, some college, college, and graduate degree. We convert geographic location to one of four regions: northeast, northcentral, south, and west. We treat location as a static variable, using each individual’s first location. We use the following races: white, Black, and other. We treat year and education as dynamic covariates whose values can change over time, and we consider the other covariates as static. This preprocessing leaves us with a dataset consisting of 12,338 individuals and 62,665 total observations. I EXPERIMENTAL DETAILS Baselines. We consider a first-order Markov model and a second-order Markov model (both without covariates) as baselines. These models are estimated by averaging observed transition counts. We smooth the first-order Markov model by taking a weighted average between the empirical transitions in the training set and the empirical distribution of individual jobs. We perform this smoothing to account for the fact that some feasible transitions may never occur in the training set due to the high-dimensionality of feasible transitions. We assign 0.99 weight to the empirical distributions of transitions and 0.01 to the empirical distribution of individual jobs. We smooth the secondorder model by assigning 0.5 weight to the empirical second-order transitions and 0.5 weight to the smoothed first-order Markov model. 
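As a sketch of how this smoothing could be implemented, the snippet below estimates the smoothed first-order Markov baseline from transition counts. The function and variable names are ours; the second-order model is obtained analogously by mixing the empirical second-order transitions 0.5/0.5 with this smoothed first-order matrix, and the fallback for unseen previous jobs is a choice of this sketch.

```python
import numpy as np

def smoothed_first_order_markov(transitions, jobs, J, w=0.99):
    """Smoothed first-order Markov baseline (illustrative sketch).

    transitions: iterable of (previous_job, next_job) index pairs from training.
    jobs: iterable of all observed job indices (for the marginal distribution).
    J: number of occupation codes; w: weight on the empirical transitions (0.99).
    """
    # Empirical distribution of individual jobs.
    marginal = np.bincount(np.asarray(jobs), minlength=J).astype(float)
    marginal /= marginal.sum()

    # Empirical first-order transition matrix, row-normalized.
    counts = np.zeros((J, J))
    for prev, nxt in transitions:
        counts[prev, nxt] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows for jobs never seen as a previous job fall back to the marginal.
    empirical = np.where(row_sums > 0, counts / np.maximum(row_sums, 1.0),
                         marginal[None, :])

    # Weighted average of empirical transitions and the marginal job distribution.
    return w * empirical + (1.0 - w) * marginal[None, :]
```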
When we add covariates to the Markov linear baseline, we also include manually constructed features about history to improve its performance. In total, we include the following categorical variables: the most recent job, the prior job, the year, a dummy indicating whether there has been more than one year since the most recent observed job, the education status, a dummy indicating whether the education status has changed, and state (for the experiments on NLSY79 and PSID, we also include an individual’s gender and race/ethnicity). We also add additive effects for the following continuous variables: the number of years an individual has been in the current job and the total number of years for which an individual has been in the dataset. In addition, we include an intercept term. For the bag-of-jobs model, we vary the representation dimensionD between 256-2048, and find that the predictive performance is not sensitive to the representation dimension, so we use D = 1024 for all experiments. For the LSTM model, we use 3 layers with 436 embedding dimensions so that the model size is comparable to the transformer baseline: the LSTM has 5.8 million parameters, the same number as the transformer. We also compare to NEMO (Li et al., 2017), an LSTM-based method developed for modeling job sequences in resumes. We adapted NEMO to model survey data. In its original setting, NEMO took as input static covariates (such as individual skill) and used these to predict both an individual’s next job title and their company. Survey datasets differ from this original setting in a few ways: covariates are time-varying, important covariates for predicting jobs on resumes (like skill) are missing, and an individual’s company name is unavailable. Therefore, we made several modifications to NEMO. We incorporated the available covariates from survey datasets by embedding them and adding them to the job embeddings passed into the LSTM, similar to the method CAREER uses to incorporate covariates. We removed the company-prediction objective, and instead only used the model to predict an individual’s job in the next timestep. We considered two sizes of NEMO: an architecture using the same number of parameters as CAREER, and the smaller architecture proposed in the original paper. We found the smaller architecture performed better on the survey datasets, so we used this for the experiments. This model contains 2 decoder layers and a hidden dimension of 200. We compare to two additional baselines developed in the data mining literature: job representation learning (Dave et al., 2018) and Job2Vec (Zhang et al., 2020). These methods require resumespecific features such as skills and textual descriptions of jobs and employers, which are not available for the economic longitudinal survey datasets we model. Thus, we adapt these baselines to be suitable for modeling economic survey data. Job representation learning (Dave et al., 2018) is based on developing two graphs, one for job transitions and one for skill transitions. Since worker skills are not available for longitudinal survey data, we adapt the model to only use job transitions by only including the terms in the objective that depend on job transitions. We make a few additional modifications, which we found to improve the performance of this model on our data. Rather than sampling 3-tuples from the directed graph of job transitions, we include all 2-tuple job transitions present in the data, identical to the other models we consider. 
Additionally, rather than using the contrastive objective in Equation 4 of Dave et al. (2018), we optimize the log-likelihood directly — this is more computationally intensive but leads to better results. Finally, we include survey-specific covariates (e.g. education, demographics, etc.) by adding them to wx, embedding the covariate of each most recent job to the same space as wx. We make similar modifications to Job2Vec (Zhang et al., 2020). Job2Vec requires job titles and descriptions of job keywords, which are unavailable for economic longitudinal survey datasets. Instead, we modify Equation 1 in Zhang et al. (2020) to model occupation codes rather than titles or keywords and optimize this log-likelihood as our objective. We also incorporate survey-specific covariates by embedding each covariate to the same space as ei and adding it ei before computing Equation 2 from Zhang et al. (2020), which we also found to improve performance. We follow Dave et al. (2018) and use 50 embedding dimensions for each model, and optimize with Adam using a maximum learning rate of 0.005, following the minibatch and warmup strategy described below. When we compared the transferred version of CAREER to a version of CAREER without pretrained representations, we tried various architectures for the non-pretrained version of CAREER. We found that, without pretraining, the large architecture we used for CAREER was prone to overfitting on the smaller survey datasets. So we performed an ablation of the non-pretrained CAREER with various architectures: we considered 4 and 12 layers, 64 and 192 embedding dimensions, 256 and 768 hidden units for the feedforward neural networks, and 2 or 3 attention heads (using 2 heads for D = 64 and 3 heads for D = 192 so that D was divisible by the number of heads). We tried all 8 combinations of these parameters on NLSY79, and found that the model with the best validation performance had 4 layers, D = 64 embedding dimensions, 256 hidden units, and 2 attention heads. We used this architecture for the non-pretrained version of CAREER on all survey datasets. Training. We randomly divide the resumes dataset into a training set of 23.6 million sequences, and a validation and test set of 23 thousand sequences each. We randomly divide the survey datasets into 70/10/20 train/test/validation splits. The first- and second-order Markov models without covariates are estimated from empirical transitions counts. We optimize all other models with stochastic gradient descent with minibatches. In total, we use 16,000 total tokens per minibatch, varying the batch size depending on the largest sequence length in the batch. We use the Adam learning rate scheduler (Kingma & Ba, 2015). All experiments on the resumes data warm up the learning rate from 10−7 to 0.0005 over 4,000 steps, after which the inverse square root schedule is used (Vaswani et al., 2017). For the survey datasets, we also used the inverse square root scheduler, but experimented with various learning rates and warmup updates, using the one we found to work best for each model. For CAREER with pretrained representations, we used a learning rate of 0.0001 and 500 warmup updates; for CAREER without pretraining, we used a learning rate of 0.0005 and 500 warmup updates; for the bag of jobs model, we used a learning rate of 0.0005 and 5,000 warmup updates; for the regression model, we used a learning rate of 0.0005 and 4,000 warmup updates. 
We use a learning rate of 0.005 for job representation learning and Job2Vec, with 5,000 warmup updates. All models besides were also trained with 0.01 weight decay. All models were trained using Fairseq (Ott et al., 2019). When training on resumes, we trained for 85,000 steps, using the checkpoint with the best validation performance. When fine-tuning on the survey datasets, we trained all models until they overfit to the validation set, again using the checkpoint with the best validation performance. We used half precision for training all models, with the exception of the following models (which were only stable with full precision): the bag of jobs model with covariates on the resumes data, and the regression models for all survey dataset experiments. The tables in Section 4 report results averaged over multiple random seeds. For the results in Figure 2a, the randomness includes parameter initialization and minibatch ordering. For CAREER, we use the same pretrained model for all settings. For the forecasting results in Table 1, the randomness is with respect to the Monte-Carlo sampling used to sample multi-year trajectories for individuals. For the wage prediction experiment in Table 2, the randomness is with respect to train/test splits. Forecasting. For the forecasting experiments, occupations that took place after a certain year are dropped from the train and validation sets. When we forecast on the resumes dataset, we use the same train/test/validation split but drop examples that took place after 2014. When we pretrain CAREER on the resumes dataset to make forecasts for PSID and NLSY97, we use a cutoff year of 2014 as well. We incorporate two-stage prediction into the baseline models because we find that this improves their predictions. Although we do not include any examples after the cutoff during training, all models require estimating year-specific terms. We use the fitted values from the last observed year to estimate these terms. For example, CAREER requires embedding each year. When the cutoff year is 2014, there do not exist embeddings for years after 2014, so we substitute the 2014 embedding. We report forecasting results on a split of the dataset containing examples before and after the cutoff year. To make predictions for an individual, we condition on all observations before the cutoff year, and sample 1,000 trajectories through the last forecasting year. We never condition on any occupations after the cutoff year, although we include updated values of dynamic covariates like education. For forecasting on the resumes dataset, we set the cutoff for 2014 and forecast occupations for 2015, 2016, and 2017. We restrict our test set to individuals in the original test set whose first observed occupation was before 2015 and who were observed to have worked until 2017. PSID and NLSY97 are biennial, so we forecast for 2015, 2017, and 2019. We only make forecasts for individuals who have observations before the cutoff year and through the last year of forecasting, resulting in a total of 16,430 observations for PSID and 18,743 for NLSY97. Wage prediction. For the wage prediction experiment, we use replication data provided by Blau & Kahn (2017b). We add individual’s job histories to this dataset by matching interview and person numbers. We drop individuals that could not be matched, about 3% of the data. When we apply CAREER to this data to learn a representation of job history, we do not use any covariates besides the year a job took place. 
We pretrain a version of CAREER containing 4 layers, 64 dimensions for the representations, 256 hidden units in the feedforward neural networks, and 2 attention heads. We pretrain on resumes for 50,000 steps. We fine-tune to predict jobs on PSID using the job histories of individuals up to the year of interest; for example, for the 2011 experiment, we only fine-tune on jobs that took place before 2011. We update parameters every 6 batches when fine-tuning. After fine-tuning CAREER’s representations to predict jobs, we plug in the learned representations into the wage regression in Equation 9. Notably, we do not alter CAREER’s representations to predict wage; we only estimate regression coefficients. We perform an unweighted linear regression. Our model without CAREER uses the same covariates as the wage regression in Blau & Kahn (2017a), including full- and part-time years of experience (and their squares), education, region, race/ethnicity, union status, current occupation, and current industry. We do not include whether an individual is a government worker because it results in instability for unweighted regression. Rather than estimate two separate models for males and females, we use a single model and include gender as an observed covariate. When we incorporate CAREER’s representations into the model, we use the same base model and add CAREER’s representations. Rationalization. The example in Figure 3 shows an example of CAREER’s rationale on PSID. To simplify the example, this is the rationale for a model trained on no covariates except year. In order to conceal individual behavior patterns, the example in Figure 3 is a slightly altered version of a real sequence. For this example, the transformer used for CAREER follows the architecture described in Radford et al. (2018). We find the rationale using the greedy rationalization method described in Vafa et al. (2021). Greedy rationalization requires fine-tuning the model for compatibility; we do this by fine-tuning with “job dropout”, where with 50% probability, we drop out a uniformly random amount of observations in the history. When making predictions, the model has to implicitly marginalize over the missing observations. (We pretrain on the resumes dataset without any word dropout). We find that training converges quickly when fine-tuning with word dropout, and the model’s performance when conditioning on the full history is similar. Greedy rationalization typically adds observations to a history one at a time in the order that will maximize the model’s likelihood of its top prediction. For occupations, the model’s top prediction is almost always identical to the previous year’s occupation, so we modify greedy rationalization to add the occupation that will maximize the likelihood of its second-largest prediction. This can be interpreted as equivalent to greedy rationalization, albeit conditioning on switching occupations. Thus, the greedy rationalization procedure stops when the model’s second-largest prediction from the target rationale is equivalent to the model’s second-largest prediction when conditioning on the full history.
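As a rough sketch of this procedure (not the exact implementation of Vafa et al. (2021)), the following Python shows the greedy loop described above. The callables predict and score are assumed stand-ins for querying the fine-tuned model on a partial history, and the exact selection and stopping criteria follow our reading of the text.

```python
def greedy_rationale(predict, score, history):
    """Greedy rationalization, adapted as described in the text (sketch).

    predict(subset) -> the model's top switching prediction (its second-largest
        overall prediction) given only the jobs in subset.
    score(subset, job) -> the model's probability of job given only subset.
    Both callables are assumptions of this sketch standing in for CAREER.
    """
    target = predict(history)                  # prediction from the full history
    rationale, remaining = [], list(history)
    # Add one observation at a time, choosing the one that most increases the
    # probability of the full-history prediction, and stop once the prediction
    # from the partial history matches it.
    while remaining and predict(rationale) != target:
        best = max(remaining, key=lambda obs: score(rationale + [obs], target))
        rationale.append(best)
        remaining.remove(best)
    return rationale
```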
1. What is the focus of the paper regarding the use of transformers for job sequence prediction?
2. What are the strengths of the proposed approach, particularly in terms of its application to labor data?
3. What are the weaknesses of the paper, especially regarding its technical novelty and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper uses a transformer to leverage a sizeable online resume dataset by pretraining on it and then fine-tuning on the small, carefully constructed longitudinal survey datasets. According to their experimental results, the approach shows a significant improvement over the current state of the art on the task of job sequence prediction. They also show that their approach can help a wage model provide better performance.

Strengths And Weaknesses
Strengths:
- Proposes an inspiring method for applying the transformer to the prediction of labor data by pretraining the model on a large online resume dataset and then fine-tuning it on the small survey datasets.
- Conducts comprehensive experiments, including both cross-sectional and over-time experiments, demonstrating the usefulness of the approach.
- Well-written paper.
- Reproducibility: provides the source code with a README and a detailed description of the experiments.

Weaknesses:
- The main concern regards the technical novelty of the paper: the authors only made two minor changes to the transformers used in NLP.
- Some related studies are missing.
- The baselines used are quite old.

Clarity, Quality, Novelty And Reproducibility
The presentation of the paper is quite clear, but the technical contribution is limited.
ICLR
Title CAREER: Transfer Learning for Economic Prediction of Labor Data Abstract Labor economists regularly analyze employment data by fitting predictive models to small, carefully constructed longitudinal survey datasets. Although modern machine learning methods offer promise for such problems, these survey datasets are too small to take advantage of them. In recent years large datasets of online resumes have also become available, providing data about the career trajectories of millions of individuals. However, standard econometric models cannot take advantage of their scale or incorporate them into the analysis of survey data. To this end we develop CAREER, a transformer-based model that uses transfer learning to learn representations of job sequences. CAREER is first fit to large, passivelycollected resume data and then fine-tuned to smaller, better-curated datasets for economic inferences. We fit CAREER to a dataset of 24 million job sequences from resumes, and fine-tune its representations on longitudinal survey datasets. We find that CAREER forms accurate predictions of job sequences, achieving state-of-the-art predictive performance on three widely-used economics datasets. We further find that CAREER can be used to form good predictions of other downstream variables; incorporating CAREER into a wage model provides better predictions than the econometric models currently in use. 1 INTRODUCTION A variety of economic analyses rely on models for predicting an individual’s future occupations. These models are crucial for estimating important economic quantities, such as gender or racial differences in unemployment (Hall, 1972; Fairlie & Sundstrom, 1999); they underpin causal analyses and decompositions that rely on simulating counterfactual occupations for individuals (Brown et al., 1980; Schubert et al., 2021); and they inform policy, by forecasting occupations with rising or declining market shares. These analyses typically involve fitting predictive models to longitudinal surveys that follow a cohort of individuals during their working career (Panel Study of Income Dynamics, 2021; Bureau of Labor Statistics, 2019a). Such surveys have been carefully collected to represent national demographics, ensuring that the economic analyses can generalize to larger populations. But these datasets are also small, usually containing only thousands of workers, because maintaining them requires regularly interviewing each individual. Consequently, economists use simple sequential models, where a worker’s next occupation depends on their history only through the most recent occupation (Hall, 1972) or a few summary statistics about the past (Blau & Riphahn, 1999). In recent years, however, much larger datasets of online resumes have also become available. In contrast to longitudinal surveys, these passively-collected datasets are not typically used directly for economic inferences because they contain noisy observations and they are missing important economic variables such as demographics and wage. However, they provide occupation sequences of millions of individuals, potentially expanding the scope of insights that can be obtained from analyses on downstream survey datasets. The simple econometric models currently in use cannot incorporate the complex patterns embedded in these larger datasets into the analysis of survey data. To this end, we develop CAREER, a neural sequence model of occupation trajectories. 
CAREER is designed to be pretrained on large-scale resume data and then fine-tuned to small and bettercurated survey data for economic prediction. Its architecture is based on the transformer language model (Vaswani et al., 2017), for which pretraining and fine-tuning has proven to be an effective paradigm for many NLP tasks (Devlin et al., 2019; Lewis et al., 2019). CAREER extends this transformer-based transfer learning approach to modeling sequences of occupations, rather than text. We will show that CAREER’s representations provide effective predictions of occupations on survey datasets used for economic analysis, and can be used as inputs to economic models for other downstream applications. To study this model empirically, we pretrain CAREER on a dataset of 24 million resumes provided by Zippia, a career planning company. We then fine-tune CAREER’s representations of job sequences to make predictions on three widely-used economic datasets: the National Longitudinal Survey of Youth 1979 (NLSY79), another cohort from the same survey (NLSY97), and the Panel Study of Income Dynamics (PSID). In contrast to resume data, these well-curated datasets are representative of the larger population. It is with these survey datasets that economists make inferences, ensuring their analyses generalize. In this study, we find that CAREER outperforms standard econometric models for predicting and forecasting occupations, achieving state-of-the-art performance on the three widely-used survey datasets. We further find that CAREER can be used to form good predictions of other downstream variables; incorporating CAREER into a wage model provides better predictions than the econometric models currently in use. We release code so that practitioners can train CAREER on their own datasets. In summary, we demonstrate that CAREER can leverage large-scale resume data to make accurate predictions on important datasets from economics. Thus CAREER ties together economic models for understanding career trajectories with transformer-based methods for transfer learning. (See Section 3 for details of related work.) A flexible predictive model like CAREER expands the scope of analyses that can be performed by economists and policy-makers. 2 CAREER Given an individual’s career history, what is the probability distribution of their occupation in the next timestep? We go over a class of models for predicting occupations before introducing CAREER, one such model based on transformers and transfer learning. 2.1 OCCUPATION MODELS Consider an individual worker. This person’s career can be defined as a series of timesteps. Here, we use a timestep of one year. At each timestep, this individual works in a job: it could be the same job as the previous timestep, or a different job. (Note we use the terms “occupation” and “job” synonymously.) We consider “unemployed” and “out-of-labor-force” to be special types of jobs. Define an occupation model to be a probability distribution over sequences of jobs. An occupation model predicts a worker’s job at each timestep as a function of all previous jobs and other observed characteristics of the worker. More formally, define an individual’s career to be a sequence (y1, . . . , yT ), where each yt ∈ {1, . . . , J} indexes one of J occupations at time t. Occupations are categorical; one example of a sequence could be (“cashier”, “salesperson”, ... , “sales manager”). At each timestep, an individual is also associated with C observed covariates xt = {xtc}Cc=1. 
Covariates are also categorical, with x_{tc} \in \{1, \dots, N_c\}. For example, if c corresponds to the most recent educational degree, x_{tc} could be "high school diploma" or "bachelors", and N_c is the number of types of educational degrees.^1 Define \mathbf{y}_t = (y_1, \dots, y_t) to index all jobs that have occurred up to time t, with the analogous definition for \mathbf{x}_t. At each timestep, an occupation model predicts an individual's job in the next timestep, p(y_t | \mathbf{y}_{t-1}, \mathbf{x}_t). This distribution conditions on covariates from the same timestep because these are "pre-transition." For example, an individual's most recent educational degree is available to the model as it predicts their next job.

^1 Some covariates may not evolve over time. We encode them as time-varying without loss of generality.

Note that an occupation model is a predictive rather than structural model. The model does not incorporate unobserved characteristics, like skill, when making predictions. Instead, it implicitly marginalizes over these unobserved variables, incorporating them into its predictive distribution.

2.2 REPRESENTATION-BASED TWO-STAGE MODELS

An occupation model's predictions are governed by an individual's career history; both whether an individual changes jobs and the specific job they may transition to depend on current and previous jobs and covariates. We consider a class of occupation models that make predictions by conditioning on a low-dimensional representation of work history, h_t(\mathbf{y}_{t-1}, \mathbf{x}_t) \in \mathbb{R}^D. This representation is assumed to be a sufficient statistic of the past; h_t(\mathbf{y}_{t-1}, \mathbf{x}_t) should contain the relevant observed information for predicting the next job.

Since individuals frequently stay in the same job between timesteps, we propose a class of models that make predictions in two stages. These models first predict whether an individual changes jobs, after which they predict the specific job to which an individual transitions. The representation is used in both stages.

In the first stage, the career representation h_t(\mathbf{y}_{t-1}, \mathbf{x}_t) is used to predict whether an individual changes jobs. Define the binary variable s_t to be 1 if a worker's job at time t is different from that at time t-1, and 0 otherwise. The first stage is a logistic regression,

s_t \mid \mathbf{y}_{t-1}, \mathbf{x}_t \sim \mathrm{Bernoulli}\big(\sigma(\eta \cdot h_t(\mathbf{y}_{t-1}, \mathbf{x}_t))\big), \quad (1)

where \sigma(\cdot) is the logistic function and \eta \in \mathbb{R}^D is a vector of coefficients. If the model predicts that an individual will transition jobs, it only considers jobs that are different from the individual's most recent job. To formulate this prediction, it combines the career representation with a vector of occupation-specific coefficients \beta_j \in \mathbb{R}^D:

p(y_t = j \mid \mathbf{y}_{t-1}, \mathbf{x}_t, s_t = 1) = \frac{\exp\{\beta_j \cdot h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)\}}{\sum_{j' \ne y_{t-1}} \exp\{\beta_{j'} \cdot h_t(\mathbf{y}_{t-1}, \mathbf{x}_t)\}}. \quad (2)

Otherwise, the next job is deterministic:

p(y_t = j \mid \mathbf{y}_{t-1}, \mathbf{x}_t, s_t = 0) = \delta_{j = y_{t-1}}. \quad (3)

Two-stage prediction improves the accuracy of occupation models. Moreover, many analyses of occupational mobility focus on whether workers transition jobs rather than the specific job they transition to (Kambourov & Manovskii, 2008). By separating the mechanism by which a worker either keeps or changes jobs (\eta) and the specific job they may transition to (\beta_j), two-stage models are more interpretable for studying occupational change.

Equations 1 to 3 define a two-stage representation-based occupation model. In the next section, we introduce CAREER, one such model based on transformers.
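A minimal NumPy sketch of how Equations 1 to 3 combine into a single next-job distribution (marginalizing over s_t) is given below; the function and array names are ours.

```python
import numpy as np

def two_stage_probs(h, prev_job, eta, beta):
    """Next-job distribution of the two-stage model (Equations 1-3), as a sketch.

    h: (D,) career representation h_t(y_{t-1}, x_t).
    prev_job: index of the most recent occupation y_{t-1}.
    eta: (D,) coefficients of the job-change stage.
    beta: (J, D) occupation-specific coefficients.
    Returns a length-J vector p(y_t = . | y_{t-1}, x_t).
    """
    # Equation 1: probability that the individual switches occupations.
    p_change = 1.0 / (1.0 + np.exp(-eta @ h))

    # Equation 2: softmax over every occupation except the most recent one.
    scores = beta @ h
    scores[prev_job] = -np.inf
    scores -= scores[np.isfinite(scores)].max()
    switch_probs = np.exp(scores)
    switch_probs /= switch_probs.sum()

    # Equation 3 and marginalization over s_t: staying keeps the previous job.
    probs = p_change * switch_probs
    probs[prev_job] = 1.0 - p_change
    return probs
```

The returned vector sums to one by construction: the switching mass p_change is spread over jobs other than y_{t-1}, and the remaining 1 - p_change is placed on the previous job.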
2.3 CAREER MODEL

We develop a two-stage representation-based occupation model called CAREER.^2 This model uses a transformer to parameterize a representation of an individual's history. This representation is pretrained on a large resumes dataset and fine-tuned to make predictions on small survey datasets.

Transformers. A transformer is a sequence model that uses neural networks to learn representations of discrete tokens (Vaswani et al., 2017). Transformers were originally developed for natural language processing (NLP), to predict words in a sentence. Transformers are able to model complex dependencies between words, and they are a critical component of modern NLP systems including language modeling (Radford et al., 2019) and machine translation (Ott et al., 2018).

CAREER is an occupation model that uses a transformer to parameterize a low-dimensional representation of careers. While transformers were developed to model sequences of words, CAREER uses a transformer to model sequences of jobs. The transformer enables the model to represent complex career trajectories. CAREER is similar to the transformers used in NLP, but with two modifications. First, as described in Section 2.2, the model makes predictions in two stages, making it better-suited to model workers who stay in the same job through consecutive timesteps. (In contrast, words seldom repeat.) Second, while language models only condition on previous words, each career is also associated with covariates x that may affect transition distributions (see Equation 2). We adapt the transformer to these two changes.

^2 CAREER is short for "Contextual Attention-based Representations of Employment Encoded from Resumes."

Parameterization. CAREER's computation graph is depicted in Figure 1. Note that in this section we provide a simplified description of the ideas underlying the transformer. Appendix E contains a full description of the model.

CAREER iteratively builds a representation of career history, h_t(\mathbf{y}_{t-1}, \mathbf{x}_t) \in \mathbb{R}^D, using a stack of L layers. Each layer applies a series of computations to the previous layer's output to produce its own layer-specific representation. The first layer's representation, h_t^{(1)}(\mathbf{y}_{t-1}, \mathbf{x}_t), considers only the most recent job and covariates. At each subsequent layer \ell, the transformer forms a representation h_t^{(\ell)}(\mathbf{y}_{t-1}, \mathbf{x}_t) by combining the representation of the most recent job with those of preceding jobs. Representations become increasingly complex at each layer, and the final layer's representation, h_t^{(L)}(\mathbf{y}_{t-1}, \mathbf{x}_t), is used to make predictions following Equations 1 to 3. We drop the explicit dependence on \mathbf{y}_{t-1} and \mathbf{x}_t going forward, and instead denote each layer's representation as h_t^{(\ell)}.

The first layer's representation combines the previous job, the most recent covariates, and the position of the job in the career. It first embeds each of these variables in D-dimensional space. Define an embedding function for occupations, e_y : [J] \to \mathbb{R}^D. Additionally, define a separate embedding function for each covariate, \{e_c\}_{c=1}^C, with each e_c : [N_c] \to \mathbb{R}^D. Finally, define e_t : [T] \to \mathbb{R}^D to embed the position of the sequence, where T denotes the number of possible sequence lengths. The first-layer representation h_t^{(1)} sums these embeddings:

h_t^{(1)} = e_y(y_{t-1}) + \sum_c e_c(x_{tc}) + e_t(t). \quad (4)
For each subsequent layer \ell, the transformer combines representations of the most recent job with those of the preceding jobs and passes them through a neural network:

\pi_{t,t'}^{(\ell)} \propto \exp\big\{\big(h_t^{(\ell)}\big)^\top W^{(\ell)} h_{t'}^{(\ell)}\big\} \quad \text{for all } t' \le t, \quad (5)

\tilde{h}_t^{(\ell)} = h_t^{(\ell)} + \sum_{t'=1}^{t} \pi_{t,t'}^{(\ell)} \, h_{t'}^{(\ell)}, \quad (6)

h_t^{(\ell+1)} = \mathrm{FFN}^{(\ell)}\big(\tilde{h}_t^{(\ell)}\big), \quad (7)

where W^{(\ell)} \in \mathbb{R}^{D \times D} is a model parameter and \mathrm{FFN}^{(\ell)} is a two-layer feedforward neural network specific to layer \ell, with \mathrm{FFN}^{(\ell)} : \mathbb{R}^D \to \mathbb{R}^D. The weights \{\pi_{t,t'}^{(\ell)}\} are referred to as attention weights, and they are determined by the career representations and W^{(\ell)}. The attention weights are non-negative and normalized to sum to 1.

The matrix W^{(\ell)} can be interpreted as a similarity matrix; if W^{(\ell)} is the identity matrix, occupations t and t' that have similar representations will have large attention weights, and thus t' would contribute more to the weighted average in Equation 6. Conversely, if W^{(\ell)} is the negative identity matrix, occupations that have differing representations will have large attention weights.^3 The final computation of each layer involves passing the intermediate representation \tilde{h}_t^{(\ell)} through a neural network, which ensures that representations capture complex nonlinear interactions.

^3 In practice, transformers use multiple attention weights to perform multi-headed attention (Appendix E).

The computations in Equations 5 to 7 are repeated for each of the L layers. The last layer's representation is used to predict the next job:

p(y_t | \mathbf{y}_{t-1}, \mathbf{x}_t) = \text{two-stage-softmax}\big(h_t^{(L)}; \eta, \beta\big), \quad (8)

where "two-stage-softmax" refers to the operation in Equations 1 to 3, parameterized by \eta and \beta. All of CAREER's parameters – including the embedding functions, similarity matrices, feedforward neural networks, and regression coefficients \eta and \beta – are estimated by maximizing the likelihood in Equation 8 with stochastic gradient descent (SGD), marginalizing out the variable s_t.

Transfer learning. Economists apply occupation models to survey datasets that have been carefully collected to represent national demographics. In the United States, these datasets contain a small number of individuals. While transformers have been successfully applied to large NLP datasets, they are prone to overfitting on small datasets (Kaplan et al., 2020; Dosovitskiy et al., 2021; Variš & Bojar, 2021). As such, CAREER may not learn useful representations solely from small survey datasets.

In recent years, however, much larger datasets of online resumes have also become available. Although these passively-collected datasets provide job sequences of many more individuals, they are not used for economic estimation for a few reasons. The occupation sequences from resumes are imputed from short textual descriptions, a process that inevitably introduces more noise and errors than collecting data from detailed questionnaires. Additionally, individuals may not accurately list their work experiences on resumes (Wexler, 2006), and important economic variables relating to demographics and wage are not available. Finally, these datasets are not constructed to ensure that they are representative of the general population.

Between these two types of data is a tension. On the one hand, resume data is large-scale and contains valuable information about employment patterns. On the other hand, survey datasets are carefully collected, designed to help make economic inferences that are robust and generalizable. Thus CAREER incorporates the patterns embedded in large-scale resume data into the analysis of survey datasets.
It does this through transfer learning: CAREER is first pretrained on a large dataset of resumes to learn an initial representation of careers. When CAREER is then fit to a small survey dataset, parameters are not initialized randomly; instead, they are initialized with the representations learned from resumes. After initialization, all parameters are fine-tuned on the small dataset by optimizing the likelihood. Because the objective function is non-convex, learned representations depend on their initial values. Initializing with the pretrained representations ensures that the model does not need to re-learn representations on the small dataset. Instead, it only adjusts representations to account for dataset differences.

This transfer learning approach takes inspiration from similar methods in NLP, such as BERT and the GPT family of models (Devlin et al., 2019; Radford et al., 2018). These methods pretrain transformers on large corpora, such as unpublished books or Wikipedia, and fine-tune them to make predictions on small datasets such as movie reviews. Our approach is analogous. Although the resumes dataset may not be representative or carefully curated, it contains many more job sequences than most survey datasets. This volume enables CAREER to learn representations that transfer to survey datasets.

3 RELATED WORK

Many economic analyses use log-linear models to predict jobs in survey datasets (Boskin, 1974; Schmidt & Strauss, 1975). These models typically use small state spaces consisting of only a few occupation categories. For example, some studies categorize occupations into broad skill groups (Keane & Wolpin, 1997; Cortes, 2016); unemployment analyses only consider employment status (employed, unemployed, and out-of-labor-force) (Hall, 1972; Lauerova & Terrell, 2007); and researchers studying occupational mobility only consider occupational change, a binary variable indicating whether an individual changes jobs (Kambourov & Manovskii, 2008; Guvenen et al., 2020). Although transitions between occupations may depend richly on history, many of these models condition on only the most recent job and a few manually constructed summary statistics about history to make predictions (Hall, 1972; Blau & Riphahn, 1999). In contrast to these methods, CAREER is nonlinear and conditions on every job in an individual's history. The model learns complex representations of careers without relying on manually constructed features. Moreover, CAREER can effectively predict from among hundreds of occupations.

Recently, the proliferation of business networking platforms has resulted in the availability of large resume datasets. Schubert et al. (2021) use a large resume dataset to construct a first-order Markov model of job transitions; CAREER, which conditions on all jobs in a history, makes more accurate predictions than a Markov model. Models developed in the data mining community rely on resume-specific features such as stock prices (Xu et al., 2018), worker skill (Ghosh et al., 2020), network information (Meng et al., 2019; Zhang et al., 2021), and textual descriptions (He et al., 2021), and are not applicable to the survey datasets we aim to model in this paper (other models reduce to a first-order Markov model without these features (Dave et al., 2018; Zhang et al., 2020)).
The most suitable model for survey datasets from this line of work is NEMO, an LSTM-based model that is trained on large resume datasets (Li et al., 2017). Our experiments demonstrate that CAREER outperforms NEMO when it is adapted to model survey datasets. Recent works in econometrics have applied machine learning methods to sequences of jobs and other discrete data. Ruiz et al. (2020) develop a matrix factorization method called SHOPPER to model supermarket basket data. We consider a baseline “bag-of-jobs” model similar to SHOPPER. Like the transformer-based model, the bag-of-jobs model conditions on every job in an individual’s history, but it uses relatively simple representations of careers. Our empirical studies demonstrate that CAREER learns complex representations that are better at modeling job sequences. Rajkumar et al. (2021) build on SHOPPER and propose a Bayesian factorization method for predicting job transitions. Similar to CAREER, they predict jobs in two stages. However, their method is focused on modeling individual transitions, so it only conditions on the most recent job in an individual’s history. In our empirical studies, we show that models like CAREER that condition on every job in an individual’s history form more accurate predictions than Markov models. CAREER is based on a transformer, a successful model for representing sequences of words in natural language processing (NLP). In econometrics, transformers have been applied to the text of job descriptions to predict their salaries (Bana, 2021) or authenticity (Naudé et al., 2022); rather than modeling text, we use transformers to model sequences of occupations. Transformers have also been applied successfully to sequences other than text: images (Dosovitskiy et al., 2021), music (Huang et al., 2019), and molecular chemistry (Schwaller et al., 2019). Inspired by their success in modeling a variety of complex discrete sequential distributions, this paper adapts transformers to modeling sequences of jobs. Transformers are especially adept at learning transferrable representations of text from large corpora (Radford et al., 2018; Devlin et al., 2019). We show that CAREER learns representations of job sequences that can be transferred from noisy resume datasets to smaller, wellcurated administrative datasets. 4 EMPIRICAL STUDIES We assess CAREER’s ability to predict jobs and provide useful representations of careers. We pretrain CAREER on a large dataset of resumes, and transfer these representations to small, commonly used survey datasets. With the transferred representations, the model is better than econometric baselines at both held-out prediction and forecasting. Additionally, we demonstrate that CAREER’s representations can be incorporated into standard wage prediction models to make better predictions. Resume pretraining. We pretrain CAREER on a large dataset of resumes provided by Zippia Inc., a career planning company. This dataset contains resumes from 23.7 million working Americans. Each job is encoded into one of 330 occupational codes, using the coding scheme of Autor & Dorn (2013). We transform resumes into sequences of jobs by including an occupation’s code for each year in the resume. For years with multiple jobs, we take the job the individual spent the most time in. We include three covariates: the year each job in an individual’s career took place, along with the individual’s state of residence and most recent educational degree. We denote missing covariates with a special token. 
See Appendix F for an exploratory data analysis of the resume data. CAREER uses a 12-layer transformer with 5.6 million parameters. Pretraining CAREER on the resumes data takes 18 hours on a single GPU. Although our focus is on fine-tuning CAREER to model survey datasets rather than resumes, CAREER also outperforms standard econometric baselines for modeling resumes; see Appendix B for more details. Survey datasets. We transfer CAREER to three widely-used survey datasets: two cohorts from the National Longitudinal Survey of Youth (NLSY79 and NLSY97) and the Panel Study of Income Dynamics (PSID). These datasets have been carefully constructed to be representative of the general population, and they are widely used by economists for estimating important quantities. NLSY79 is a longitudinal panel survey following a cohort of Americans who were between 14 and 22 when the survey began in 1979, while NLSY97 follows a different cohort of individuals who were between 12 and 17 when the survey began in 1997. PSID is a longitudinal survey following a sample of American families, with individuals added over the years. Compared to the resumes dataset, these survey datasets are small: we use slices of NLSY79, NLSY97, and PSID that contain 12 thousand, 9 thousand, and 12 thousand individuals, respectively. The distribution of job sequences in resumes differs in meaningful ways from those in the survey datasets; for example, manual laborers are under-represented and college graduates are overrepresented in resume data (see Appendix F for more details). We pretrain CAREER on the large resumes dataset and fine-tune on the smaller survey datasets. The fine-tuning process is efficient; although CAREER has 5.6 million parameters, fine-tuning on one GPU takes 13 minutes on NLSY79, 7 minutes on NLSY97, and 23 minutes on PSID. We compare CAREER to several baseline models: a second-order linear regression with covariates and hand-constructed summary statistics about past employment (a common econometric model used to analyze these survey datasets – see Section 3); a bag-of-jobs model inspired by SHOPPER (Ruiz et al., 2020) that conditions on all jobs and covariates in a history but combines representations linearly; and several baselines developed in the data-mining community for modeling worker profiles: NEMO (Li et al., 2017), job representation learning (Dave et al., 2018), and Job2Vec (Zhang et al., 2020). As described in Section 3, the baselines developed in the data-mining community for modeling worker profiles cannot be applied directly to economic survey datasets and thus require modifications, described in detail in Appendix I. We also compare to two additional versions of CAREER — one without pretraining or two-stage prediction, the other only without two-stage prediction — to assess the sources of CAREER’s improvements. All models use the covariates we included for resume pretraining, in addition to demographic covariates (which are recorded for the survey datasets but are unavailable for resumes). We divide all survey datasets into 70/10/20 train/validation/test splits, and train all models by optimizing the log-likelihood with Adam (Kingma & Ba, 2015). We evaluate the predictive performance of each model by computing held-out perplexity, a common metric in NLP for evaluating probabilistic sequence models. The perplexity of a sequence model p on a sequence y1, . . . 
, y_T is \exp\{-\frac{1}{T}\sum_{t=1}^{T} \log p(y_t \mid \mathbf{y}_{t-1}, \mathbf{x}_t)\}. It is a monotonic transformation of log-likelihood; better predictive models have lower perplexities. We train all models to convergence and use the checkpoint with the best validation perplexity. See Appendix I for more experimental details.

[Figure 2: Prediction results on longitudinal survey datasets and scaling law. (a) Test perplexity on survey datasets. Results are averaged over three random seeds. CAREER (vanilla) includes covariates but not two-stage prediction or pretraining; CAREER (two-stage) adds two-stage prediction. (b) CAREER's scaling law on NLSY79 as a function of pretraining data volume.]

Figure 2a compares the test-set perplexity of each model. With the transferred representations, CAREER makes the best predictions on all survey datasets, achieving state-of-the-art performance. The baselines developed in the data mining literature, which were designed to model large resume datasets while relying on resume-specific features, struggle to make good predictions on these small survey datasets, performing on par with standard econometric baselines. Pretraining is the biggest source of CAREER's improvements. Although the resume data is noisy and differs in many ways from the survey datasets used for economic prediction, CAREER learns useful representations of work experiences that aid its predictive performance. In Appendix G we show that modifying the baselines to incorporate two-stage prediction (Equations 1 to 3) improves their performance, although CAREER still makes the best predictions across datasets. We include qualitative analysis of CAREER's predictions in Appendix D.

To assess how the volume of resumes used for pretraining affects CAREER's predictions on survey datasets, we downsample the resume dataset and transfer to survey datasets. The scaling law for NLSY79 is depicted in Figure 2b. When there are less than 20,000 examples in the resume dataset, pretraining CAREER does not offer any improvement. The relationship between pretraining volume and fine-tuned perplexity follows a power law, similar to scaling laws in NLP (Kaplan et al., 2020).

We also assess CAREER's ability to forecast future career trajectories. In contrast to predicting held-out sequences, forecasting involves training models on all sequences before a specific year. To predict future jobs for an individual, the fitted model is used to estimate job probabilities six years into the future by sampling multi-year trajectories. This setting is useful for assessing a model's ability to make long-term predictions, especially as occupational trends change over time. We evaluate CAREER's forecasting abilities on NLSY97 and PSID. (These datasets are more valuable for forecasting than NLSY79, which follows a cohort that is near or past retirement age.) We train models on all sequences (holding out 10% as a validation set), without including any observations after 2014. When pretraining CAREER on resumes, we also make sure to only include examples up to 2014. Table 1 compares the forecasting performance of all models. CAREER makes the best overall forecasts. CAREER has a significant advantage over baselines at making long-term forecasts, yielding a 17% advantage over the best baseline for 6-year forecasts on NLSY97. Again, the baselines developed for resume data mining, which had been developed to model much larger corpora, struggle to make good predictions on these smaller survey datasets.

Downstream applications. In addition to forming job predictions, CAREER learns low-dimensional representations of job histories. Although these representations were formed to predict jobs in a sequence, they can also be used as inputs to economic models for downstream applications. As an example of how CAREER's representations can be incorporated into other economic models, we use CAREER to predict wages.
Economists build wage prediction models in order to estimate important economic quantities, such as the adjusted gender wage gap. For example, to estimate this wage gap, Blau & Kahn (2017a) regress an individual's log-wage on observable characteristics such as education, demographics, and current occupation for six different years on PSID. Rather than including the full, high-dimensional job history, the model summarizes an individual's career with summary statistics such as full-time and part-time years of experience (and their squares). We incorporate CAREER's representation into the wage regression by adding the fitted representation for an individual's job history, $\hat{h}_i$. For log-wage $w_i$ and observed covariates $x_i$, we regress $w_i \sim \alpha + \theta^\top x_i + \gamma^\top \hat{h}_i$ (9), where $\alpha$, $\theta$, and $\gamma$ are regression coefficients. We pretrain CAREER to predict jobs on resumes, and for each year we fine-tune on job sequences of the cohort up to that year. For example, in the 1999 wage regression, we fine-tune CAREER only on the sequences of jobs until 1999 and plug in the fixed representation to the wage regression. We do not include any covariates (except year) when training CAREER. We run each wage regression on 80% of the training data and evaluate mean-squared error on the remaining 20% (averaging over 10 random splits). Table 2 shows that adding CAREER's representations improves wage predictions for each year. Although these representations are fine-tuned to predict jobs on a small dataset (each year contains less than 5,000 workers) and are not adjusted to account for wage, they contain information that is predictive of wage. By summarizing complex career histories with a low-dimensional representation, CAREER provides representations that can improve downstream economic models, resulting in more accurate estimates of important economic quantities. 5 CONCLUSION We introduced CAREER, a method for representing job sequences from large-scale resume data and fine-tuning them on smaller datasets of interest. We took inspiration from modern language modeling to develop a transformer-based occupation model. We transferred the model from a large dataset of resumes to smaller survey datasets in economics, where it achieved state-of-the-art performance for predicting and forecasting career outcomes. We demonstrated that CAREER's representations can be incorporated into wage prediction models, outperforming standard econometric models. One direction of future research is to incorporate CAREER's representations of job history into methods for estimating adjusted quantities, like wage gaps. Underlying these methods are models that predict economic outcomes as a function of observed covariates. However, if relevant variables are omitted, the adjusted estimates may be affected; e.g., excluding work experience from wage prediction may change the magnitude of the estimated gap. In practice, economists include hand-designed summary statistics to overcome this problem, such as in Blau & Kahn (2017a). CAREER provides a data-driven way to incorporate such variables: its representations of job history could be incorporated into downstream prediction models and lead to more accurate adjustments of economic quantities. Ethics statement. As discussed, passively-collected resume datasets are not curated to represent national demographics. Pretraining CAREER on these datasets may result in representations that are affected by sampling bias.
Although these representations are fine-tuned on survey datasets that are carefully constructed to represent national demographics, the biases from pretraining may propagate through fine-tuning (Ravfogel et al., 2020; Jin et al., 2021). Moreover, even in representative datasets, models may form more accurate predictions for majority groups due to data volume (Dwork et al., 2018). Thus, we encourage practitioners to audit noisy resume data, re-weight samples as necessary (Kalton, 1983), and review accuracy within demographics before using the model for downstream economic analysis. Although resume datasets may contain personally identifiable information, all personally identifiable information had been removed before we were given access to the resume dataset we used for pretraining. Additionally, none of the longitudinal survey datasets contain personally identifiable information. Reproducibility statement. The supplementary material contains code for reproducing the experimental results in this paper, with the README containing detailed instructions for reproducing specific experiments. Our data-use agreement prohibits us from releasing the dataset of resumes used for pretraining. However, similar (private) resume datasets have become increasingly common in applied economics analyses (Azar et al., 2020; Schubert et al., 2021), and we include pretraining code so practitioners can reproduce our results with resume datasets they have access to. Additionally, all longitudinal survey datasets are available publicly online (Bureau of Labor Statistics, 2019a;b; Panel Study of Income Dynamics, 2021). A ECONOMETRIC BASELINES In this section, we describe baseline occupation models that economists have used to model jobs and other discrete sequences. Markov models and regression. A first-order Markov model assumes the job at each timestep depends on only the previous job (Hall, 1972; Poterba & Summers, 1986). Without covariates, a Markov model takes the form $p(y_t = j \mid y_1, \dots, y_{t-1}) = p(y_t = j \mid y_{t-1})$. The optimal transition probabilities reflect the overall frequencies of individuals transitioning from occupation $y_{t-1}$ to occupation $j$. In a second-order Markov model, the next job depends on the previous two. A multinomial logistic regression can be used to incorporate covariates: $p(y_t = j \mid y_{t-1}, x_t) \propto \exp\{\beta_j^{(0)} + \beta_j^{(1)} \cdot y_{t-1} + \sum_c \beta_j^{(c)} \cdot x_{tc}\}$ (10), where $\beta_j^{(0)}$ is an occupation-specific intercept and $y_{t-1}$ and $x_{tc}$ denote $J$- and $N_c$-dimensional indicator vectors, respectively. Equation 10 depends on history only through the most recent job, although the covariates can also include hand-crafted summary statistics about the past, such as the duration of the most recent job (McCall, 1990). This model is fit by maximizing the likelihood with gradient-based methods.
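To make the regression baseline in Equation 10 concrete, the following is a minimal sketch (an illustration, not the authors' released code); the vocabulary size, covariate cardinalities, and the training loop are placeholder assumptions:

import torch
import torch.nn as nn

class MarkovRegression(nn.Module):
    # Multinomial logistic regression over the previous job and categorical covariates (Equation 10).
    def __init__(self, n_jobs, covariate_sizes):
        super().__init__()
        self.intercept = nn.Parameter(torch.zeros(n_jobs))                             # beta_j^(0)
        self.prev_job = nn.Embedding(n_jobs, n_jobs)                                   # rows give beta_j^(1) . y_{t-1}
        self.covs = nn.ModuleList([nn.Embedding(n, n_jobs) for n in covariate_sizes])  # beta_j^(c) . x_{tc}

    def forward(self, prev_job_ids, covariate_ids):
        # prev_job_ids: (batch,) indices; covariate_ids: list of (batch,) index tensors, one per covariate.
        logits = self.intercept + self.prev_job(prev_job_ids)
        for emb, ids in zip(self.covs, covariate_ids):
            logits = logits + emb(ids)
        return logits

The model is fit by maximizing the likelihood of the observed next job, for example with nn.CrossEntropyLoss on the logits and a gradient-based optimizer such as Adam.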
Bag-of-jobs. A weakness of the first-order Markov model is that it only uses the most recent job to make predictions. However, one's working history beyond the last job may inform future transitions (Blau & Riphahn, 1999; Neal, 1999). Another baseline we consider is a bag-of-jobs model, inspired by SHOPPER, a probabilistic model of consumer choice (Ruiz et al., 2020). Unlike the Markov and regression models, the bag-of-jobs model conditions on every job in an individual's history. It does so by learning a low-dimensional representation of an individual's history. This model learns a unique embedding for each occupation, similar to a word embedding (Bengio et al., 2003; Mikolov et al., 2013); unlike CAREER, which learns complicated nonlinear interactions between jobs in a history, the bag-of-jobs model combines jobs into a single representation by averaging their embeddings. The bag-of-jobs model assumes that job transitions depend on two terms: a term that captures the effect of the most recent job, and a term that captures the effect of all prior jobs. Accordingly, the model learns two types of representations: an embedding $\alpha_j \in \mathbb{R}^D$ of the most recent job $j$, and an embedding $\rho_{j'} \in \mathbb{R}^D$ for prior jobs $j'$. To combine the representations for all prior jobs into a single term, the model averages embeddings: $p(y_t = j \mid y_1, \dots, y_{t-1}) \propto \exp\{\beta_j^{(1)} \cdot \alpha_{y_{t-1}} + \beta_j^{(2)} \cdot (\frac{1}{t-2}\sum_{t'=1}^{t-2} \rho_{y_{t'}})\}$ (11). Covariates can be added to the model analogously; for a single covariate, its most recent value is embedded and summed with the average embeddings for its prior values. All parameters are estimated by maximizing the likelihood in Equation 11 with SGD. B RESUME PREDICTIONS Although our focus is on modeling survey datasets, we also compare CAREER to several econometric baselines for predicting job sequences in resumes. We consider a series of models without covariates: a first- and second-order Markov model, a bag-of-jobs model (Equation 11), and a transformer with the same architecture as CAREER except without covariates. We also compare to econometric models that use covariates: a second-order linear regression with covariates and hand-constructed features (such as how long an individual has worked in their current job), and a bag-of-jobs model with covariates (Appendix I has more details). We randomly divide the resumes dataset into a training set of 23.6 million sequences, and a validation and test set of 23 thousand sequences each. Table 3 compares the test-set predictive performance of all models. CAREER is the best at predicting held-out sequences. To understand the types of transitions contributing to CAREER's predictive advantage, we decompose predictions into three categories: consecutive repeats (when the next job is the same as the previous year's), nonconsecutive repeats (when the next job is different from the previous year's, but is the same as one of the prior jobs in the career), and new jobs. CAREER has a clear advantage over the baselines in all three categories, but the biggest improvement comes when predicting jobs that have been repeated non-consecutively. The transformer model is at an advantage over the Markov models for these kinds of predictions because it is able to condition on an individual's entire working history, while a Markov model is constrained to use only the most recent job (or two). The bag-of-jobs model, which can condition on all jobs in a worker's history but cannot learn complex interactions between them, outperforms the Markov models but still falls short of CAREER, which can recognize and represent complex career trajectories. In Appendix C, we demonstrate that CAREER is well-equipped to forecast future trajectories as well. C FORECASTING RESUMES We also perform the forecasting experiment on the large dataset of resumes. Each model is trained on resumes before 2015. To predict occupations for individuals after 2015, a model samples 1,000 trajectories for each individual, and averages probabilities to form a single prediction for each year. For more experimental details, see Appendix I.
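As a rough illustration of the trajectory-sampling forecast described above, the sketch below assumes a fitted autoregressive occupation model exposing a hypothetical next_job_probs(history) method; it is not the authors' implementation:

import numpy as np

def forecast(model, history, n_years=3, n_samples=1000, seed=0):
    # Forecast by sampling multi-year trajectories and averaging per-year probabilities.
    rng = np.random.default_rng(seed)
    n_jobs = len(model.next_job_probs(history))
    yearly_probs = np.zeros((n_years, n_jobs))
    for _ in range(n_samples):
        traj = list(history)
        for year in range(n_years):
            probs = model.next_job_probs(traj)        # p(y_t | y_{<t}) under the fitted model
            yearly_probs[year] += probs / n_samples   # running average over sampled trajectories
            traj.append(rng.choice(n_jobs, p=probs))  # sample the next job and extend the trajectory
    return yearly_probs                                # yearly_probs[k] is the forecast k + 1 years out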
Table 4 depicts the forecasting results for the resumes dataset. Each fitted model is used to forecast occupation probabilities for three years into the future. CAREER makes the best forecasts, both overall and for each individual year. D QUALITATIVE ANALYSIS Rationalizing predictions. Figure 3 shows an example of a held-out career sequence from PSID. CAREER is much likelier than a regression and bag-of-jobs baseline to predict this individual's next job, biological technician. To understand CAREER's prediction, we show the model's rationale, or the jobs in this individual's history that are sufficient for explaining the model's prediction. (We adapt the greedy rationalization method from Vafa et al. (2021); refer to Appendix I for more details.) In this example, CAREER only needs three previous jobs to predict biological technician: animal caretaker, engineering technician, and student. The model can combine latent attributes of each job to predict the individual's next job. Representation similarity. To demonstrate the quality of the learned representations, we use CAREER's fine-tuned representations on NLSY97 to find pairs of individuals with the most similar career trajectories. Specifically, we compute CAREER's representation $h_t(y_{t-1}, x_t)$ for each individual in NLSY97 who has worked for four years. We then measure the similarity between all pairs by computing the cosine similarity between representations. In order to depict meaningful matches, we only consider pairs of individuals with no overlapping jobs in their histories (otherwise the model would find individuals with the exact same career trajectories). Figure 4 depicts the career histories with the most similar CAREER representations. Although none of these pairs have overlapping jobs, the model learns representations that can identify similar careers. E TRANSFORMER DETAILS In this section, we expand on the simplified description of transformers in Section 2.3 and describe CAREER in full detail. Recall that the model estimates representations in $L$ layers, $h_t^{(1)}(y_{t-1}, x_t), \dots, h_t^{(L)}(y_{t-1}, x_t)$, with each representation $h_t^{(\ell)} \in \mathbb{R}^D$. The final representation $h_t^{(L)}(y_{t-1}, x_t)$ is used to represent careers. We drop the explicit dependence on $y_{t-1}$ and $x_t$, and instead denote each representation as $h_t^{(\ell)}$. The first transformer layer combines the previous occupation, the most recent covariates, and the position of the occupation in the career. It first embeds each of these variables in $D$-dimensional space. Define an embedding function for occupations, $e_y : [J] \to \mathbb{R}^D$. Additionally, define a separate embedding function for each covariate, $\{e_c\}_{c=1}^{C}$, with each $e_c : [N_c] \to \mathbb{R}^D$. Finally, define $e_t : [T] \to \mathbb{R}^D$ to embed the position of the sequence, where $T$ denotes the number of possible sequence lengths. The first-layer representation $h_t^{(1)}$ sums these embeddings: $h_t^{(1)} = e_y(y_{t-1}) + \sum_c e_c(x_{tc}) + e_t(t)$ (12). The occupation- and covariate-specific embeddings, $e_y$ and $\{e_c\}$, are model parameters; the positional embeddings, $e_t$, are set in advance to follow a sinusoidal pattern (Vaswani et al., 2017). While these embeddings could also be parameterized, in practice the performance is similar, and using sinusoidal embeddings allows the model to generalize to career sequence lengths unseen in the training data.
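As a rough sketch of the first-layer embedding in Equation 12 (illustrative only, not the released implementation; vocabulary sizes and the maximum sequence length are placeholder assumptions):

import math
import torch
import torch.nn as nn

def sinusoidal_embeddings(max_len, dim):
    # Fixed positional embeddings e_t following the sinusoidal pattern of Vaswani et al. (2017).
    pos = torch.arange(max_len).unsqueeze(1).float()
    freq = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    emb = torch.zeros(max_len, dim)
    emb[:, 0::2] = torch.sin(pos * freq)
    emb[:, 1::2] = torch.cos(pos * freq)
    return emb

class FirstLayer(nn.Module):
    # h_t^(1) = e_y(y_{t-1}) + sum_c e_c(x_{tc}) + e_t(t)  (Equation 12)
    def __init__(self, n_jobs, covariate_sizes, dim=192, max_len=512):
        super().__init__()
        self.job_emb = nn.Embedding(n_jobs, dim)
        self.cov_embs = nn.ModuleList([nn.Embedding(n, dim) for n in covariate_sizes])
        self.register_buffer("pos_emb", sinusoidal_embeddings(max_len, dim))

    def forward(self, prev_jobs, covariates):
        # prev_jobs: (batch, T) previous-job indices; covariates: list of (batch, T) index tensors.
        h = self.job_emb(prev_jobs) + self.pos_emb[: prev_jobs.size(1)]
        for emb, x in zip(self.cov_embs, covariates):
            h = h + emb(x)
        return h  # (batch, T, dim)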
At each subsequent layer, the transformer combines the representations of all occupations in a history. It combines representations by performing multi-headed attention, which is similar to the process described in Section 2.3 albeit with multiple attention weights per layer. Specifically, it uses $A$ sets of attention weights, or heads, per layer. The number of heads $A$ should be less than the representation dimension $D$. (Using $A = 1$ attention head reduces to the process described in Equations 5 and 6.) The representation dimension $D$ should be divisible by $A$; denote $K = D/A$. First, $A$ different sets of attention weights are computed: $z_{a,t,t'}^{(\ell)} = (h_t^{(\ell)})^\top W_a^{(\ell)} h_{t'}^{(\ell)}$ for $t' \le t$, and $\pi_{a,t,t'} = \frac{\exp\{z_{a,t,t'}\}}{\sum_k \exp\{z_{a,t,k}\}}$ (13), where $W_a^{(\ell)} \in \mathbb{R}^{D \times D}$ is a model parameter, specific to attention head $a$ and layer $\ell$. Each attention head forms a convex combination with all previous representations; to differentiate between attention heads, each representation is transformed by a linear transformation $V_a^{(\ell)} \in \mathbb{R}^{K \times D}$ unique to an attention head, forming $b_{a,t}^{(\ell)} \in \mathbb{R}^K$: $b_{a,t}^{(\ell)} = \sum_{t'=1}^{t} \pi_{a,t,t'}^{(\ell)} (V_a^{(\ell)} h_{t'}^{(\ell)})$ (14). All attention heads are combined into a single representation by concatenating them into a single vector $g_t^{(\ell)} \in \mathbb{R}^D$: $g_t^{(\ell)} = (b_{1,t}^{(\ell)}, b_{2,t}^{(\ell)}, \dots, b_{A,t}^{(\ell)})$ (15). To complete the multi-head attention step and form the intermediate representation $\tilde{h}_t^{(\ell)}$, the concatenated representations $g_t^{(\ell)}$ undergo a linear transformation and are summed with the pre-attention representation $h_t^{(\ell)}$: $\tilde{h}_t^{(\ell)} = h_t^{(\ell)} + M^{(\ell)} g_t^{(\ell)}$ (16), with $M^{(\ell)} \in \mathbb{R}^{D \times D}$. The intermediate representations $\tilde{h}_t^{(\ell)} \in \mathbb{R}^D$ combine the representation at timestep $t$ with those preceding timestep $t$. Each layer of the transformer concludes by taking a non-linear transformation of the intermediate representations. This non-linear transformation does not depend on any previous representation; it only transforms $\tilde{h}_t^{(\ell)}$. Specifically, $\tilde{h}_t^{(\ell)}$ is passed through a neural network: $h_t^{(\ell+1)} = \tilde{h}_t^{(\ell)} + \text{FFN}^{(\ell)}(\tilde{h}_t^{(\ell)})$ (17), where $\text{FFN}^{(\ell)}$ denotes a two-layer feedforward neural network with $N$ hidden units, with $\text{FFN}^{(\ell)} : \mathbb{R}^D \to \mathbb{R}^D$. We repeat the multi-head attention and feedforward neural network updates above for $L$ layers, using parameters unique to each layer. We represent careers with the last-layer representation, $h_t(y_{t-1}, x_t) = h_t^{(L)}(y_{t-1}, x_t)$. For our experiments, we use model specifications similar to the generative pretrained transformer (GPT) architecture (Radford et al., 2018). In particular, we use $L = 12$ layers, a representation dimension of $D = 192$, $A = 3$ attention heads, and $N = 768$ hidden units and the GELU nonlinearity (Hendrycks & Gimpel, 2016) for all feedforward neural networks. In total, this results in 5.6 million parameters. This model includes a few extra modifications to improve training: we use 0.1 dropout (Srivastava et al., 2014) for the feedforward neural network weights, and 0.1 dropout for the attention weights. Finally, we use layer normalization (Ba et al., 2016) before the updates in Equation 13, after the update in Equation 16, and after the final layer's neural network update in Equation 17.
4 For computational reasons, $W_a^{(\ell)}$ is decomposed into two matrices and scaled by a constant, $W_a^{(\ell)} = \frac{Q_a^{(\ell)} (K_a^{(\ell)})^\top}{\sqrt{K}}$, with $Q_a^{(\ell)}, K_a^{(\ell)} \in \mathbb{R}^{D \times K}$.
F EXPLORATORY DATA ANALYSIS Table 5 depicts summary statistics of the resume dataset provided by Zippia that is used for pretraining CAREER. Table 6 compares this resume dataset with the longitudinal survey datasets of interest.
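Returning to the multi-head attention updates in Appendix E, Equations 13 to 17 can be sketched roughly as follows (an illustrative re-implementation under the stated dimensions, not the authors' code; dropout and layer normalization are omitted):

import torch
import torch.nn as nn

class CareerAttentionLayer(nn.Module):
    # One layer: multi-head causal attention (Equations 13-16) followed by the feedforward update (Equation 17).
    def __init__(self, dim=192, heads=3, hidden=768):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.k = heads, dim // heads
        self.q = nn.Linear(dim, dim, bias=False)    # Q_a^(l), stacked across heads
        self.key = nn.Linear(dim, dim, bias=False)  # K_a^(l), stacked across heads
        self.v = nn.Linear(dim, dim, bias=False)    # V_a^(l), stacked across heads
        self.m = nn.Linear(dim, dim, bias=False)    # M^(l)
        self.ffn = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, h):
        B, T, D = h.shape
        def split(x):  # (B, T, D) -> (B, heads, T, K)
            return x.view(B, T, self.heads, self.k).transpose(1, 2)
        q, key, v = split(self.q(h)), split(self.key(h)), split(self.v(h))
        z = q @ key.transpose(-2, -1) / self.k ** 0.5                      # attention logits z_{a,t,t'} (Eq. 13)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=h.device), diagonal=1)
        pi = z.masked_fill(causal, float("-inf")).softmax(dim=-1)          # weights pi_{a,t,t'} over t' <= t
        b = pi @ v                                                          # convex combinations b_{a,t} (Eq. 14)
        g = b.transpose(1, 2).reshape(B, T, D)                              # concatenate heads into g_t (Eq. 15)
        h_tilde = h + self.m(g)                                             # intermediate representation (Eq. 16)
        return h_tilde + self.ffn(h_tilde)                                  # h_t^(l+1) (Eq. 17)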
G ONE-STAGE VS TWO-STAGE PREDICTION Table 7 compares the predictive performance of occupation models when they are modified to make predictions in two stages, following Equations 1 to 3. Incorporating two-stage prediction improves the performance of these models compared to Figure 2a; however, CAREER still makes the best predictions on all survey datasets. H DATA PREPROCESSING In this section, we go over the data preprocessing steps we took for each dataset. Resumes. We were given access to a large dataset of resumes of American workers by Zippia, a career planning company. This dataset coded each occupation into one of 1,073 O*NET 2010 Standard Occupational Classification (SOC) categories based on the provided job titles and descriptions in resumes. We dropped all examples with missing SOC codes. Each resume in the dataset we were given contained covariates that had been imputed based off other data in the resume. We considered three covariates: year, most recent educational degree, and location. Education degrees had been encoded into one of eight categories: high school diploma, associate, bachelors, masters, doctorate, certificate, license, and diploma. Location had been encoded into one of 50 states plus Puerto Rico, Washington D.C., and unknown, for when location could not be imputed. Some covariates also had missing entries. When an occupation’s year was missing, we had to drop it from the dataset, because we could not position it in an individual’s career. Whenever another covariate was missing, we replaced it with a special “missing” token. All personally identifiable information had been removed from the dataset. We transformed each resume in the dataset into a sequence of occupations. We included an entry for each year starting from the first year an individual worked to their last year. We included a special “beginning of sequence” token to indicate when each individual’s sequence started. For each year between an individual’s first and last year, we added the occupation they worked in during that year. If an individual worked in multiple occupations in a year, we took the one where the individual spent more time in that year; if they were both the same amount of time in the particular year, we broke ties by adding the occupation that had started earlier in the career. For the experiments predicting future jobs directly on resumes, we added a “no-observed-occupation” token for years where the resume did not list any occupations (we dropped this token when pretraining). Each occupation was associated with the individual’s most recent educational degree, which we treated as a dynamic covariate. The year an occupation took place was also considered a dynamic categorical covariate. We treated location as static. In total, this preprocessing left us with a dataset of 23.7 million resumes, and 245 million individual occupations. In order to transfer representations, we had to slightly modify the resumes dataset for pretraining to encode occupations and covariates into a format compatible with the survey datasets. The survey datasets we used were encoded with the “occ1990dd” occupation code (Autor & Dorn, 2013) rather than with O*NET’s SOC codes, so we converted the SOC codes to occ1990dd codes using a crosswalk posted online by Destin Royer. Even after we manually added a few missing entries to the crosswalks, there were some SOC codes that did not have corresponding occ1990dd’s. 
We gave these tokens special codes that were not used when fine-tuning on the survey datasets (because they did not correspond to occ1990dd occupations). When an individual did not work for a given year, the survey datasets differentiated between three possible states: unemployed, out-of-labor-force, and in-school. The resumes dataset did not have these categories. Thus, we initialized parameters for these three new occupational states randomly. Additionally, we did not include the “no-observedoccupation” token when pretraining, and instead dropped missing years from the sequence. Since we did not use gender and race/ethnicity covariates when pretraining, we also initialized these covariatespecific parameters randomly for fine-tuning. Because we used a version of the survey datasets that encoded each individual’s location as a geographic region rather than as a state, we converted each state in the resumes data to be in one of four regions for pretraining: northeast, northcentral, south, or west. We also added a fifth “other” region for Puerto Rico and for when a state was missing in the original dataset. We also converted educational degrees to levels of experience: we converted associate’s degree to represent some college experience and bachelor’s degree to represent fouryear college experience; we combined masters and doctorate to represent a single “graduate degree” category; and we left the other categories as they were. NLSY79. The National Longitudinal Survey of Youth 1979 (NLSY79) is a survey following individuals born in the United States between 1957-1964. The survey included individuals who were between 14 and 22 years old when they began collecting data in 1979; they interviewed individuals annually until 1994, and biennially thereafter. Each individual in the survey is associated with an ID, allowing us to track their careers over time. We converted occupations, which were initially encoded as OCC codes, into “occ1990dd” codes using a crosswalk (Autor & Dorn, 2013). We use a version of the survey that has entries up to 2014. Unlike the resumes dataset, NLSY79 includes three states corresponding to individuals who are not currently employed: unemployed, out-of-labor-force, and in-school. We include special tokens for these states in our sequences. We drop examples with missing occupation states. We also drop sequences for which the individual is out of the labor force for their whole careers. We use the following covariates: years, educational experience, location, race/ethnicity, and gender. We drop individuals with less than 9 years of education experience. We convert years of educational experience into discrete categories: no high school degree, high school degree, some college, college, and graduate degree. We convert geographic location to one of four regions: northeast, northcentral, south, and west. We treat location as a static variable, using each individual’s first location. We use the following race/ethnicities: white, African American, Asian, Latino, Native American, and other. We treat year and education as dynamic covariates whose values can change over time, and we consider the other covariates as static. This preprocessing leaves us with a dataset consisting of 12,270 individuals and 239,545 total observations. NLSY97. The National Longitudinal Survey of Youth 1997 (NLSY97) is a survey following individuals who were between 12 and 17 when the survey began in 1997. Individuals were interviewed annually until 2011, and biennially thereafter. 
Our preprocessing of this dataset is similar to that of NLSY79. We convert occupations from OCC codes into "occ1990dd" codes. We use a version of the survey that follows individuals up to 2019. We include tokens for unemployed, out-of-labor-force, and in-school occupational states. We only consider individuals who are over 18 and drop military-related occupations. We use the same covariates as NLSY79. We use the following race/ethnicities: white, African American, Latino, and other/unknown. We convert years of educational experience into discrete categories: no high school degree, high school degree, some college degree, college degree, graduate degree, and a special token when the education status isn't known. We use the same regions as NLSY79. We drop sequences for which the individual is out of the labor force for their whole careers. This preprocessing leaves us with a dataset consisting of 8,770 individuals and 114,141 total observations. PSID. The Panel Study of Income Dynamics (PSID) is a longitudinal panel survey following a sample of American families. It was collected annually between 1968 and 1997, and biennially afterwards. The dataset tracks families over time, but it only includes occupation information for the household head and their spouse, so we only include these observations. Occupations are encoded with OCC codes, which we convert to "occ1990dd" using a crosswalk (Autor & Dorn, 2013). Like the NLSY surveys, PSID also includes three states corresponding to individuals who are not currently employed: unemployed, out-of-labor-force, and in-school. We include special tokens for these states in our sequences. We drop other examples with missing or invalid occupation codes. We also drop sequences for which the individual is out of the labor force for their whole careers. We consider five covariates: year, education, location, gender, and race. We include observations for individuals who were added to the dataset after 1995 and include observations up to 2019. We exclude observations for individuals with less than 9 years of education experience. We convert years of education to discrete states: no high school, high school diploma, some college, college, and graduate degree. We convert geographic location to one of four regions: northeast, northcentral, south, and west. We treat location as a static variable, using each individual's first location. We use the following races: white, Black, and other. We treat year and education as dynamic covariates whose values can change over time, and we consider the other covariates as static. This preprocessing leaves us with a dataset consisting of 12,338 individuals and 62,665 total observations. I EXPERIMENTAL DETAILS Baselines. We consider a first-order Markov model and a second-order Markov model (both without covariates) as baselines. These models are estimated by averaging observed transition counts. We smooth the first-order Markov model by taking a weighted average between the empirical transitions in the training set and the empirical distribution of individual jobs. We perform this smoothing to account for the fact that some feasible transitions may never occur in the training set due to the high-dimensionality of feasible transitions. We assign 0.99 weight to the empirical distributions of transitions and 0.01 to the empirical distribution of individual jobs. We smooth the second-order model by assigning 0.5 weight to the empirical second-order transitions and 0.5 weight to the smoothed first-order Markov model.
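As a small sketch of the first-order smoothing scheme just described (an illustration, not the released code; inputs are assumed to be integer job codes):

import numpy as np

def smoothed_first_order(transitions, jobs, n_jobs, w_trans=0.99, w_unigram=0.01):
    # Mix empirical first-order transition frequencies with the empirical distribution of individual jobs.
    unigram = np.bincount(jobs, minlength=n_jobs) / len(jobs)
    counts = np.zeros((n_jobs, n_jobs))
    for prev, nxt in transitions:                       # transitions: iterable of (previous job, next job) pairs
        counts[prev, nxt] += 1.0
    row_totals = counts.sum(axis=1, keepdims=True)
    trans = np.divide(counts, row_totals, out=np.tile(unigram, (n_jobs, 1)), where=row_totals > 0)
    return w_trans * trans + w_unigram * unigram        # rows give the smoothed p(y_t = j | y_{t-1})

The second-order model can be smoothed analogously by mixing the empirical second-order transition frequencies with this smoothed first-order estimate using equal weights.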
When we add covariates to the Markov linear baseline, we also include manually constructed features about history to improve its performance. In total, we include the following categorical variables: the most recent job, the prior job, the year, a dummy indicating whether there has been more than one year since the most recent observed job, the education status, a dummy indicating whether the education status has changed, and state (for the experiments on NLSY79 and PSID, we also include an individual's gender and race/ethnicity). We also add additive effects for the following continuous variables: the number of years an individual has been in the current job and the total number of years for which an individual has been in the dataset. In addition, we include an intercept term. For the bag-of-jobs model, we vary the representation dimension D between 256 and 2048, and find that the predictive performance is not sensitive to the representation dimension, so we use D = 1024 for all experiments. For the LSTM model, we use 3 layers with 436 embedding dimensions so that the model size is comparable to the transformer baseline: the LSTM has 5.8 million parameters, the same number as the transformer. We also compare to NEMO (Li et al., 2017), an LSTM-based method developed for modeling job sequences in resumes. We adapted NEMO to model survey data. In its original setting, NEMO took as input static covariates (such as individual skill) and used these to predict both an individual's next job title and their company. Survey datasets differ from this original setting in a few ways: covariates are time-varying, important covariates for predicting jobs on resumes (like skill) are missing, and an individual's company name is unavailable. Therefore, we made several modifications to NEMO. We incorporated the available covariates from survey datasets by embedding them and adding them to the job embeddings passed into the LSTM, similar to the method CAREER uses to incorporate covariates. We removed the company-prediction objective, and instead only used the model to predict an individual's job in the next timestep. We considered two sizes of NEMO: an architecture using the same number of parameters as CAREER, and the smaller architecture proposed in the original paper. We found the smaller architecture performed better on the survey datasets, so we used this for the experiments. This model contains 2 decoder layers and a hidden dimension of 200. We compare to two additional baselines developed in the data mining literature: job representation learning (Dave et al., 2018) and Job2Vec (Zhang et al., 2020). These methods require resume-specific features such as skills and textual descriptions of jobs and employers, which are not available for the economic longitudinal survey datasets we model. Thus, we adapt these baselines to be suitable for modeling economic survey data. Job representation learning (Dave et al., 2018) is based on developing two graphs, one for job transitions and one for skill transitions. Since worker skills are not available for longitudinal survey data, we adapt the model to only use job transitions by only including the terms in the objective that depend on job transitions. We make a few additional modifications, which we found to improve the performance of this model on our data. Rather than sampling 3-tuples from the directed graph of job transitions, we include all 2-tuple job transitions present in the data, identical to the other models we consider.
Additionally, rather than using the contrastive objective in Equation 4 of Dave et al. (2018), we optimize the log-likelihood directly; this is more computationally intensive but leads to better results. Finally, we include survey-specific covariates (e.g. education, demographics, etc.) by adding them to $w_x$, embedding the covariate of each most recent job to the same space as $w_x$. We make similar modifications to Job2Vec (Zhang et al., 2020). Job2Vec requires job titles and descriptions of job keywords, which are unavailable for economic longitudinal survey datasets. Instead, we modify Equation 1 in Zhang et al. (2020) to model occupation codes rather than titles or keywords and optimize this log-likelihood as our objective. We also incorporate survey-specific covariates by embedding each covariate to the same space as $e_i$ and adding it to $e_i$ before computing Equation 2 from Zhang et al. (2020), which we also found to improve performance. We follow Dave et al. (2018) and use 50 embedding dimensions for each model, and optimize with Adam using a maximum learning rate of 0.005, following the minibatch and warmup strategy described below. When we compared the transferred version of CAREER to a version of CAREER without pretrained representations, we tried various architectures for the non-pretrained version of CAREER. We found that, without pretraining, the large architecture we used for CAREER was prone to overfitting on the smaller survey datasets. So we performed an ablation of the non-pretrained CAREER with various architectures: we considered 4 and 12 layers, 64 and 192 embedding dimensions, 256 and 768 hidden units for the feedforward neural networks, and 2 or 3 attention heads (using 2 heads for D = 64 and 3 heads for D = 192 so that D was divisible by the number of heads). We tried all 8 combinations of these parameters on NLSY79, and found that the model with the best validation performance had 4 layers, D = 64 embedding dimensions, 256 hidden units, and 2 attention heads. We used this architecture for the non-pretrained version of CAREER on all survey datasets. Training. We randomly divide the resumes dataset into a training set of 23.6 million sequences, and a validation and test set of 23 thousand sequences each. We randomly divide the survey datasets into 70/10/20 train/test/validation splits. The first- and second-order Markov models without covariates are estimated from empirical transition counts. We optimize all other models with stochastic gradient descent with minibatches. In total, we use 16,000 total tokens per minibatch, varying the batch size depending on the largest sequence length in the batch. We use the Adam learning rate scheduler (Kingma & Ba, 2015). All experiments on the resumes data warm up the learning rate from $10^{-7}$ to 0.0005 over 4,000 steps, after which the inverse square root schedule is used (Vaswani et al., 2017). For the survey datasets, we also used the inverse square root scheduler, but experimented with various learning rates and warmup updates, using the one we found to work best for each model. For CAREER with pretrained representations, we used a learning rate of 0.0001 and 500 warmup updates; for CAREER without pretraining, we used a learning rate of 0.0005 and 500 warmup updates; for the bag of jobs model, we used a learning rate of 0.0005 and 5,000 warmup updates; for the regression model, we used a learning rate of 0.0005 and 4,000 warmup updates.
We use a learning rate of 0.005 for job representation learning and Job2Vec, with 5,000 warmup updates. All models besides were also trained with 0.01 weight decay. All models were trained using Fairseq (Ott et al., 2019). When training on resumes, we trained for 85,000 steps, using the checkpoint with the best validation performance. When fine-tuning on the survey datasets, we trained all models until they overfit to the validation set, again using the checkpoint with the best validation performance. We used half precision for training all models, with the exception of the following models (which were only stable with full precision): the bag of jobs model with covariates on the resumes data, and the regression models for all survey dataset experiments. The tables in Section 4 report results averaged over multiple random seeds. For the results in Figure 2a, the randomness includes parameter initialization and minibatch ordering. For CAREER, we use the same pretrained model for all settings. For the forecasting results in Table 1, the randomness is with respect to the Monte-Carlo sampling used to sample multi-year trajectories for individuals. For the wage prediction experiment in Table 2, the randomness is with respect to train/test splits. Forecasting. For the forecasting experiments, occupations that took place after a certain year are dropped from the train and validation sets. When we forecast on the resumes dataset, we use the same train/test/validation split but drop examples that took place after 2014. When we pretrain CAREER on the resumes dataset to make forecasts for PSID and NLSY97, we use a cutoff year of 2014 as well. We incorporate two-stage prediction into the baseline models because we find that this improves their predictions. Although we do not include any examples after the cutoff during training, all models require estimating year-specific terms. We use the fitted values from the last observed year to estimate these terms. For example, CAREER requires embedding each year. When the cutoff year is 2014, there do not exist embeddings for years after 2014, so we substitute the 2014 embedding. We report forecasting results on a split of the dataset containing examples before and after the cutoff year. To make predictions for an individual, we condition on all observations before the cutoff year, and sample 1,000 trajectories through the last forecasting year. We never condition on any occupations after the cutoff year, although we include updated values of dynamic covariates like education. For forecasting on the resumes dataset, we set the cutoff for 2014 and forecast occupations for 2015, 2016, and 2017. We restrict our test set to individuals in the original test set whose first observed occupation was before 2015 and who were observed to have worked until 2017. PSID and NLSY97 are biennial, so we forecast for 2015, 2017, and 2019. We only make forecasts for individuals who have observations before the cutoff year and through the last year of forecasting, resulting in a total of 16,430 observations for PSID and 18,743 for NLSY97. Wage prediction. For the wage prediction experiment, we use replication data provided by Blau & Kahn (2017b). We add individual’s job histories to this dataset by matching interview and person numbers. We drop individuals that could not be matched, about 3% of the data. When we apply CAREER to this data to learn a representation of job history, we do not use any covariates besides the year a job took place. 
We pretrain a version of CAREER containing 4 layers, 64 dimensions for the representations, 256 hidden units in the feedforward neural networks, and 2 attention heads. We pretrain on resumes for 50,000 steps. We fine-tune to predict jobs on PSID using the job histories of individuals up to the year of interest; for example, for the 2011 experiment, we only fine-tune on jobs that took place before 2011. We update parameters every 6 batches when fine-tuning. After fine-tuning CAREER’s representations to predict jobs, we plug in the learned representations into the wage regression in Equation 9. Notably, we do not alter CAREER’s representations to predict wage; we only estimate regression coefficients. We perform an unweighted linear regression. Our model without CAREER uses the same covariates as the wage regression in Blau & Kahn (2017a), including full- and part-time years of experience (and their squares), education, region, race/ethnicity, union status, current occupation, and current industry. We do not include whether an individual is a government worker because it results in instability for unweighted regression. Rather than estimate two separate models for males and females, we use a single model and include gender as an observed covariate. When we incorporate CAREER’s representations into the model, we use the same base model and add CAREER’s representations. Rationalization. The example in Figure 3 shows an example of CAREER’s rationale on PSID. To simplify the example, this is the rationale for a model trained on no covariates except year. In order to conceal individual behavior patterns, the example in Figure 3 is a slightly altered version of a real sequence. For this example, the transformer used for CAREER follows the architecture described in Radford et al. (2018). We find the rationale using the greedy rationalization method described in Vafa et al. (2021). Greedy rationalization requires fine-tuning the model for compatibility; we do this by fine-tuning with “job dropout”, where with 50% probability, we drop out a uniformly random amount of observations in the history. When making predictions, the model has to implicitly marginalize over the missing observations. (We pretrain on the resumes dataset without any word dropout). We find that training converges quickly when fine-tuning with word dropout, and the model’s performance when conditioning on the full history is similar. Greedy rationalization typically adds observations to a history one at a time in the order that will maximize the model’s likelihood of its top prediction. For occupations, the model’s top prediction is almost always identical to the previous year’s occupation, so we modify greedy rationalization to add the occupation that will maximize the likelihood of its second-largest prediction. This can be interpreted as equivalent to greedy rationalization, albeit conditioning on switching occupations. Thus, the greedy rationalization procedure stops when the model’s second-largest prediction from the target rationale is equivalent to the model’s second-largest prediction when conditioning on the full history.
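To make the modified greedy rationalization procedure concrete, the following is a small sketch (not the authors' code); predict_probs is a hypothetical function mapping a partial job history to next-job probabilities, and the stopping rule conditions on switching occupations as described above:

import numpy as np

def greedy_rationale(predict_probs, history):
    # Greedily add past jobs until the second-most-likely prediction from the subset
    # matches the second-most-likely prediction under the full history (adapted from Vafa et al., 2021).
    def second_best(probs):
        return int(np.argsort(probs)[-2])
    target = second_best(predict_probs(history))          # second-largest prediction given the full history
    rationale, remaining = [], list(range(len(history)))
    while remaining:
        # Add the past job whose inclusion most increases the probability of the target prediction.
        scores = [predict_probs([history[i] for i in sorted(rationale + [c])])[target] for c in remaining]
        best = remaining[int(np.argmax(scores))]
        rationale.append(best)
        remaining.remove(best)
        if second_best(predict_probs([history[i] for i in sorted(rationale)])) == target:
            break                                          # the rationale now explains the prediction
    return sorted(rationale)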
1. What is the main contribution of the paper in the field of job sequence prediction? 2. What are the strengths of the proposed approach, particularly in terms of leveraging large-scale resume data? 3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper develops a transformer-based model to learn representations of job sequences. The authors first leverage the transformer architecture to fit large-scale resume data, and then finetune the model with smaller, task-specific data, which achieves good performance in predicting job sequences. Strengths And Weaknesses a. Strength: 1). This paper is well-presented. 2). The authors propose a transformer-based framework for job prediction, which can effectively take use of large-scale resume data; the pretrain-finetune paradigm is reasonable. 3). The experiments show that their proposed model can perform well on job prediction task. b. Weaknesses: 1). Some related work is not mentioned and some potential baselines are missed. Actually person-job fit is quite a mature topic in data mining and information retrieval. The first two baselines (Markov regression and bag-of-jobs) seem to come from econometrics, and the third baseline (NEMO) is quite old in data mining. The authors should investigate more and discover more strong baselines to verify the effectiveness. 2). Depside the effectiveness of pretrain-finetune paradigm of transformer architecture, it has been well studied in other tasks such as NLP and CV. Therefore, the technical novelty is inadequate as the paper seems to be an application of transformer in job prediction task. Clarity, Quality, Novelty And Reproducibility Presentation of the paper is quite clear, but the techincal novelty from the CS perspective is limited.
ICLR
Title Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference Abstract Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely.1 We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller.
1 We consider task agnostic future gradients, referring to gradients of the model parameters with respect to unseen data points. These can be drawn from tasks that have already been partially learned or unseen tasks.
1 SOLVING THE CONTINUAL LEARNING PROBLEM A long-held goal of AI is to build agents capable of operating autonomously for long periods. Such agents must incrementally learn and adapt to a changing environment while maintaining memories of what they have learned before, a setting known as lifelong learning (Thrun, 1994; 1996). In this paper we explore a variant called continual learning (Ring, 1994). In continual learning we assume that the learner is exposed to a sequence of tasks, where each task is a sequence of experiences from the same distribution (see Appendix A for details). We would like to develop a solution in this setting by discovering notions of tasks without supervision while learning incrementally after every experience. This is challenging because in standard offline single task and multi-task learning (Caruana, 1997) it is implicitly assumed that the data is drawn from an i.i.d. stationary distribution. Unfortunately, neural networks tend to struggle whenever this is not the case (Goodrich, 2015). Over the years, solutions to the continual learning problem have been largely driven by prominent conceptualizations of the issues faced by neural networks. One popular view is catastrophic forgetting (interference) (McCloskey & Cohen, 1989), in which the primary concern is the lack of stability in neural networks, and the main solution is to limit the extent of weight sharing across experiences by focusing on preserving past knowledge (Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017). Another popular and more complex conceptualization is the stability-plasticity dilemma (Carpenter & Grossberg, 1987). In this view, the primary concern is the balance between network
stability (to preserve past knowledge) and plasticity (to rapidly learn the current experience). For example, these techniques focus on balancing limited weight sharing with some mechanism to ensure fast learning (Li & Hoiem, 2016; Riemer et al., 2016a; Lopez-Paz & Ranzato, 2017; Rosenbaum et al., 2018; Lee et al., 2018; Serrà et al., 2018). In this paper, we extend this view by noting that for continual learning over an unbounded number of distributions, we need to consider weight sharing and the stability-plasticity trade-off in both the forward and backward directions in time (Figure 1A). The transfer-interference trade-off proposed in this paper (section 2) presents a novel perspective on the goal of gradient alignment for the continual learning problem. This is right at the heart of the problem as these gradients are the update steps for SGD based optimizers during learning and there is a clear connection between gradient angles and managing the extent of weight sharing. The key difference in perspective with past conceptualizations of continual learning is that we are not just concerned with current transfer and interference with respect to past examples, but also with the dynamics of transfer and interference moving forward as we learn. Other approaches have certainly explored operational notions of transfer and interference in forward and backward directions (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2018), the link to weight sharing (French, 1991; Ajemian et al., 2013), and the idea of influencing gradient alignment for continual learning before (Lopez-Paz & Ranzato, 2017). However, in past work, ad hoc changes have been made to the dynamics of weight sharing based on current learning and past learning without formulating a consistent theory about the optimal weight sharing dynamics.
This new view of the problem leads to a natural meta-learning (Schmidhuber, 1987) perspective on continual learning: we would like to learn to modify our learning to affect the dynamics of transfer and interference in a general sense. To the extent that our meta-learning into the future generalizes, this should make it easier for our model to perform continual learning in non-stationary settings. We achieve this by building off past work on experience replay (Murre, 1992; Lin, 1992; Robins, 1995) that has been a mainstay for solving non-stationary problems with neural networks. We propose a novel meta-experience replay (MER) algorithm that combines experience replay with optimization based meta-learning (section 3) as a first step towards modeling this perspective. Moreover, our experiments (sections 4, 5, and 6) confirm our theory. MER shows great promise across a variety of supervised continual learning and continual reinforcement learning settings. Critically, our approach is not reliant on any provided notion of tasks and in most of the settings we explore we must detect the concept of tasks without supervision. See Appendix B for a more detailed positioning with respect to related research. 2 THE TRANSFER-INTERFERENCE TRADE-OFF FOR CONTINUAL LEARNING At an instant in time with parameters $\theta$ and loss $L$, we can define operational measures of transfer and interference between two arbitrary distinct examples $(x_i, y_i)$ and $(x_j, y_j)$ while training with SGD. Transfer occurs when $\frac{\partial L(x_i, y_i)}{\partial \theta} \cdot \frac{\partial L(x_j, y_j)}{\partial \theta} > 0$ (1), where $\cdot$ is the dot product operator. This implies that learning example $i$ will without repetition improve performance on example $j$ and vice versa (Figure 1B). Interference occurs when $\frac{\partial L(x_i, y_i)}{\partial \theta} \cdot \frac{\partial L(x_j, y_j)}{\partial \theta} < 0$ (2). Here, in contrast, learning example $i$ will lead to unlearning (i.e. forgetting) of example $j$ and vice versa (Figure 1C). There is weight sharing between $i$ and $j$ when they are learned using an overlapping set of parameters. So, potential for transfer is maximized when weight sharing is maximized while potential for interference is minimized when weight sharing is minimized (Appendix C). Past solutions for the stability-plasticity dilemma in continual learning operate in a simplified temporal context where learning is divided into two phases: all past experiences are lumped together as old memories and the data currently being learned qualifies as new learning. In this setting, the goal is to simply minimize the interference projecting backward in time, which is generally achieved by reducing the degree of weight sharing explicitly or implicitly. In Appendix D we explain how our baseline approaches (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017) fit within this paradigm. The important issue with this perspective, however, is that the system still has learning to do and what the future may bring is largely unknown. This makes it incumbent upon us to do nothing to potentially undermine the network's ability to effectively learn in an uncertain future. This consideration makes us extend the temporal horizon of the stability-plasticity problem forward, turning it, more generally, into a continual learning problem that we label as solving the Transfer-Interference Trade-off (Figure 1A).
2 Throughout the paper we discuss ideas in terms of the supervised learning problem formulation. Extensions to the reinforcement learning formulation are straightforward. We provide more details in Appendix N.
Specifically, it is important not only to reduce backward interference from our current point in time, but we must do so in a manner that does not limit our ability to learn in the future. This more general perspective acknowledges a subtlety in the problem: the issue of gradient alignment and thus weight sharing across examples arises both backward and forward in time. With this temporally symmetric perspective, the transfer-interference trade-off becomes clear. Here we propose a potential solution where we learn to learn in a way that promotes gradient alignment at each point in time. The weight sharing across examples that enables transfer to improve future performance must not disrupt performance on what has come previously. As such, our work adopts a meta-learning perspective on the continual learning problem. We would like to learn to learn each example in a way that generalizes to other examples from the overall distribution.
Algorithm 1 Meta-Experience Replay (MER)
procedure TRAIN(D, θ, α, β, γ, s, k)
    M ← {}
    for t = 1, ..., T do
        for (x, y) in D_t do
            // Draw batches from buffer:
            B_1, ..., B_s ← sample(x, y, s, k, M)
            θ^A_0 ← θ
            for i = 1, ..., s do
                θ^W_{i,0} ← θ
                for j = 1, ..., k do
                    x_c, y_c ← B_i[j]
                    θ^W_{i,j} ← SGD(x_c, y_c, θ^W_{i,j-1}, α)
                end for
                // Within batch Reptile meta-update:
                θ ← θ^W_{i,0} + β(θ^W_{i,k} - θ^W_{i,0})
                θ^A_i ← θ
            end for
            // Across batch Reptile meta-update:
            θ ← θ^A_0 + γ(θ^A_s - θ^A_0)
            // Reservoir sampling memory update:
            M ← M ∪ {(x, y)} (algorithm 3)
        end for
    end for
    return θ, M
end procedure
3 We borrow our terminology from operational measures of forward transfer and backward transfer in Lopez-Paz & Ranzato (2017), but adopt a temporally symmetric view of the phenomenon by dropping the specification of direction. Interference commonly refers to negative transfer in either direction in the literature.
4 The inclusion of $L(x_j, y_j)$ is largely an arbitrary notation choice as the relative prioritization of the two types of terms can be absorbed in $\alpha$. We use this notation as it is most consistent with our implementation.
3 A SYSTEM FOR LEARNING TO LEARN WITHOUT FORGETTING In typical offline supervised learning, we can express our optimization objective over the stationary distribution of $x, y$ pairs within the dataset $D$: $\theta = \arg\min_{\theta} \mathbb{E}_{(x,y) \sim D}[L(x, y)]$ (3), where $L$ is the loss function, which can be selected to fit the problem. If we would like to maximize transfer and minimize interference, we can imagine it would be useful to add an auxiliary loss to the objective to bias the learning process in that direction. Considering equations 1 and 2, one obviously beneficial choice would be to also directly consider the gradients with respect to the loss function evaluated at randomly chosen datapoints. If we could maximize the dot products between gradients at these different points, it would directly encourage the network to share parameters where gradient directions align and keep parameters separate where interference is caused by gradients in opposite directions. So, ideally we would like to optimize for the following objective: $\theta = \arg\min_{\theta} \mathbb{E}_{[(x_i,y_i),(x_j,y_j)] \sim D}\big[L(x_i, y_i) + L(x_j, y_j) - \alpha \frac{\partial L(x_i, y_i)}{\partial \theta} \cdot \frac{\partial L(x_j, y_j)}{\partial \theta}\big]$ (4), where $(x_i, y_i)$ and $(x_j, y_j)$ are randomly sampled unique data points. We will attempt to design a continual learning system that optimizes for this objective. However, there are multiple problems that must be addressed to implement this kind of learning process in practice. The first problem is that continual learning deals with learning over a non-stationary stream of data.
We address this by implementing an experience replay module that augments online learning so that we can approximately optimize over the stationary distribution of all examples seen so far. Another practical problem is that the gradients of this loss depend on the second derivative of the loss function, which is expensive to compute. We address this by indirectly approximating the objective with a first-order Taylor expansion, using a meta-learning algorithm with minimal computational overhead.

3.1 EXPERIENCE REPLAY

Learning objective: The continual lifelong learning setting poses a challenge for the optimization of neural networks as examples come one by one in a non-stationary stream. Instead, we would like our network to optimize over the stationary distribution of all examples seen so far. Experience replay (Lin, 1992; Murre, 1992) is an old technique that remains a central component of deep learning systems attempting to learn in non-stationary settings, and we adopt here conventions from recent work (Zhang & Sutton, 2017; Riemer et al., 2017b) leveraging this approach. The central feature of experience replay is keeping a memory M of examples seen, which is interleaved with the training of the current example with the goal of making training more stable. As a result, experience replay approximates the objective in equation 3 to the extent that M approximates D:

\theta = \arg\min_{\theta} \; \mathbb{E}_{(x,y) \sim M}\left[ L(x, y) \right].    (5)

M has a current size M_size and a maximum size M_max. In our work, we update the buffer with reservoir sampling (Appendix F). This ensures that, at every time-step, each of the N examples seen so far has probability M_size/N of being in the buffer. The content of the buffer resembles a stationary distribution over all examples seen to the extent that the items stored capture the variation of past examples. Following standard practice in offline learning, we train by randomly sampling a batch B from the distribution captured by M.

Prioritizing the current example: The variant of experience replay we explore differs from offline learning in that the current example has a special role: it is always interleaved with the examples sampled from the replay buffer. This is because, before we proceed to the next example, we want to make sure our algorithm has the ability to optimize for the current example (particularly if it is not added to the memory). Over N examples seen, this still implies that we have trained with each example as the current example with a per-step probability of 1/N. We provide algorithms further detailing how experience replay is used in this work in Appendix G (algorithms 4 and 5).

Concerns about storing examples: Obviously, it is not scalable to store every experience seen in memory. As such, in this work we focus on showing that we can achieve greater performance than baseline techniques when each approach is provided with only a small memory buffer.

3.2 COMBINING EXPERIENCE REPLAY WITH OPTIMIZATION BASED META-LEARNING

First order meta-learning: One of the most popular meta-learning algorithms to date is Model Agnostic Meta-Learning (MAML) (Finn et al., 2017). MAML is an optimization based meta-learning algorithm with nice properties, such as the ability to approximate any learning algorithm and the ability to generalize well to data outside of the previous distribution (Finn & Levine, 2017). One aspect of MAML that limits its scalability is the need to explicitly compute second derivatives.
The authors proposed a variant called first-order MAML (FOMAML), which addresses this issue by ignoring the second derivative terms, and surprisingly found that it achieved very similar performance. Recently, this phenomenon was explained by Nichol & Schulman (2018), who noted through a Taylor expansion that the two algorithms were approximately optimizing for the same loss function. Nichol & Schulman (2018) also proposed an algorithm, Reptile, that efficiently optimizes for approximately the same objective while not requiring that the data be split into training and testing sets for each task learned, as MAML does. Reptile is implemented by optimizing across s batches of data sequentially with an SGD based optimizer and learning rate α. After training on these batches, we take the initial parameters before training, θ_0, and update them to θ_0 ← θ_0 + β(θ_s − θ_0), where β is the learning rate for the meta-learning update. The process repeats for each series of s batches (algorithm 2). Shown in terms of gradients in Nichol & Schulman (2018), Reptile approximately optimizes for the following objective over a set of s batches:

\theta = \arg\min_{\theta} \; \mathbb{E}_{B_1,...,B_s \sim D}\left[ 2\sum_{i=1}^{s} \left( L(B_i) - \sum_{j=1}^{i-1} \alpha \frac{\partial L(B_i)}{\partial \theta} \cdot \frac{\partial L(B_j)}{\partial \theta} \right) \right],    (6)

where B_1, ..., B_s are batches within D. This is similar to our motivation in equation 4 to the extent that gradients produced on these batches approximate samples from the stationary distribution.

The MER learning objective: In this work, we modify the Reptile algorithm to properly integrate it with an experience replay module, facilitating continual learning while maximizing transfer and minimizing interference. As we describe in more detail in the derivation in Appendix I, achieving the Reptile objective in an online setting where examples are provided sequentially is non-trivial, and is in part only achievable because of our sampling strategies for both the buffer and the batch. Following our remarks about experience replay from the prior section, this allows us to optimize for the following objective in a continual learning setting using our proposed MER algorithm:

\theta = \arg\min_{\theta} \; \mathbb{E}_{[(x_{11},y_{11}),...,(x_{sk},y_{sk})] \sim M}\left[ 2\sum_{i=1}^{s}\sum_{j=1}^{k} \left( L(x_{ij}, y_{ij}) - \sum_{q=1}^{i-1}\sum_{r=1}^{j-1} \alpha \frac{\partial L(x_{ij}, y_{ij})}{\partial \theta} \cdot \frac{\partial L(x_{qr}, y_{qr})}{\partial \theta} \right) \right].    (7)

The MER algorithm: MER maintains an experience replay style memory M with reservoir sampling and, at each time step, draws s batches including k − 1 random samples from the buffer to be trained alongside the current example. Each of the k examples within each batch is treated as its own Reptile batch of size 1, with an inner loop Reptile meta-update after that batch is processed. We then apply the Reptile meta-update again in an outer loop across the s batches. We provide further details for MER in algorithm 1. This procedure approximates the objective of equation 7 when β = 1. The sample function produces s batches for updates. Each batch is created by first adding the current example and then interleaving k − 1 random examples from M.

Controlling the degree of regularization: In light of our ideal objective in equation 4, we can see that using an SGD batch size of 1 has an advantage over larger batches because it allows the second derivative information conveyed to the algorithm to be fine-grained at the example level. Another reason to use sample-level effective batches is that, for a given number of samples drawn from the buffer, we maximize s from equation 6.
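To make the nested structure of Algorithm 1 concrete, here is a minimal, hedged sketch of a single MER step. All names (`model`, `loss_fn`, `buffer`, and the default hyperparameter values) are illustrative placeholders rather than the paper's released implementation; the reservoir update of the buffer is omitted (see Appendix F), and a model without integer buffers (e.g., a plain MLP) is assumed.

```python
# Hedged sketch of one MER training step, mirroring Algorithm 1:
# s replay batches of k examples, Reptile interpolation within each batch
# (rate beta) and across the s batches (rate gamma).
import copy
import random
import torch


def interpolate(start, end, rate):
    # theta <- theta_start + rate * (theta_end - theta_start)
    with torch.no_grad():
        return {key: start[key] + rate * (end[key] - start[key]) for key in start}


def mer_step(model, loss_fn, buffer, x, y, s=2, k=5, alpha=0.03, beta=1.0, gamma=1.0):
    opt = torch.optim.SGD(model.parameters(), lr=alpha)
    theta_a0 = copy.deepcopy(model.state_dict())        # parameters before the step
    for _ in range(s):
        # Each of the s batches interleaves k - 1 buffer samples with the current example.
        batch = random.sample(buffer, min(k - 1, len(buffer))) + [(x, y)]
        theta_w0 = copy.deepcopy(model.state_dict())
        for xc, yc in batch:                             # treated as Reptile batches of size 1
            opt.zero_grad()
            loss_fn(model(xc), yc).backward()
            opt.step()
        # Within-batch Reptile meta-update (rate beta).
        model.load_state_dict(interpolate(theta_w0, model.state_dict(), beta))
    # Across-batch Reptile meta-update (rate gamma).
    model.load_state_dict(interpolate(theta_a0, model.state_dict(), gamma))
```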
In equation 6, the typical offline learning loss has a weighting proportional to s and the regularizer term to maximize transfer and minimize interference has a weighting proportional to αs(s−1)/2. This implies that by maximizing the effective s we can put more weight on the regularization term. We found that for a fixed number of examples drawn from M , we consistently performed better converting to a long list of individual samples than we did using proper batches as in Nichol & Schulman (2018) for few shot learning. Prioritizing current learning: To ensure strong regularization, we would like our number of batches processed in a Reptile update to be large – enough that experience replay alone would start to overfit to M . As such, we also need to make sure we provide enough priority to learning the current example, particularly because we may not store it in M . To achieve this in algorithm 1, we sample s separate batches from M that are processed sequentially and each interleaved with the current example. In Appendix H we also outline two additional variants of MER with very similar properties in that they effectively approximate for the same objective. In one we choose one big batch of size sk − s memories and s copies of the current example (algorithm 6). In the other, we choose one memory batch of size k−1 with a special current item learning rate of sα (algorithm 7). Unique properties: In the end, our approach amounts to a quite easy to implement and computationally efficient extension of SGD, which is applied to an experience replay buffer by leveraging the machinery of past work on optimization based meta-learning. However, the emergent regularization on learning is totally different than those previously considered. Past work on optimization based meta-learning has enabled fast learning on incoming data without considering past data. Meanwhile, past work on experience replay only focused on stabilizing learning by approximating stationary conditions without altering model parameters to change the dynamics of transfer and interference. 4 EVALUATION FOR SUPERVISED CONTINUAL LIFELONG LEARNING To test the efficacy of MER we compare it to relevant baselines for continual learning of many supervised tasks from Lopez-Paz & Ranzato (2017) (see Appendix D for in-depth descriptions): • Online: represents online learning performance of a model trained straightforwardly one example at a time on the incoming non-stationary training data by simply applying SGD. • Independent: an independent predictor per task with less hidden units proportional to the number of tasks. When useful, it can be initialized by cloning the last predictor. • Task Input: has the same architecture as Online, but with a dedicated input layer per task. • EWC: Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is an algorithm that modifies online learning where the loss is regularized to avoid catastrophic forgetting. • GEM: Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) is an approach for making efficient use of episodic storage by following gradients on incoming examples to the maximum extent while altering them so that they do not interfere with past memories. An independent adhoc analysis is performed to alter each incoming gradient. In contrast to MER, nothing generalizable is learned across examples about how to alter gradients. 
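Since MER and our ER variants maintain the memory M with reservoir sampling (Appendix F), and the comparisons that follow vary the buffer size, a minimal sketch of that buffer update is included here for reference. It is one possible reading of Algorithm R; the variable names are placeholders.

```python
# Hedged sketch of the reservoir-sampling buffer update: each of the N
# examples seen so far remains in a buffer of capacity max_size with
# equal probability max_size / N.
import random


def reservoir_update(buffer, max_size, n_seen, example):
    """buffer: list of stored examples; n_seen: number of examples seen before
    this one; example: the incoming (x, y) pair."""
    if len(buffer) < max_size:
        buffer.append(example)
    else:
        j = random.randint(0, n_seen)  # inclusive on both ends, so n_seen + 1 outcomes
        if j < max_size:
            buffer[j] = example        # replace a uniformly chosen slot
    return buffer
```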
We follow Lopez-Paz & Ranzato (2017) and consider final retained accuracy across all tasks after training sequentially on all tasks as our main metric for comparing approaches. Moving forward we will refer to this metric as retained accuracy (RA). In order to reveal more characteristics of the learning behavior, we also report the learning accuracy (LA) which is the average accuracy for each task directly after it is learned. Additionally, we report the backward transfer and interference (BTI) as the average change in accuracy from when a task is learned to the end of training. A highly negative BTI reflects catastrophic forgetting. Forward transfer and interference (Lopez-Paz & Ranzato, 2017) is only applicable for one task we explore, so we provide details in Appendix K. Question 1 How does MER perform on supervised continual learning benchmarks? To address this question we consider two continual learning benchmarks from Lopez-Paz & Ranzato (2017). MNIST Permutations is a variant of MNIST first proposed in Kirkpatrick et al. (2017) where each task is transformed by a fixed permutation of the MNIST pixels. As such, the input distribution of each task is unrelated. MNIST Rotations is another variant of MNIST proposed in Lopez-Paz & Ranzato (2017) where each task contains digits rotated by a fixed angle between 0 and 180 degrees. We follow the standard benchmark setting from Lopez-Paz & Ranzato (2017) using a modest memory buffer of size 5120 to learn 1000 sampled examples across each of 20 tasks. We provide detailed information about our architectures and hyperparameters in Appendix J. In Table 1 we report results on these benchmarks in comparison to our baseline approaches. Clearly GEM outperforms our other baselines, but our approach adds significant value over GEM in terms of retained accuracy on both benchmarks. MER achieves this by striking a superior balance between transfer and interference with respect to the past and future data. MER displays the best adaption to incoming tasks, while also providing very strong retention of knowledge when learning future tasks. EWC and using a task specific input layer both also lead to gains over standard online learning in terms of retained accuracy. However, they are quite far below the performance of approaches that make usage of episodic storage. While EWC does not store examples, in storing the Fisher information for each task it accrues more incremental resources than the episodic storage approaches. Question 2 How do the performance gains from MER vary as a function of the buffer size? To make progress towards the greater goals of lifelong learning, we would like our algorithm to make the most use of even a modest buffer. This is because in extremely large scale settings it is unrealistic to assume a system can store a large percentage of previous examples in memory. As such, we would like to compare MER to GEM, which is known to perform well with an extremely small memory buffer (Lopez-Paz & Ranzato, 2017). We consider a buffer size of 500, that is over 10 times smaller than the standard setting on these benchmarks. Additionally, we also consider a buffer size of 200, matching the smallest setting explored in Lopez-Paz & Ranzato (2017). This setting corresponds to an average storage of 1 example for each combination of task and class. We report our results in Table 2. The benefits of MER seem to grow as the buffer becomes smaller. In the smallest setting, MER provides more than a 10% boost in retained accuracy on both benchmarks. 
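For concreteness, the three metrics reported in these tables can be computed from a task-accuracy matrix as in the sketch below. The bookkeeping layout (a matrix whose row i holds the accuracies on all tasks measured right after finishing task i) is our assumption about implementation, not something specified in the paper.

```python
# Hedged sketch of the reported continual learning metrics:
# retained accuracy (RA), learning accuracy (LA), and backward
# transfer and interference (BTI).
import numpy as np


def continual_metrics(acc):
    acc = np.asarray(acc)  # shape (T, T); acc[i, j] = accuracy on task j after training task i
    T = acc.shape[0]
    ra = acc[-1].mean()                                          # accuracy over all tasks at the end
    la = np.mean([acc[i, i] for i in range(T)])                  # accuracy on each task right after learning it
    bti = np.mean([acc[-1, i] - acc[i, i] for i in range(T)])    # change from just-learned to end of training
    return {"RA": ra, "LA": la, "BTI": bti}
```

A highly negative BTI under this bookkeeping corresponds to catastrophic forgetting, as described above.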
Question 3 How effective is MER at dealing with increasingly non-stationary settings? Another larger goal of lifelong learning is to enable continual learning with only relatively few examples per task. This setting is particularly difficult because we have less data to characterize each class to learn from and our distribution is increasingly non-stationary over a fixed amount of training. We would like to explore how various models perform in this kind of setting. To do this we consider two new benchmarks. Many Permutations is a variant of MNIST Permutations that has 5 times more tasks (100 total) and 5 times less training examples per task (200 each). Meanwhile we also explore the Omniglot (Lake et al., 2011) benchmark treating each of the 50 alphabets to be a task (see Appendix J for experimental details). Following multi-task learning conventions, 90% of the data is used for training and 10% is used for testing (Yang & Hospedales, 2017). Overall there are 1623 characters. We learn each character and task sequentially with a task specific output layer. We report continual learning results using these new datasets in Table 3. The effect on Many Permutations of efficiently using episodic storage becomes even more pronounced when the setting becomes more non-stationary. GEM and MER both achieve nearly double the performance of EWC and online learning. We also see that increasingly non-stationary settings lead to a larger performance gain for MER over GEM. Gains are quite significant for Many Permutations and remarkable for Omniglot. Omniglot is even more non-stationary including slightly fewer examples per task and MER nearly quadruples the performance of baseline techniques. Considering the poor performance of online learning and EWC it is natural to question whether or not examples were learned in the first place. We experiment with using as many as 100 gradient descent steps per incoming example to ensure each example is learned when first seen. However, due to the extremely non-stationary setting no run of any variant we tried surpassed 5.5% retained accuracy. GEM also has major deficits for learning on Omniglot that are resolved by MER which achieves far better performance when it comes to quickly learning the current task. GEM maintains a buffer using a recent item based sampling strategy and thus can not deal with non-stationarity within the task nearly as well as reservoir sampling. Additionally, we found that the optimization based on the buffer was significantly less effective and less reliable as the quadratic program fails for many hyperparameter values that lead to non-positive definite matrices. Unfortunately, we could not get GEM to consistently converge on Omniglot for a memory size of 500 (significantly less than the number of classes), meanwhile MER handles it well. In fact, MER greatly outperforms GEM with an order of magnitude smaller buffer. We provide additional details about our experiments on Omniglot in Figure 3. We plot retained training accuracy, retained testing accuracy, and computation time for the entire training period using one CPU. We find that MER strikes the best balance of computational efficiency and performance even when using algorithm 1 for MER which performs more computation than algorithm 7. The computation involved in the GEM update does not scale well to large CNN models like those that are popular for Omniglot. 
MER is far better able to fit the training data than our baseline models while maintaining a computational efficiency closer to online update methods like EWC than GEM. 5 EVALUATION FOR CONTINUAL REINFORCEMENT LEARNING Question 4 Can MER improve a DQN with ER in continual reinforcement learning settings? We considered the evaluation of MER in a continual reinforcement learning setting where the environment is highly non-stationary. In order to produce these non-stationary environments in a controlled way suitable for our experimental purposes, we utilized arcade games provided by Tasfi (2016). Specifically, we used Catcher and Flappy Bird, two simple but interesting enough environments (see Appendix N.1 for details). For the purposes of our explanation, we will call each set of fixed game-dependent parameters a task5. The multi-task setting is then built by introducing changes in these parameters, resulting in non-stationarity across tasks. Each agent is once again evaluated based on its performance over time on all tasks. Our model uses a standard DQN model, developed for Atari (Mnih et al., 2015). See Appendix N.2 for implementation details. In Catcher, we then obtain different tasks by incrementally increasing the pellet velocity a total of 5 times during training. In Flappy Bird, the different tasks are obtained by incrementally reducing the separation between upper and lower pipes a total of 5 times during training. In Figure 4, we show the performance in Catcher when trained sequentially on 6 different tasks for 25k frames each to a maximum of 150k frames, evaluated at each point in time in all 6 tasks. Under these nonstationary conditions, a DQN using MER performs consistently better than the standard DQN with an experience replay buffer (see Appendix N.4 for further comments and ablation results). If we take as inspiration how humans perform, in the last stages of training we hope that a player that obtains good results in later tasks will also obtain good results in the first tasks, as the first tasks are subsumed in the latter ones. For example, in Catcher, the pellet moves faster in later tasks, and thus we expect to be able to do well on the first task. However, DQN forgets significantly how to get slowly moving pellets. In contrast, DQN-MER exhibits minimal or no forgetting after training on the rest of the tasks. This behavior is intuitive because we would expect transfer to happen naturally in this setting. We see similar behavior for Flappy Bird. DQN-MER becomes a Platinum player on the first task when it is learning the third task. This is a more difficult environment in which the pipe gap is noticeably smaller (see Appendix N.4). DQN-MER exhibits the kind of learning patterns expected from humans for these games, while a standard DQN struggles to generalize as the game changes and to retain knowledge over time. 6 FURTHER ANALYSIS OF THE APPROACH In this section we would like to dive deeper into how MER works. To do so we run additional detailed experiments across our three MNIST based continual learning benchmarks. Question 5 Does MER lead to a shift in the distribution of gradient dot products? 
⁵ Agents are not provided task information, forcing them to identify changes in game play on their own.

We would like to directly verify that MER achieves our motivation in equation 7 and results in significant changes in the distribution of gradient dot products between new incoming examples and past examples over the course of learning, when compared to experience replay (ER) from algorithm 5. For these experiments, we maintain a history of all examples seen that is totally separate from our notion of memory buffers, which only include a partial history of examples. Every time we receive a new example, we use the current model to extract a gradient direction, and we also randomly sample five examples from the previous history. We save the dot products of the incoming example's gradient with these five past example gradients and consider the mean of the distribution of dot products seen over the course of learning for each model. We run this experiment on the best hyperparameter setting for both ER and MER from algorithm 6, with one batch per example for fair comparison. Each model is evaluated five times over the course of learning. We report the mean and standard deviation of the mean gradient dot product across runs in Table 4. We can thus verify that a very significant and reproducible difference in the mean gradient encountered is seen for MER in comparison to ER alone. This difference alters the learning process, making incoming examples on average result in slight transfer rather than significant interference. This analysis confirms the desired effect of the objective function in equation 7. For these tasks there are enough similarities that our meta-learning generalizes very well into the future. We should also expect it to perform well during drastic domain shifts, like other meta-learning algorithms driven by SGD alone (Finn & Levine, 2017).

Question 6 What components of MER are most important?

We would like to further analyze our MER model to understand what components add the most value and when. We want to understand how powerful our proposed variants of ER are on their own and how much is added by adding meta-learning to ER. In Appendix L we provide detailed results considering ablated baselines for our experiments on the MNIST lifelong learning benchmarks.⁶ Our versions of ER consistently provide gains over GEM on their own, but the techniques perform very comparably when we also maintain GEM's buffer with reservoir sampling or use ER with a GEM style buffer. Additionally, we see that adding meta-learning to ER consistently results in performance gains. In fact, meta-learning appears to provide increasing value for smaller buffers. In Appendix M, we provide further validation that our results are reproducible across runs and seeds. We would also like to compare the variants of MER proposed in algorithms 1, 6, and 7. Conceptually, algorithms 1 and 7 represent different mechanisms of increasing the importance of the current example relative to algorithm 6. We find that all variants of MER result in significant improvements over ER. Meanwhile, the variants that increase the importance of the current example see a further improvement in performance, performing quite comparably to each other. Overall, in our MNIST experiments algorithm 7 displays the best tradeoff of computational efficiency and performance. Finally, we conducted experiments demonstrating that adaptive optimizers like Adam and RMSProp cannot account for the gap between ER and MER.
Particularly for smaller buffer sizes, these approaches overfit more on the buffer and actually hurt generalization in comparison to SGD. 7 CONCLUSION In this paper we have cast a new perspective on the problem of continual learning in terms of a fundamental trade-off between transfer and interference. Exploiting this perspective, we have in turn developed a new algorithm Meta-Experience Replay (MER) that is well suited for application to general purpose continual learning problems. We have demonstrated that MER regularizes the objective of experience replay so that gradients on incoming examples are more likely to have transfer and less likely to have interference with respect to past examples. The result is a general purpose solution to continual learning problems that outperforms strong baselines for both supervised continual learning benchmarks and continual learning in non-stationary reinforcement learning environments. Techniques for continual learning have been largely driven by different conceptualizations of the fundamental problem encountered by neural networks. We hope that the transfer-interference tradeoff can be a useful problem view for future work to exploit with MER as a first successful example. 6Code available at https://github.com/mattriemer/mer. ACKNOWLEDGMENTS We would like to thank Pouya Bashivan, Christopher Potts, Dan Jurafsky, and Joshua Greene for their input and support of this work. Additionally, we would like to thank Arslan Chaudhry and Marc’Aurelio Ranzato for their helpful comments and discussions. We also thank the three anonymous reviewers for their valuable suggestions. This research was supported by the MIT-IBM Watson AI Lab, and is based in part upon work supported by the Stanford Data Science Initiative and by the NSF under Grant No. BCS-1456077 and the NSF Award IIS-1514268. A CONTINUAL LEARNING PROBLEM FORMULATION In the classical offline supervised learning setting, a learning agent is given a fixed training data set D = {(xi, yi)}ni=1 of n samples, each containing an input feature vector xi ∈ X associated with the corresponding output (target, or label) yi ∈ Y; a common assumption is that the training samples are i.i.d. samples drawn from the same unknown joint probability distribution P (x, y). The learning task is often formulated as a function approximation problem, i.e. finding a function, or model, fθ(x) : X → Y from a given class of models (e.g., neural networks, decision trees, linear functions, etc.) where θ are the parameters estimated from data. Given a loss function L(fθ(x), y), the parameter estimation is formulated as an empirical risk minimization problem: minθ 1 |D| ∑ (xi,yi)∼D L(fθ(x), y). On the contrary, the online learning setting does not assume a fixed training dataset but rather a stream of data samples, where unlabeled feature vectors arrive one at a time, or in small minibatches, and the learner must assign labels to those inputs, receive the correct labels, and update the model accordingly, in iterative fashion. While classical online learning assumes i.i.d. samples, continual or lifelong learning does not make such an assumption, and requires a learning agent to handle non-stationary data streams. In this work, we define continual learning as online learning from a non-stationary input data stream, with a specific type of non-stationarity as defined below. 
Namely, we follow a commonly used setting to define non-stationary conditions for continual learning, dubbed locally i.i.d by Lopez-Paz & Ranzato (2017), where the agent learns over a sequence of separate stationary distributions one after another. We call the individual stationary distributions tasks, where each task tk is an online supervised learning problem associated with its own data probability distribution Pk(x, y). Namely, we are given a (potentially infinite) sequence (x1, y1, t1), ..., (xi, yi, ti), ..., (xi+j , yi+j , ti+j) While many continual learning methods assume the task descriptors tk are available to a learner, we are interested in developing approaches which do not have to rely on such information and can learn continuously without explicit announcement of the task change. Borrowing terminology from Chaudhry et al. (2018), we explore the single-headed setting in most of our experiments, which keeps learning a common function fθ across changing tasks. In contrast, multi-headed learning, which we consider for our Omniglot experiments, involves a separate final classification layer for each task. This makes more sense in case of Omniglot dataset, where the number of classes for each task varies considerably from task to task. We should also note that for Omniglot we consider a setting that is locally i.i.d. at the class level rather than the task level. B RELATION TO PAST WORK With regard to the continual learning setting specifically, other recent work has explored similar operational measures of transfer and interference. For example, the notions of Forward Transfer and Backward Transfer were explored in Lopez-Paz & Ranzato (2017). However, the approach of that work, GEM, was primarily concerned with solving the classic stability-plasticity dilemma (Carpenter & Grossberg, 1987) at a specific instance of time. Adjustments to gradients on the current data are made in an ad hoc manner solving a quadratic program separately for each example. In our work we try to learn a generalizable theory about weight sharing that can learn to influence the distribution of gradients not just in the past and present, but in the future as well. Additionally, in Chaudhry et al. (2018) similar ideas were explored with operational measures of intransigence (the inability to learn new data) and forgetting (the loss of previous performance). These measures are also intimately related to the stability-plasticity dilemma as intransigence is high when plasticity is low and forgetting is high when stability is low. The major distinction in the transfer-interference trade-off proposed in this work is that we aim to learn the optimal weight sharing scheme to optimize for the stability-plasticity dilemma with the hope that our learning about weight sharing will improve the stability and efficacy of learning on unseen data as well. With regard to the problem of weight-sharing in neural networks more generally, a host of different strategies have been proposed in the past to deal with the problems of catastrophic forgetting and/or the stability-plasticity dilemma (for review, see French (1999)). For example, one strategy for alleviating catastrophic forgetting is to make distributed representations less distributed – or semi-distributed (French, 1991) – for the case of past learning. Activation sharpening as introduced by French (1991) is a prominent example. 
A second strategy known as dual network models (McClelland et al., 1995; Ans & Rousset, 1997) is based on the neurobiological finding that both hippocampal and cortical circuits contributed differentially to memory. The cortical circuits are highly distributed with overlapping representations suitable for task generalization, while the more sparse hippocampal representations tend to be non-overlapping. The existence of dual circuits provides an extra degree of freedom for balancing the dual constraints of stability and plasticity. In a similar spirit, models have been proposed that have two classes of weights operating on two different timescales (Hinton & Plaut, 1987). A third strategy also motivated by neurobiological considerations is the use of latent synaptic dynamics (Fusi et al., 2005; Lahiri & Ganguli, 2013). Here the basic idea is that synaptic strength is determined by a multiple of variables, including latent ones not easily observed, operating at different timescales such that their net effect is to provide the system with additional degrees-of-freedom to store past experience without interfering with current learning. A fourth strategy is the use of feedback mechanisms to stabilize representations (Carpenter & Grossberg, 1987; Murre, 1992). In this class of models, a previously experienced memory will trigger top down feedback that prevents plasticity, while novel stimuli that experience no such feedback trigger plasticity. All of these approaches have their own strengths and weaknesses with respect to the stability-plasticity dilemma and, by extension, the transfer-interference trade-off we propose. Another relevant work is the POWERPLAY framework (Schmidhuber, 2004; 2013) which is a method for asymptotically optimal curriculum learning that by definition cannot forget previously learned skills. POWERPLAY also uses environment-independent replay of behavioral traces to avoid forgetting previous skills. However, POWERPLAY is orthogonal to our work as we consider a different setting where the agent cannot directly control the new tasks that will be encountered in the environment and thus must instead learn to adapt and react to non-stationarity conditions. In contrast to past work on meta-learning for few shot learning (Santoro et al., 2016; Vinyals et al., 2016; Ravi & Larochelle, 2016; Finn et al., 2017) and reinforcement learning across successive tasks (Al-Shedivat et al., 2018), we are not only trying to improve the speed of learning on new data, but also trying to do it in a way that preserves knowledge of past data and generalizes to future data. While past work has considered learning to influence gradient angles, so that there is more alignment and thus faster learning within a task, we focus on a setting where we would like to influence gradient angles from all tasks at all points in time. As our model aims to influence the dynamics of weight sharing, it bears conceptual similarity to mixtures of experts (Jacobs et al., 1991) style models for lifelong and multi-task learning (Misra et al., 2016; Riemer et al., 2016b; Aljundi et al., 2017; Fernando et al., 2017; Shazeer et al., 2017; Rosenbaum et al., 2018). MER implicitly affects the dynamics of weight sharing, but it is possible that combining it with mixtures of experts models could further amplify the ability for the model to control these dynamics. This is potentially an interesting avenue for future work. 
The options framework has also been considered as a solution to a similar continual RL setting to the one we explore (Mankowitz et al., 2018). Options formalize the notion of temporally abstraction actions in RL. Interestingly, generic architectures designed for shallow (Bacon et al., 2017) or deep (Riemer et al., 2018) hierarchies of options in essence learn very complex patterns of weight sharing over time. The option hierarchies constitute an explicit mechanism of controlling the extent of weight sharing for continual learning, allowing for orthogonalization of weights relating to different skills. In contrast, our work explores a method of implicitly optimizing weight sharing for continual learning that improves the efficacy of experience replay. MER should be simple to implement in concert with options based methods and combining the two is an interesting direction for future work. C THE CONNECTION BETWEEN WEIGHT SHARING AND THE TRANSFER-INTERFERENCE TRADE-OFF In this section we would like to generalize our interpretation of a large set of different weight sharing schemes including (Riemer et al., 2015; Bengio et al., 2015; Rosenbaum et al., 2018; Serrà et al., 2018) and how the concept of weight sharing impacts the dynamics of transfer (equation 1) and interference (equation 2). We will assume that we have a total parameter space θ that can be used by our network at any point in time. However, it is not a requirement that all parameters are actually used at all points in time. So, we can consider two specific instances in time. One where we receive data point (x1, y1) and leverage parameters θ1. Then, at the other instance in time, we receive data point (x2, y2) and leverage parameters θ2. θ1 and θ2 are both subsets of θ and critically the overlap between these subsets influences the possible extent of transfer and interference when training on either data point. First let us consider two extremes. In the first extreme imagine θ1 and θ2 are entirely nonoverlapping. As such ∂L(x1,y1)∂θ · ∂L(x2,y2) ∂θ = 0. On the positive side, this means that our solution has no potential for interference between the examples. On the other hand, there is no potential for transfer either. On the other extreme, we can imagine that θ1 = θ2. In this case, the potential for both transfer and interference is maximized as gradients with respect to every parameter have the possibility of a non-zero dot product with each other. From this discussion it is clear that both the extreme of full weight sharing and the extreme of no weight sharing have value depending on the relationship between data points. What we would really like for continual learning is to have a system that learns when to share weights and when not to on its own. To the extent that our learning about weight sharing generalizes, this should allow us to find an optimal solution to the transfer-interference trade-off. D FURTHER DESCRIPTIONS AND COMPARISONS WITH BASELINE ALGORITHMS Independent: originally reported in (Lopez-Paz & Ranzato, 2017) is the performance of an independent predictor per task which has the same architecture but with less hidden units proportional to the number of tasks. The independent predictor can be initialized randomly or clone the last trained predictor depending on what leads to better performance. 
EWC: Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is an algorithm that modifies online learning where the loss is regularized to avoid catastrophic forgetting by considering the importance of parameters in the model as measured by their fisher information. EWC follows the catastrophic forgetting view of the continual learning problem by promoting less sharing of parameters for new learning that were deemed to be important for performance on old memories. We utilize the code provided by Lopez-Paz & Ranzato (2017) in our experiments. The only difference in our setting is that we provide the model one example at a time to test true continual learning rather than providing a batch of 10 examples at a time. GEM: Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) is an algorithm meant to enhance the effectiveness of episodic storage based continual learning techniques by allowing the model to adapt to incoming examples using SGD as long as the gradients do not interfere with examples from each task stored in a memory buffer. If gradients interfere leading to a decrease in the performance of a past task, a quadratic program is used to solve for the closest gradient to the original that does not have negative gradient dot products with the aggregate memories from any previous tasks. GEM is known to achieve superior performance in comparison to other recently proposed techniques that use episodic storage like Rebuffi et al. (2017), making superior use of small memory buffer sizes. GEM follows similar motivation to our approach in that it also considers the intelligent use of gradient dot product information to improve the use case of supervised continual learning. As a result, it is a very strong and interesting baseline to compare with our approach. We modify the original code and benchmarks provided by Lopez-Paz & Ranzato (2017). Once again the only difference in our setting is that we provide the model one example at a time to test true continual learning rather than providing a batch of 10 examples at a time. We can consider the GEM algorithm as tailored to the stability-plasticity dilemma conceptualization of continual learning in that it looks to preserve performance on past tasks while allowing for maximal plasticity to the new task. To achieve this, GEM solves a quadratic program to find an approximate gradient gnew that closely matches ∂L(xnew,ynew) ∂θ while ensuring that the following constraint holds: gnew · ∂L(xold, yold) ∂θ > 0. (8) E REPTILE ALGORITHM We detail the standard Reptile algorithm from (Nichol & Schulman, 2018) in algorithm 2. The sample function randomly samples s batches of size k from dataset D. The SGD function applies min-batch stochastic gradient descent over a batch of data given a set of current parameters and learning rate. Algorithm 2 Reptile for Stationary Data procedure TRAIN(D, θ, α, β, s, k) while not done do // Draw batches from data: B1, ..., Bs ← sample(D, s, k) θ0 ← θ for i = 1, ..., s do θi ← SGD(Bi, θi−1, α) end for // Reptile meta-update: θ ← θ0 + β(θs − θ0) end while return θ end procedure F DETAILS ON RESERVOIR SAMPLING Throughout this paper we refer to updates to our memory M as M ←M ∪{(x, y)}. We would like to now provide details on how we update our memory buffer using reservoir sampling as outlined in Vitter (1985) (algorithm 3). Reservoir sampling solves the problem of keeping some limited number M of N total items seen before with equal probability MN when you don’t know what number N will be in advance. 
The randomInteger function randomly draws an integer inclusively between the provided minimum and maximum values. Algorithm 3 Reservoir Sampling with Algorithm R procedure RESERVOIR(M,N, x, y) if M > N then M [N ]← (x, y) else j = randomInteger(min = 0,max = N) if j < M then M [j]← (x, y) end if end if return M end procedure G EXPERIENCE REPLAY ALGORITHMS We detail the our variant of the experience replay in algorithm 4. This procedure closely follows recent enhancements discussed in Zhang & Sutton (2017); Riemer et al. (2017b;a) The sample function randomly samples k − 1 examples from the memory buffer M and interleaves them with the current example to form a single size k batch. The SGD function applies mini-batch stochastic gradient descent over a batch of data given a set of current parameters and learning rate. Algorithm 4 Experience Replay (ER) with Reservoir Sampling procedure TRAIN(D, θ, α, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B ← sample(x, y, k,M) // Update parameters with mini-batch SGD: θ ← SGD(B, θ, α) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure Unfortunately, it is not straightforward to implement algorithm 4 in all circumstances. In particular, it depends whether the neural network architecture is single headed (sharing an output layer and output space among all tasks) or multi-headed (where each task gets its own unique output space). In multi-headed settings, it is common to consider the tasks in separate batches and to equally weight the sampled tasks during each update. This results in training the parameters evenly for each task and is particularly important so we pay equal attention to each set of task specific parameters. We detail an approach that separates tasks into sub-batches for a balanced update in algorithm 5. Here L is the loss given a set of parameters over a batch of data and SGD applies a mini-batch gradient descent update rule over a loss given a set of parameters and learning rate. Algorithm 5 Experience Replay (ER) with Tasks procedure TRAIN(D, θ, α, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B ← sample(x, y, k,M) // Compute balanced loss across tasks loss = 0.0 for task in B do loss = loss+ L(B[task], θ) end for // Update parameters with mini-batch SGD: θ ← SGD(loss, θ, α) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure Our experiments demonstrate that both variants of experience replay are very effective for continual learning. Meanwhile, each performs significantly better than the other on some datasets and settings. H THE VARIANTS OF MER We detail two additional variants of MER (algorithm 1) in algorithms 6 and 7. The sample function takes on a slightly different meaning in each variant of the algorithm. In algorithm 1 sample is used to produce s batches consisting of k − 1 random examples from the memory buffer and the current example. In algorithm 6 sample is used to produce one batch consisting of sk − s examples from the memory buffer and s copies of the current example. In algorithm 7 sample is used to produce one batch consisting of k − 1 examples from the memory buffer. In algorithm 6, sample places the current example at the end of the batch. Meanwhile, in algorithm 7, sample places the current example in a random location within the batch. 
In contrast, the SGD function carries a common meaning across algorithms, applying stochastic gradient descent over a particular input and output given a set of current parameters and learning rate. Algorithm 6 Meta-Experience Replay (MER) - One Big Batch procedure TRAIN(D, θ, α, γ, sk) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B ← sample(x, y, s, k,M) θ0 ← θ for i = 1, ..., sk do xc, yc ← Bi[j] θi ← SGD(xc, yc, θi−1, α) end for // Reptile meta-update: θ ← θ0 + γ(θsk − θ0) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure Algorithm 7 Meta-Experience Replay (MER) - Current Example Learning Rate procedure TRAIN(D, θ, α, γ, s, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B, index← sample(k − 1,M) θ0 ← θ // SGD on individual samples from batch: for i = 1, ..., k − 1 do xc, yc ← Bi[j] if j = index // High learning rate SGD on current example: θk ← SGD(x, y, θk−1, sα) else θi ← SGD(xc, yc, θi−1, α) end for // Reptile meta-update: θ ← θ0 + γ(θk − θ0) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure I DERIVING THE EFFECTIVE OBJECTIVE OF MER We would like to derive what objective Meta-Experience Replay (algorithm 1) approximates and show that it is approximately the same objective from algorithms 6 and 7. We follow conventions from Nichol & Schulman (2018) and first demonstrate what happens to the effective gradients computed by the algorithm in the most trivial case. As in Nichol & Schulman (2018), this allows us to extrapolate an effective gradient that is a function of the number of steps taken. We can then consider the effective loss function that results in this gradient. Before we begin, let us define the following terms from Nichol & Schulman (2018): gi = ∂L(θi) ∂θi (gradient obtained during SGD) (9) θi+1 = θi − αgi (sequence of parameter vectors) (10) ḡi = ∂L(θi) ∂θ0 (gradient at initial point) (11) gji = ∂L(θi) ∂θj (gradient evaluated at point i with respect to parameters j) (12) H̄i = ∂2L(θi) ∂θ20 (Hessian at initial point) (13) Hji = ∂2L(θi) ∂θ2j (Hessian evaluated at point i with respect to parameters j) (14) In Nichol & Schulman (2018) they consider the effective gradient across one loop of reptile with size k = 2. As we have both an outer loop of Reptile applied across batches and an inner loop applied within the batch to consider, we start with a setting where the number of batches s = 2 and the number of examples per batch k = 2. 
Let’s recall from the original paper that the gradients of Reptile with k = 2 was: gReptile,k=2,s=1 = g0 + g1 = ḡ0 + ḡ1 − αH̄1ḡ0 +O(α2) (15) So, we can also consider the gradients of Reptile if we had 4 examples in one big batch (algorithm 6) as opposed to 2 batches of 2 examples: gReptile,k=4,s=1 = g0 + g1 + g2 + g3 = ḡ0 + ḡ1 + ḡ2 + ḡ3 − αH̄1ḡ0 − αH̄2ḡ0 − αH̄2ḡ1 − αH̄3ḡ0 − αH̄3ḡ1 − αH̄3ḡ2 +O(α2) (16) Now we can consider the case for MER where we define the parameter values as follows extending algorithm 1 where A stands for across batches and W stands for within batches: θ0 = θ A 0 = θ W 00 (17) θW01 = θ W 00 − αg00 (18) θW02 = θ W 01 − αg01 (19) θA1 = θ A 0 + β (θW02 − θA0 ) α = θ0 + β (θW02 − θ0) α = θW10 (20) θW11 = θ W 10 − αg10 (21) θW12 = θ W 11 − αg11 (22) θA2 = θ A 1 + β (θW12 − θA1 ) α (23) θ = θA0 + γβ (θA2 − θA0 ) β = θA0 + γ(θ A 2 − θA0 ) (24) gMER the gradient of Meta-Experience Replay can thus be defined analogously to the gradient of Reptile as: gMER = θA0 − θA2 β = θ0 − θA2 β (25) By simply applying Reptile from equation 15 we can derive the value of the parameters θA1 after updating with Reptile within the first batch in terms of the original parameters θ0: θA1 = θ0 − βḡ00 − βḡ01 + βαH̄01ḡ00 +O(βα2) (26) By subbing equation 26 into equation 23 we can see that: θA2 = θ0 − βḡ00 − βḡ01 + βαH̄01ḡ00 − βg10 − βg11 +O(βα2) (27) We can express g10 in terms of the initial point, by considering a Taylor expansion following the Reptile paper: g10 = ḡ10 + αH̄10(θ W 10 − θ0) +O(α2) (28) Then substituting in for θW10 we express g10 in terms of θ0: g10 = ḡ10 − αβH̄10ḡ00 − αβH̄10ḡ01 +O(α2) (29) We can then rewrite g11 by taking a Taylor expansions with respect to θW10 : g11 = g 10 11 − αH1011g10 +O(α2) (30) Taking another Taylor expansion we find that we can transform our expression for the Hessian: H1011 = H̄11 +O(α) (31) We can analogously also transform our expression our expression for g1011 : g1011 = ḡ11 + αH̄11(θ W 10 − θ0) +O(α2) (32) Substituting for θW10 in terms of θ0 g1011 = ḡ11 − αβH̄11ḡ00 − αβH̄11ḡ01 +O(α2) (33) We then substitute equation 31, equation 33, and equation 29 into equation 34: g11 = ḡ11 − αβH̄11ḡ00 − αβH̄11ḡ01 − αH̄11ḡ10 +O(α2) (34) Finally, we have all of the terms we need to express θA2 and we can then derive an expression for the MER gradient gMER: gMER = ḡ00 + ḡ01 + ḡ10 + ḡ11 −αH̄01ḡ00 − αH̄11ḡ10 − αβH̄10ḡ00 − αβH̄10ḡ01 − αβH̄11ḡ00 − αβH̄11ḡ01 +O(α2) (35) This equation is quite interesting and very similar to equation 16. As we would like to approximate the same objective, we can remove one hyperparameter from our model by setting β = 1. This yields: gMER = ḡ00 + ḡ01 + ḡ10 + ḡ11 −αH̄01ḡ00 − αH̄11ḡ10 − αH̄10ḡ00 − αH̄10ḡ01 − αH̄11ḡ00 − αH̄11ḡ01 +O(α2) (36) Indeed, with β set to equal 1, we have shown that the gradient of MER is the same as one loop of Reptile with a number of steps equal to the total number of examples in all batches of MER (algorithm 6) if the current example is mixed in with the same proportion. If we include in the current example for s of sk examples in our meta-replay batch, it gets the same overall priority in both cases which is s times larger than that of a random example drawn from the buffer. As such, we can also optimize an equivalent gradient using algorithm 7 because it uses a factor s to increase the priority of the gradient given to the current example. While β = 1 is an interesting special case of MER in algorithm 1, in general we find it can be useful to set β to be a value smaller than 1. 
In fact, in our experiments we consider the case when β is smaller than 1 and γ = 1. The success of this approach makes sense because the higher order terms in the Taylor expansion that reflect the mismatch between parameters across replay batches disturb the learning process. By setting β to a value below 1 we can reduce our comparative weighting on promoting inter batch gradient similarities rather than intra batch gradient similarities. It was noted in (Nichol & Schulman, 2018) that the following equality holds if the examples and order are random: E[H̄1ḡ0] = E[H̄0ḡ1] = 1 2 E[ ∂ ∂θ0 (ḡ0 · ḡ1)] (37) In our work to make sure this equality holds in an online setting, we must take multiple precautions as noted in the main text. The issue is that examples are received in a non-stationary sequence so when applied in a continual learning setting the order is not totally random or arbitrary as in the original Reptile work. We address this by maintaining our buffer using reservoir sampling, which ensures that any example seen before has a probability 1N of being a particular element in the buffer. We also randomly select over these elements to form a batch. As this makes the order largely arbitrary to the extent that our buffer includes all examples seen, we are approximating the random offline setting from the original Reptile paper. As such we can view the gradients in equation 16 and equation 36 as leading to approximately the following objective function: θ = argmin θ E(x11,y11),...,(xsk,ysk)∼M [2 s∑ i=1 k∑ j=1 [L(xij , yij)− i−1∑ q=1 j−1∑ r=1 α ∂L(xij , yij) ∂θ ·∂L(xqr, yqr) ∂θ ]]. (38) This is precisely equation 7 in the main text. J SUPERVISED CONTINUAL LIFELONG LEARNING For the supervised continual learning benchmarks leveraging MNIST Rotations and MNIST Permutations, following conventions, we use a two layer MLP architecture for all models with 100 hidden units in each layer. We also model our hyperparameter search after Lopez-Paz & Ranzato (2017) while providing statistics for each model across 5 random seeds. For Omniglot, following Vinyals et al. (2016) we scale the images to 28x28 and use an architecture that consists of a stack of 4 modules before a fully connected softmax layer. Each module includes a 3x3 convolution with 64 filters, a ReLU non-linearity and 2x2 max-pooling. J.1 HYPERPARAMETER SEARCH Here we report the hyper-parameter grids that we searched over in our experiments. We note in parenthesis the best values for MNIST Rotations (ROT) at each buffer size (ROT-5120, ROT-500, ROT-200), MNIST Permutations (PERM) at each buffer size (PERM-5120, PERM-500, PERM200), Many Permutations (MANY) at each buffer size (MANY-5120, MANY-500), and Omniglot (OMNI) at each buffer size (OMNI-5120, OMNI-500). 
• Online Learning – learning rate: [0.0001, 0.0003 (ROT), 0.001, 0.003 (PERM, MANY), 0.01, 0.03, 0.1 (OMNI)] • Independent Model Per Task – learning rate: [0.0001, 0.0003, 0.001, 0.003, 0.01 (ROT, PERM, MANY), 0.03, 0.1] • Task Specific Input Layer – learning rate: [0.0001, 0.0003, 0.001, 0.003, 0.01 (ROT, PERM), 0.03, 0.1] • EWC – learning rate: [0.001 (ROT, OMNI), 0.003 (MANY), 0.01 (PERM), 0.03, 0.1, 0.3, 1.0] – regularization: [1 (MANY), 3, 10 (PERM, OMNI), 30, 100 (ROT), 300, 1000, 3000, 10000, 30000] • GEM – learning rate: [0.001, 0.003 (MANY-500), 0.01 (ROT, PERM, OMNI, MANY-5120), 0.03, 0.1, 0.3, 1.0] – memory strength (γ): [0.0 (ROT-500, ROT-200, PERM-200, MANY-5120), 0.1 (MANY-500), 0.5 (OMNI), 1.0 (ROT-5120, PERM-5120, PERM-500)] • Experience Replay (Algorithm 4) – learning rate: [0.00003, 0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1 (ROT, PERM, MANY)] – batch size (k-1): [5 (ROT-500), 10 (ROT-200, PERM-500, PERM-200), 25 (ROT- 5120, PERM-5120, MANY), 50, 100, 250] • Experience Replay (Algorithm 5) – learning rate: [0.00003, 0.0001, 0.0003, 0.001, 0.003 (MANY-5120), 0.01 (ROT-500, ROT-200, PERM, MANY-500), 0.03 (ROT-5120), 0.1] – batch size (k-1): [5 (MANY-500), 10 (PERM-200, MANY-5120), 25 (PERM-5120, PERM-500), 50 (ROT-200), 100 (ROT-5120, ROT-500), 250] • Meta-Experience Replay (Algorithm 1) – learning rate (α): [0.01 (OMNI-5120), 0.03 (ROT-5120, PERM, MANY-500), 0.1 (ROT-500, ROT-200, OMNI-500)] – across batch meta-learning rate (γ): 1.0 – within batch meta-learning rate (β): [0.01 (ROT-500, ROT-200, MANY-5120), 0.03 (ROT-5120, PERM, MANY-500), 0.1, 0.3, 1.0 (OMNI)] – batch size (k-1): [5 (MANY, OMNI-500), 10 (ROT-500, ROT-200, PERM-200), 25 (PERM-500, OMNI-5120), 50, 100 (ROT-5120, PERM-5120)] – number of batches per example: [1, 2 (OMNI-500), 5 (ROT-200, OMNI-5120), 10 (ROT-5120, ROT-500, PERM, MANY)] • Meta-Experience Replay (Algorithm 6) – learning rate (α): [0.01, 0.03 (ROT-5120, PERM-5120, PERM-500, MANY-5120), 0.1 (ROT-500, ROT-200, PERM-200, MANY-500)] – meta-learning rate (γ): [0.03 (ROT-500, ROT-200, PERM-200, MANY-500), 0.1 (ROT-5120, PERM-5120, MANY-5120), 0.3 (PERM-500), 0.6, 1.0] – batch size (k-1): [5 (PERM-200, MANY-500), 10 (ROT-500, PERM-500) 25 (ROT- 200, MANY-5120), 50 (PERM-5120), 100 (ROT-5120), 250] – number of batches per example: 1 • Meta-Experience Replay (Algorithm 7) – learning rate (α): [0.01 (PERM-5120, PERM-500), 0.03 (ROT, PERM-200, MANY), 0.1] – within batch meta-learning rate (γ): [0.03 (ROT, MANY), 0.1 (PERM), 0.3, 1.0] – batch size (k-1): [5 (PERM-200, MANY-500), 10, 25 (PERM-500), 50 (ROT-200, ROT-500, MANY-5120), 100 (ROT-5120, PERM-5120)] – current example learning rate multiplier (s): [1, 2 (PERM-200), 5 (ROT), 10 (PERM- 5120, PERM-500, MANY)] K FORWARD TRANSFER AND INTERFERENCE Forward transfer was a metric defined in Lopez-Paz & Ranzato (2017) based on the average increased performance on a task relative to performance at random initialization before training on that task. Unfortunately, this metric does not make much sense for tasks like MNIST Permutations where inputs are totally uncorrelated across tasks or Omniglot where outputs are totally uncorrelated across tasks. As such, we only provide performance for this metric on MNIST Rotations in Table 5. L ABLATION EXPERIMENTS We plot our detailed ablation results in Table 6. In order to consider a version of GEM that uses reservoir sampling, we maintain our buffer the same way that we do for experience replay and MER. 
We consider everything in the buffer to be old data and solve the GEM quadratic program so that the loss is not increased on this data. We found that considering the task level gradient directions did not lead to improvements. M REPRODUCIBILITY OF RESULTS While the results so far have provided substantial evidence of the benefits of MER for continual learning, one potential concern with our experimental protocol in Appendix J.1 is that the larger hyperparameter search space used for MER may artificially inflate improvements given typical run to run variation. To validate that this is not the case, we have run extensive additional experiments in this section to see how the model performs across different random seeds and machines. The codebase presents some inherent stochasticity across runs. As such, in Tables 7, 8, and 9 we report two levels of generalization for a set of hyperparameters beyond the configuration of an individual run. In the Same Seeds column, we report the results for the original 5 model seeds (0-4) deployed on different machines. In the Different Seeds column, we report the results for a different 25 model seeds (5-29) also deployed on different machines. In all cases, we see that there are quantitative differences generalizing across seeds and machines. However, new settings do not always result in lower performance. Additionally, the differences are not qualitative in nature. In fact, in every setting we come to approximately the same qualitative conclusions about how each model performs. N CONTINUAL REINFORCEMENT LEARNING We detail the application of MER to deep Q-learning in algorithm 8, using notation from Mnih et al. (2015). Algorithm 8 Deep Q-learning with Meta-Experience Replay (MER) procedure DQN-MER(env, frameLimit, θ, α, β, γ, steps, k, EQ) // Initialize action-value function Q with parameters θ: Q← Q(θ) // Initialize action-value function Q̂ with the same parameters θ̂ = θ: Q̂← Q̂(θ̂) = Q̂(θ) // Initialize experience replay buffer: M ← {} M.age← 0 while M.age ≤ frameLimit do // Begin new episode: env.reset() // Initialize the s state with the initial observation: while episode not done do // Select with probability p an action a from set of possible actions: a = { random selection of action â p ≤ arg maxa′ Q(st, a ′; θ) p > // Perform the action a in the environment: s′, rt ← env.step(s, a) // Store current transition with reward r: M ←M ∪ {(s, a, r, s′)} (algorithm 3) B1
1. What is the main contribution of the paper regarding transfer and interference in lifelong learning? 2. What are the concerns regarding the technical contributions and analysis in the paper? 3. How does the reviewer assess the clarity and distinction of the paper's concepts and terminologies? 4. Can the proposed method be applied to other datasets and tasks beyond MNIST? 5. What are the limitations of the current implementation of the algorithm, particularly in terms of computational efficiency? 6. Are there any potential biases or confounding factors in the experiments that need to be addressed? 7. How does the reviewer evaluate the significance and impact of the paper's findings in the context of lifelong learning and continual learning?
Review
Review The transfer/interference perspective of lifelong learning is well motivated, and combining the meta-learning literature with the continual learning literature (applying Reptile twice), even if it seems obvious, wasn't explored before. In addition, this paper shows that a lot of gain can be obtained if one uses a more randomized and representative memory (reservoir sampling). However, I'm not convinced that the technical contributions and the analysis provided to support the claims in the paper are good enough for me to accept it in its current form. Please find below my concerns; I'm more than happy to change my mind if the answers are convincing. Main concerns: 1) The trade-off between transfer and interference, which is one of the main contributions of this paper, has recently been pointed out by [1,2]. GEM [1] talks about it in terms of forward transfer and RWalk [2] in terms of "intransigence". Please clarify how "transfer" is different from these. A clear distinction will strengthen the contribution; otherwise, it seems like the paper talks about the same concepts with different terminologies, which will increase confusion in the literature. 2) Provide intuitions about equations (1) and (2). Also, why is this assumption correct in the case of "incremental learning", where the loss surface itself is changing for new tasks? 3) The paper mentions that the performance on the current task isn't an issue, which to me isn't obvious: if the evaluation setting is "single-head" [2], then performance on the current task becomes an issue as we move forward over tasks because of the rigidity of the network in learning new tasks. Please clarify. 4) In eq (4), the second sample (j) is also from the same dataset for which the loss is being minimized. Intuitively it makes sense not to optimize the loss L(xj, yj) in order to enforce transfer. Please clarify. 5) Since the claim is to improve the "transfer-interference" trade-off, how can we verify this just using accuracy? Is there any metric to quantify these? What about the forgetting and forward transfer measures discussed in [1,2]? Without these, it's hard to say what exactly the algorithm is buying. 6) Why isn't there any result showing MER without reservoir sampling? Also, please comment on the computational efficiency of the method (which is crucial for online learning), as it seems to be very slow. 7) The supervised learning experiments are only shown on MNIST. Maybe at least show results with a conv-net/ResNet (CIFAR, etc.). 8) It is not clear where the gains are coming from. Do the ablation where, instead of using two loops of Reptile, you use one loop. Minor: ======= 1) In the abstract, please clarify what you mean by "future gradient". Is it the gradient over an "unseen" task, or an "unseen" data point of the same task? It's clear after reading the manuscript, but it takes a while to reach that stage. 2) Please clarify the difference between stationary and non-stationary distributions, or at least cite a paper with the proper definition. 3) Please define the problem precisely. A mathematical problem definition is missing, which makes it hard to follow the paper. Clarify the evaluation setting (multi/single head, etc. [2]). 4) No citation is provided for "reservoir sampling", which is an important ingredient of this entire algorithm. 5) Please mention the specific appendix sections when referring to the appendix. 6) Provide citations for "meta-learning" in section 1. [1] GEM: Gradient episodic memory for continual learning, NIPS17. 
[2] RWalk: Riemannian walk for incremental learning: Understanding forgetting and intransigence, ECCV2018.
ICLR
Title Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference
Abstract Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely.1 We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller.
(1: We consider task agnostic future gradients, referring to gradients of the model parameters with respect to unseen data points. These can be drawn from tasks that have already been partially learned or unseen tasks.)
1 SOLVING THE CONTINUAL LEARNING PROBLEM A long-held goal of AI is to build agents capable of operating autonomously for long periods. Such agents must incrementally learn and adapt to a changing environment while maintaining memories of what they have learned before, a setting known as lifelong learning (Thrun, 1994; 1996). 
In this paper we explore a variant called continual learning (Ring, 1994). In continual learning we assume that the learner is exposed to a sequence of tasks, where each task is a sequence of experiences from the same distribution (see Appendix A for details). We would like to develop a solution in this setting by discovering notions of tasks without supervision while learning incrementally after every experience. This is challenging because in standard offline single task and multi-task learning (Caruana, 1997) it is implicitly assumed that the data is drawn from an i.i.d. stationary distribution. Unfortunately, neural networks tend to struggle whenever this is not the case (Goodrich, 2015). Over the years, solutions to the continual learning problem have been largely driven by prominent conceptualizations of the issues faced by neural networks. One popular view is catastrophic forgetting (interference) (McCloskey & Cohen, 1989), in which the primary concern is the lack of stability in neural networks, and the main solution is to limit the extent of weight sharing across experiences by focusing on preserving past knowledge (Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017). Another popular and more complex conceptualization is the stability-plasticity dilemma (Carpenter & Grossberg, 1987). In this view, the primary concern is the balance between network stability (to preserve past knowledge) and plasticity (to rapidly learn the current experience). For example, these techniques focus on balancing limited weight sharing with some mechanism to ensure fast learning (Li & Hoiem, 2016; Riemer et al., 2016a; Lopez-Paz & Ranzato, 2017; Rosenbaum et al., 2018; Lee et al., 2018; Serrà et al., 2018). In this paper, we extend this view by noting that for continual learning over an unbounded number of distributions, we need to consider weight sharing and the stability-plasticity trade-off in both the forward and backward directions in time (Figure 1A). The transfer-interference trade-off proposed in this paper (section 2) presents a novel perspective on the goal of gradient alignment for the continual learning problem. This is right at the heart of the problem as these gradients are the update steps for SGD based optimizers during learning and there is a clear connection between gradient angles and managing the extent of weight sharing. The key difference in perspective with past conceptualizations of continual learning is that we are not just concerned with current transfer and interference with respect to past examples, but also with the dynamics of transfer and interference moving forward as we learn. Other approaches have certainly explored operational notions of transfer and interference in forward and backward directions (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2018), the link to weight sharing (French, 1991; Ajemian et al., 2013), and the idea of influencing gradient alignment for continual learning before (Lopez-Paz & Ranzato, 2017). However, in past work, ad hoc changes have been made to the dynamics of weight sharing based on current learning and past learning without formulating a consistent theory about the optimal weight sharing dynamics. 
This new view of the problem leads to a natural meta-learning (Schmidhuber, 1987) perspective on continual learning: we would like to learn to modify our learning to affect the dynamics of transfer and interference in a general sense. To the extent that our meta-learning into the future generalizes, this should make it easier for our model to perform continual learning in non-stationary settings. We achieve this by building off past work on experience replay (Murre, 1992; Lin, 1992; Robins, 1995) that has been a mainstay for solving non-stationary problems with neural networks. We propose a novel meta-experience replay (MER) algorithm that combines experience replay with optimization based meta-learning (section 3) as a first step towards modeling this perspective. Moreover, our experiments (sections 4, 5, and 6), confirm our theory. MER shows great promise across a variety of supervised continual learning and continual reinforcement learning settings. Critically, our approach is not reliant on any provided notion of tasks and in most of the settings we explore we must detect the concept of tasks without supervision. See Appendix B for a more detailed positioning with respect to related research. 2 THE TRANSFER-INTERFERENCE TRADE-OFF FOR CONTINUAL LEARNING At an instant in time with parameters θ and loss L, we can define2 operational measures of transfer and interference between two arbitrary distinct examples (xi, yi) and (xj , yj) while training with 2Throughout the paper we discuss ideas in terms of the supervised learning problem formulation. Extensions to the reinforcement learning formulation are straightforward. We provide more details in Appendix N. SGD. Transfer occurs when: ∂L(xi, yi) ∂θ · ∂L(xj , yj) ∂θ > 0, (1) where · is the dot product operator. This implies that learning example i will without repetition improve performance on example j and vice versa (Figure 1B). Interference occurs when: ∂L(xi, yi) ∂θ · ∂L(xj , yj) ∂θ < 0. (2) Here, in contrast, learning example i will lead to unlearning (i.e. forgetting) of example j and vice versa (Figure 1C). 3 There is weight sharing between i and j when they are learned using an overlapping set of parameters. So, potential for transfer is maximized when weight sharing is maximized while potential for interference is minimized when weight sharing is minimized (Appendix C). Past solutions for the stability-plasticity dilemma in continual learning operate in a simplified temporal context where learning is divided into two phases: all past experiences are lumped together as old memories and the data currently being learned qualifies as new learning. In this setting, the goal is to simply minimize the interference projecting backward in time, which is generally achieved by reducing the degree of weight sharing explicitly or implicitly. In Appendix D we explain how our baseline approaches (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017) fit within this paradigm. The important issue with this perspective, however, is that the system still has learning to do and what the future may bring is largely unknown. This makes it incumbent upon us to do nothing to potentially undermine the networks ability to effectively learn in an uncertain future. This consideration makes us extend the temporal horizon of the stability-plasticity problem forward, turning it, more generally, into a continual learning problem that we label as solving the Transfer-Interference Trade-off (Figure 1A). 
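To make equations 1 and 2 concrete, the following is a minimal sketch (our own illustrative code, not the authors' released implementation) that computes the gradient dot product between two labelled examples for a PyTorch classifier; a positive value indicates transfer, a negative value interference. The toy model and random data at the end are assumptions purely for illustration.

import torch
import torch.nn.functional as F

def example_grad(model, x, y):
    # Flattened gradient of the loss on a single labelled example.
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def gradient_alignment(model, xi, yi, xj, yj):
    # Dot product from equations 1 and 2: > 0 means transfer, < 0 interference.
    return float(example_grad(model, xi, yi) @ example_grad(model, xj, yj))

# Toy usage with a linear model and random data (illustration only).
model = torch.nn.Linear(784, 10)
xi, yi = torch.randn(784), torch.tensor(3)
xj, yj = torch.randn(784), torch.tensor(7)
print(gradient_alignment(model, xi, yi, xj, yj))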
Specifically, it is important not only to reduce backward interference from our current point in time, but we must do so in a manner that does not limit our ability to learn in the future. This more general perspective acknowledges a subtlety in the problem: the issue of gradient alignment and thus weight sharing across examples arises both backward and forward in time. With this temporally symmetric perspective, the transfer-interference trade-off becomes clear. Here we propose a potential solution where we learn to learn in a way that promotes gradient alignment at each point in time. The weight sharing across examples that enables transfer to improve future performance must not disrupt performance on what has come previously. As such, our work adopts a meta-learning perspective on the continual learning problem. We would like to learn to learn each example in a way that generalizes to other examples from the overall distribution. 3 A SYSTEM FOR LEARNING TO LEARN WITHOUT FORGETTING In typical offline supervised learning, we can express our optimization objective over the stationary distribution of x, y pairs within the dataset D: θ = argmin θ E(x,y)∼D[L(x, y)], (3) where L is the loss function, which can be selected to fit the problem. If we would like to maximize transfer and minimize interference, we can imagine it would be useful to add an auxiliary loss to the objective to bias the learning process in that direction. Considering equations 1 and 2, one obviously beneficial choice would be to also directly consider the gradients with respect to the loss function evaluated at randomly chosen datapoints. If we could maximize the dot products between gradients at these different points, it would directly encourage the network to share parameters where gradient directions align and keep parameters separate where interference is caused by gradients in opposite directions. So, ideally we would like to optimize for the following objective 4: θ = argmin θ E[(xi,yi),(xj ,yj)]∼D[L(xi, yi) + L(xj , yj)− α ∂L(xi, yi) ∂θ · ∂L(xj , yj) ∂θ ], (4) 3We borrow our terminology from operational measures of forward transfer and backward transfer in LopezPaz & Ranzato (2017), but adopt a temporally symmetric view of the phenomenon by dropping the specification of direction. Interference commonly refers to negative transfer in either direction in the literature. 4The inclusion of L(xj , yj) is largely an arbitrary notation choice as the relative prioritization of the two types of terms can be absorbed in α. We use this notation as it is most consistant with our implementation. Algorithm 1 Meta-Experience Replay (MER) procedure TRAIN(D, θ, α, β, γ, s, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batches from buffer: B1, ..., Bs ← sample(x, y, s, k,M) θA0 ← θ for i = 1, ..., s do θWi,0 ← θ for j = 1, ..., k do xc, yc ← Bi[j] θWi,j ← SGD(xc, yc, θWi,j−1, α) end for // Within batch Reptile meta-update: θ ← θWi,0 + β(θWi,k − θWi,0) θAi ← θ end for // Across batch Reptile meta-update: θ ← θA0 + γ(θAs − θA0 ) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure where (xi, yi) and (xj , yj) are randomly sampled unique data points. We will attempt to design a continual learning system that optimizes for this objective. However, there are multiple problems that must be addressed to implement this kind of learning process in practice. The first problem is that continual learning deals with learning over a non-stationary stream of data. 
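Before turning to how those practical problems are addressed, here is a compact sketch of algorithm 1 for reference. It assumes a PyTorch model, a replay buffer stored as a Python list of (x, y) pairs, and single-example SGD steps; the function names and batching details are our own and this is an illustrative sketch of the procedure, not the authors' implementation. The reservoir update of the buffer (algorithm 3) is left as a comment.

import random
import torch
import torch.nn.functional as F
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def sgd_example_step(model, x, y, lr):
    # One SGD step on a single (x, y) example.
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= lr * g

def mer_step(model, buffer, current, alpha, beta, gamma, s, k):
    # One MER update (algorithm 1): s batches, each interleaving the current
    # example with k-1 examples drawn from the replay buffer.
    theta_A0 = parameters_to_vector(model.parameters()).detach().clone()
    for _ in range(s):
        theta_W0 = parameters_to_vector(model.parameters()).detach().clone()
        batch = random.sample(buffer, min(k - 1, len(buffer))) + [current]
        random.shuffle(batch)
        for x, y in batch:
            sgd_example_step(model, x, y, alpha)
        # Within-batch Reptile meta-update.
        theta_Wk = parameters_to_vector(model.parameters()).detach()
        vector_to_parameters(theta_W0 + beta * (theta_Wk - theta_W0), model.parameters())
    # Across-batch Reptile meta-update.
    theta_As = parameters_to_vector(model.parameters()).detach()
    vector_to_parameters(theta_A0 + gamma * (theta_As - theta_A0), model.parameters())
    # After the update, the buffer would be refreshed with reservoir sampling (algorithm 3).

The two practical problems raised above are addressed in the following subsections.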
We address this by implementing an experience replay module that augments online learning so that we can approximately optimize over the stationary distribution of all examples seen so far. Another practical problem is that the gradients of this loss depend on the second derivative of the loss function, which is expensive to compute. We address this by indirectly approximating the objective to a first order Taylor expansion using a meta-learning algorithm with minimal computational overhead. 3.1 EXPERIENCE REPLAY Learning objective: The continual lifelong learning setting poses a challenge for the optimization of neural networks as examples come one by one in a non-stationary stream. Instead, we would like our network to optimize over the stationary distribution of all examples seen so far. Experience replay (Lin, 1992; Murre, 1992) is an old technique that remains a central component of deep learning systems attempting to learn in non-stationary settings, and we will adopt here conventions from recent work (Zhang & Sutton, 2017; Riemer et al., 2017b) leveraging this approach. The central feature of experience replay is keeping a memory of examples seen M that is interleaved with the training of the current example with the goal of making training more stable. As a result, experience replay approximates the objective in equation 3 to the extent that M approximates D: θ = argmin θ E(x,y)∼M [L(x, y)], (5) M has a current size Msize and maximum size Mmax. In our work, we update the buffer with reservoir sampling (Appendix F). This ensures that at every time-step the probability that any of the N examples seen has of being in the buffer is equal to Msize/N . The content of the buffer resembles a stationary distribution over all examples seen to the extent that the items stored captures the variation of past examples. Following the standard practice in offline learning, we train by randomly sampling a batch B from the distribution captured by M . Prioritizing the current example: the variant of experience replay we explore differs from offline learning in that the current example has a special role ensuring that it is always interleaved with the examples sampled from the replay buffer. This is because before we proceed to the next example, we want to make sure our algorithm has the ability to optimize for the current example (particularly if it is not added to the memory). Over N examples seen, this still implies that we have trained with each example as the current example with probability per step of 1/N . We provide algorithms further detailing how experience replay is used in this work in Appendix G (algorithms 4 and 5). Concerns about storing examples: Obviously, it is not scalable to store every experience seen in memory. As such, in this work we focus on showing that we can achieve greater performance than baseline techniques when each approach is provided with only a small memory buffer. 3.2 COMBINING EXPERIENCE REPLAY WITH OPTIMIZATION BASED META-LEARNING First order meta-learning: One of the most popular meta-learning algorithms to date is Model Agnostic Meta-Learning (MAML) (Finn et al., 2017). MAML is an optimization based meta-learning algorithm with nice properties such as the ability to approximate any learning algorithm and the ability to generalize well to learning data outside of the previous distribution (Finn & Levine, 2017). One aspect of MAML that limits its scalability is the need to explicitly compute second derivatives. 
The authors proposed a variant called first-order MAML (FOMAML), which is defined by ignoring the second derivative terms to address this issue and surprisingly found that it achieved very similar performance. Recently, this phenomenon was explained by Nichol & Schulman (2018) who noted through Taylor expansion that the two algorithms were approximately optimizing for the same loss function. Nichol & Schulman (2018) also proposed an algorithm, Reptile, that efficiently optimizes for approximately the same objective while not requiring that the data be split into training and testing splits for each task learned as MAML does. Reptile is implemented by optimizing across s batches of data sequentially with an SGD based optimizer and learning rate α. After training on these batches, we take the initial parameters before training θ0 and update them to θ0 ← θ0 + β ∗ (θk − θ0) where β is the learning rate for the meta-learning update. The process repeats for each series of s batches (algorithm 2). Shown in terms of gradients in Nichol & Schulman (2018), Reptile approximately optimizes for the following objective over a set of s batches: θ = argmin θ EB1,...,Bs∼D[2 s∑ i=1 [L(Bi)− i−1∑ j=1 α ∂L(Bi) ∂θ · ∂L(Bj) ∂θ ]], (6) where B1, ..., Bs are batches within D. This is similar to our motivation in equation 4 to the extent that gradients produced on these batches approximate samples from the stationary distribution. The MER learning objective: In this work, we modify the Reptile algorithm to properly integrate it with an experience replay module, facilitating continual learning while maximizing transfer and minimizing interference. As we describe in more detail during the derivation in Appendix I, achieving the Reptile objective in an online setting where examples are provided sequentially is non-trivial and is in part only achievable because of our sampling strategies for both the buffer and batch. Following our remarks about experience replay from the prior section, this allows us to optimize for the following objective in a continual learning setting using our proposed MER algorithm: θ = argmin θ E[(x11,y11),...,(xsk,ysk)]∼M [2 s∑ i=1 k∑ j=1 [L(xij , yij)− i−1∑ q=1 j−1∑ r=1 α ∂L(xij , yij) ∂θ ·∂L(xqr, yqr) ∂θ ]]. (7) The MER algorithm: MER maintains an experience replay style memory M with reservoir sampling and at each time step draws s batches including k − 1 random samples from the buffer to be trained alongside the current example. Each of the k examples within each batch is treated as its own Reptile batch of size 1 with an inner loop Reptile meta-update after that batch is processed. We then apply the Reptile meta-update again in an outer loop across the s batches. We provide further details for MER in algorithm 1. This procedure approximates the objective of equation 7 when β = 1. The sample function produces s batches for updates. Each batch is created by first adding the current example and then interleaving k − 1 random examples from M . Controlling the degree of regularization: In light of our ideal objective in equation 4, we can see that using a SGD batch size of 1 has an advantage over larger batches because it allows for the second derivative information conveyed to the algorithm to be fine grained on the example level. Another reason to use sample level effective batches is that for a given number of samples drawn from the buffer, we maximize s from equation 6. 
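As a point of reference for equation 6 and the discussion that follows, the basic Reptile round on stationary data (algorithm 2 in Appendix E) can be sketched as below. This is our own illustrative code, assuming mini-batch tensors and a PyTorch model; it is not the released implementation.

import torch
import torch.nn.functional as F
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def reptile_round(model, batches, alpha, beta):
    # One Reptile round: run SGD through s batches, then move the initial
    # parameters part of the way toward the final parameters.
    theta_0 = parameters_to_vector(model.parameters()).detach().clone()
    opt = torch.optim.SGD(model.parameters(), lr=alpha)
    for xs, ys in batches:
        opt.zero_grad()
        F.cross_entropy(model(xs), ys).backward()
        opt.step()
    theta_s = parameters_to_vector(model.parameters()).detach()
    # Reptile meta-update: theta <- theta_0 + beta * (theta_s - theta_0).
    vector_to_parameters(theta_0 + beta * (theta_s - theta_0), model.parameters())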
In equation 6, the typical offline learning loss has a weighting proportional to s and the regularizer term to maximize transfer and minimize interference has a weighting proportional to αs(s−1)/2. This implies that by maximizing the effective s we can put more weight on the regularization term. We found that for a fixed number of examples drawn from M , we consistently performed better converting to a long list of individual samples than we did using proper batches as in Nichol & Schulman (2018) for few shot learning. Prioritizing current learning: To ensure strong regularization, we would like our number of batches processed in a Reptile update to be large – enough that experience replay alone would start to overfit to M . As such, we also need to make sure we provide enough priority to learning the current example, particularly because we may not store it in M . To achieve this in algorithm 1, we sample s separate batches from M that are processed sequentially and each interleaved with the current example. In Appendix H we also outline two additional variants of MER with very similar properties in that they effectively approximate for the same objective. In one we choose one big batch of size sk − s memories and s copies of the current example (algorithm 6). In the other, we choose one memory batch of size k−1 with a special current item learning rate of sα (algorithm 7). Unique properties: In the end, our approach amounts to a quite easy to implement and computationally efficient extension of SGD, which is applied to an experience replay buffer by leveraging the machinery of past work on optimization based meta-learning. However, the emergent regularization on learning is totally different than those previously considered. Past work on optimization based meta-learning has enabled fast learning on incoming data without considering past data. Meanwhile, past work on experience replay only focused on stabilizing learning by approximating stationary conditions without altering model parameters to change the dynamics of transfer and interference. 4 EVALUATION FOR SUPERVISED CONTINUAL LIFELONG LEARNING To test the efficacy of MER we compare it to relevant baselines for continual learning of many supervised tasks from Lopez-Paz & Ranzato (2017) (see Appendix D for in-depth descriptions): • Online: represents online learning performance of a model trained straightforwardly one example at a time on the incoming non-stationary training data by simply applying SGD. • Independent: an independent predictor per task with less hidden units proportional to the number of tasks. When useful, it can be initialized by cloning the last predictor. • Task Input: has the same architecture as Online, but with a dedicated input layer per task. • EWC: Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is an algorithm that modifies online learning where the loss is regularized to avoid catastrophic forgetting. • GEM: Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) is an approach for making efficient use of episodic storage by following gradients on incoming examples to the maximum extent while altering them so that they do not interfere with past memories. An independent adhoc analysis is performed to alter each incoming gradient. In contrast to MER, nothing generalizable is learned across examples about how to alter gradients. 
We follow Lopez-Paz & Ranzato (2017) and consider final retained accuracy across all tasks after training sequentially on all tasks as our main metric for comparing approaches. Moving forward we will refer to this metric as retained accuracy (RA). In order to reveal more characteristics of the learning behavior, we also report the learning accuracy (LA) which is the average accuracy for each task directly after it is learned. Additionally, we report the backward transfer and interference (BTI) as the average change in accuracy from when a task is learned to the end of training. A highly negative BTI reflects catastrophic forgetting. Forward transfer and interference (Lopez-Paz & Ranzato, 2017) is only applicable for one task we explore, so we provide details in Appendix K. Question 1 How does MER perform on supervised continual learning benchmarks? To address this question we consider two continual learning benchmarks from Lopez-Paz & Ranzato (2017). MNIST Permutations is a variant of MNIST first proposed in Kirkpatrick et al. (2017) where each task is transformed by a fixed permutation of the MNIST pixels. As such, the input distribution of each task is unrelated. MNIST Rotations is another variant of MNIST proposed in Lopez-Paz & Ranzato (2017) where each task contains digits rotated by a fixed angle between 0 and 180 degrees. We follow the standard benchmark setting from Lopez-Paz & Ranzato (2017) using a modest memory buffer of size 5120 to learn 1000 sampled examples across each of 20 tasks. We provide detailed information about our architectures and hyperparameters in Appendix J. In Table 1 we report results on these benchmarks in comparison to our baseline approaches. Clearly GEM outperforms our other baselines, but our approach adds significant value over GEM in terms of retained accuracy on both benchmarks. MER achieves this by striking a superior balance between transfer and interference with respect to the past and future data. MER displays the best adaption to incoming tasks, while also providing very strong retention of knowledge when learning future tasks. EWC and using a task specific input layer both also lead to gains over standard online learning in terms of retained accuracy. However, they are quite far below the performance of approaches that make usage of episodic storage. While EWC does not store examples, in storing the Fisher information for each task it accrues more incremental resources than the episodic storage approaches. Question 2 How do the performance gains from MER vary as a function of the buffer size? To make progress towards the greater goals of lifelong learning, we would like our algorithm to make the most use of even a modest buffer. This is because in extremely large scale settings it is unrealistic to assume a system can store a large percentage of previous examples in memory. As such, we would like to compare MER to GEM, which is known to perform well with an extremely small memory buffer (Lopez-Paz & Ranzato, 2017). We consider a buffer size of 500, that is over 10 times smaller than the standard setting on these benchmarks. Additionally, we also consider a buffer size of 200, matching the smallest setting explored in Lopez-Paz & Ranzato (2017). This setting corresponds to an average storage of 1 example for each combination of task and class. We report our results in Table 2. The benefits of MER seem to grow as the buffer becomes smaller. In the smallest setting, MER provides more than a 10% boost in retained accuracy on both benchmarks. 
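As a concrete reading of the metrics above, the sketch below computes retained accuracy (RA), learning accuracy (LA), and backward transfer and interference (BTI) from a matrix acc[i][j] holding accuracy on task j evaluated right after training on task i; the matrix layout and the toy numbers are our assumptions for illustration only.

import numpy as np

def continual_metrics(acc):
    # acc[i, j]: accuracy on task j evaluated right after training on task i.
    acc = np.asarray(acc)
    T = acc.shape[0]
    retained = acc[-1].mean()                                    # RA: final accuracy over all tasks
    learning = np.mean([acc[i, i] for i in range(T)])            # LA: accuracy right after learning each task
    bti = np.mean([acc[-1, i] - acc[i, i] for i in range(T)])    # BTI: change from learn time to end of training
    return retained, learning, bti

# Example with a 3-task run (numbers are made up).
print(continual_metrics([[0.9, 0.1, 0.1],
                         [0.8, 0.9, 0.1],
                         [0.7, 0.8, 0.9]]))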
Question 3 How effective is MER at dealing with increasingly non-stationary settings? Another larger goal of lifelong learning is to enable continual learning with only relatively few examples per task. This setting is particularly difficult because we have less data to characterize each class to learn from and our distribution is increasingly non-stationary over a fixed amount of training. We would like to explore how various models perform in this kind of setting. To do this we consider two new benchmarks. Many Permutations is a variant of MNIST Permutations that has 5 times more tasks (100 total) and 5 times less training examples per task (200 each). Meanwhile we also explore the Omniglot (Lake et al., 2011) benchmark treating each of the 50 alphabets to be a task (see Appendix J for experimental details). Following multi-task learning conventions, 90% of the data is used for training and 10% is used for testing (Yang & Hospedales, 2017). Overall there are 1623 characters. We learn each character and task sequentially with a task specific output layer. We report continual learning results using these new datasets in Table 3. The effect on Many Permutations of efficiently using episodic storage becomes even more pronounced when the setting becomes more non-stationary. GEM and MER both achieve nearly double the performance of EWC and online learning. We also see that increasingly non-stationary settings lead to a larger performance gain for MER over GEM. Gains are quite significant for Many Permutations and remarkable for Omniglot. Omniglot is even more non-stationary including slightly fewer examples per task and MER nearly quadruples the performance of baseline techniques. Considering the poor performance of online learning and EWC it is natural to question whether or not examples were learned in the first place. We experiment with using as many as 100 gradient descent steps per incoming example to ensure each example is learned when first seen. However, due to the extremely non-stationary setting no run of any variant we tried surpassed 5.5% retained accuracy. GEM also has major deficits for learning on Omniglot that are resolved by MER which achieves far better performance when it comes to quickly learning the current task. GEM maintains a buffer using a recent item based sampling strategy and thus can not deal with non-stationarity within the task nearly as well as reservoir sampling. Additionally, we found that the optimization based on the buffer was significantly less effective and less reliable as the quadratic program fails for many hyperparameter values that lead to non-positive definite matrices. Unfortunately, we could not get GEM to consistently converge on Omniglot for a memory size of 500 (significantly less than the number of classes), meanwhile MER handles it well. In fact, MER greatly outperforms GEM with an order of magnitude smaller buffer. We provide additional details about our experiments on Omniglot in Figure 3. We plot retained training accuracy, retained testing accuracy, and computation time for the entire training period using one CPU. We find that MER strikes the best balance of computational efficiency and performance even when using algorithm 1 for MER which performs more computation than algorithm 7. The computation involved in the GEM update does not scale well to large CNN models like those that are popular for Omniglot. 
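For concreteness, a stream of MNIST Permutations style tasks can be generated as sketched below: each task applies one fixed pixel permutation to every image, and Many Permutations simply uses more tasks with fewer examples each. This is our own illustrative data pipeline under assumed array shapes, not the benchmark code of Lopez-Paz & Ranzato (2017).

import numpy as np

def make_permutation_tasks(images, labels, n_tasks, examples_per_task, seed=0):
    # images: array of shape (N, 784); labels: array of shape (N,).
    rng = np.random.RandomState(seed)
    tasks = []
    for t in range(n_tasks):
        perm = rng.permutation(images.shape[1])  # one fixed pixel permutation per task
        idx = rng.choice(images.shape[0], examples_per_task, replace=False)
        tasks.append((images[idx][:, perm], labels[idx]))
    return tasks

# MNIST Permutations: 20 tasks x 1000 examples; Many Permutations: 100 tasks x 200 examples, e.g.
# tasks = make_permutation_tasks(train_images, train_labels, n_tasks=100, examples_per_task=200)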
MER is far better able to fit the training data than our baseline models while maintaining a computational efficiency closer to online update methods like EWC than GEM. 5 EVALUATION FOR CONTINUAL REINFORCEMENT LEARNING Question 4 Can MER improve a DQN with ER in continual reinforcement learning settings? We considered the evaluation of MER in a continual reinforcement learning setting where the environment is highly non-stationary. In order to produce these non-stationary environments in a controlled way suitable for our experimental purposes, we utilized arcade games provided by Tasfi (2016). Specifically, we used Catcher and Flappy Bird, two simple but interesting enough environments (see Appendix N.1 for details). For the purposes of our explanation, we will call each set of fixed game-dependent parameters a task5. The multi-task setting is then built by introducing changes in these parameters, resulting in non-stationarity across tasks. Each agent is once again evaluated based on its performance over time on all tasks. Our model uses a standard DQN model, developed for Atari (Mnih et al., 2015). See Appendix N.2 for implementation details. In Catcher, we then obtain different tasks by incrementally increasing the pellet velocity a total of 5 times during training. In Flappy Bird, the different tasks are obtained by incrementally reducing the separation between upper and lower pipes a total of 5 times during training. In Figure 4, we show the performance in Catcher when trained sequentially on 6 different tasks for 25k frames each to a maximum of 150k frames, evaluated at each point in time in all 6 tasks. Under these nonstationary conditions, a DQN using MER performs consistently better than the standard DQN with an experience replay buffer (see Appendix N.4 for further comments and ablation results). If we take as inspiration how humans perform, in the last stages of training we hope that a player that obtains good results in later tasks will also obtain good results in the first tasks, as the first tasks are subsumed in the latter ones. For example, in Catcher, the pellet moves faster in later tasks, and thus we expect to be able to do well on the first task. However, DQN forgets significantly how to get slowly moving pellets. In contrast, DQN-MER exhibits minimal or no forgetting after training on the rest of the tasks. This behavior is intuitive because we would expect transfer to happen naturally in this setting. We see similar behavior for Flappy Bird. DQN-MER becomes a Platinum player on the first task when it is learning the third task. This is a more difficult environment in which the pipe gap is noticeably smaller (see Appendix N.4). DQN-MER exhibits the kind of learning patterns expected from humans for these games, while a standard DQN struggles to generalize as the game changes and to retain knowledge over time. 6 FURTHER ANALYSIS OF THE APPROACH In this section we would like to dive deeper into how MER works. To do so we run additional detailed experiments across our three MNIST based continual learning benchmarks. Question 5 Does MER lead to a shift in the distribution of gradient dot products? 
We would like to directly verify that MER achieves our motivation in equation 7 and results in significant changes in the distribution of gradient dot products between new incoming examples and past examples over the course of learning when compared to experience replay (ER) from algorithm 5Agents are not provided task information, forcing them to identify changes in game play on their own. 5. For these experiments, we maintain a history of all examples seen that is totally separate from our notion of memory buffers that only include a partial history of examples. Every time we receive a new example we use the current model to extract a gradient direction and we also randomly sample five examples from the previous history. We save the dot products of the incoming example gradient with these five past example gradients and consider the mean of the distribution of dot products seen over the course of learning for each model. We run this experiment on the best hyperparamater setting for both ER and MER from algorithm 6 with one batch per example for fair comparison. Each model is evaluated five times over the course of learning. We report mean and standard deviations of the mean gradient dot product across runs in Table 4. We can thus verify that a very significant and reproducible difference in the mean gradient encountered is seen for MER in comparison to ER alone. This difference alters the learning process making incoming examples on average result in slight transfer rather than significant interference. This analysis confirms the desired effect of the objective function in equation 7. For these tasks there are enough similarities that our meta-learning generalizes very well into the future. We should also expect it to perform well during drastic domain shifts like other meta-learning algorithms driven by SGD alone (Finn & Levine, 2017). Question 6 What components of MER are most important? We would like to further analyze our MER model to understand what components add the most value and when. We want to understand how powerful our proposed variants of ER are on their own and how much is added by adding meta-learning to ER. In Appendix L we provide detailed results considering ablated baselines for our experiments on the MNIST lifelong learning benchmarks. 6 Our versions of ER consistently provide gains over GEM on their own, but the techniques perform very comparably when we also maintain GEM’s buffer with reservoir sampling or use ER with a GEM style buffer. Additionally, we see that adding meta-learning to ER consistently results in performance gains. In fact, meta-learning appears to provide increasing value for smaller buffers. In Appendix M, we provide further validation that our results are reproducible across runs and seeds. We would also like to compare the variants of MER proposed in algorithms 1, 6, and 7. Conceptually algorithms 1 and 7 represent different mechanisms of increasing the importance of the current example in algorithm 6. We find that all variants of MER result in significant improvements on ER. Meanwhile, the variants that increase the importance of the current example see a further improvement in performance, performing quite comparably to each other. Overall, in our MNIST experiments algorithm 7 displays the best tradeoff of computational efficiency and performance. Finally, we conducted experiments demonstrating that adaptive optimizers like Adam and RMSProp can not account for the gap between ER and MER. 
Particularly for smaller buffer sizes, these approaches overfit more on the buffer and actually hurt generalization in comparison to SGD. 7 CONCLUSION In this paper we have cast a new perspective on the problem of continual learning in terms of a fundamental trade-off between transfer and interference. Exploiting this perspective, we have in turn developed a new algorithm Meta-Experience Replay (MER) that is well suited for application to general purpose continual learning problems. We have demonstrated that MER regularizes the objective of experience replay so that gradients on incoming examples are more likely to have transfer and less likely to have interference with respect to past examples. The result is a general purpose solution to continual learning problems that outperforms strong baselines for both supervised continual learning benchmarks and continual learning in non-stationary reinforcement learning environments. Techniques for continual learning have been largely driven by different conceptualizations of the fundamental problem encountered by neural networks. We hope that the transfer-interference tradeoff can be a useful problem view for future work to exploit with MER as a first successful example. 6Code available at https://github.com/mattriemer/mer. ACKNOWLEDGMENTS We would like to thank Pouya Bashivan, Christopher Potts, Dan Jurafsky, and Joshua Greene for their input and support of this work. Additionally, we would like to thank Arslan Chaudhry and Marc’Aurelio Ranzato for their helpful comments and discussions. We also thank the three anonymous reviewers for their valuable suggestions. This research was supported by the MIT-IBM Watson AI Lab, and is based in part upon work supported by the Stanford Data Science Initiative and by the NSF under Grant No. BCS-1456077 and the NSF Award IIS-1514268. A CONTINUAL LEARNING PROBLEM FORMULATION In the classical offline supervised learning setting, a learning agent is given a fixed training data set D = {(xi, yi)}ni=1 of n samples, each containing an input feature vector xi ∈ X associated with the corresponding output (target, or label) yi ∈ Y; a common assumption is that the training samples are i.i.d. samples drawn from the same unknown joint probability distribution P (x, y). The learning task is often formulated as a function approximation problem, i.e. finding a function, or model, fθ(x) : X → Y from a given class of models (e.g., neural networks, decision trees, linear functions, etc.) where θ are the parameters estimated from data. Given a loss function L(fθ(x), y), the parameter estimation is formulated as an empirical risk minimization problem: minθ 1 |D| ∑ (xi,yi)∼D L(fθ(x), y). On the contrary, the online learning setting does not assume a fixed training dataset but rather a stream of data samples, where unlabeled feature vectors arrive one at a time, or in small minibatches, and the learner must assign labels to those inputs, receive the correct labels, and update the model accordingly, in iterative fashion. While classical online learning assumes i.i.d. samples, continual or lifelong learning does not make such an assumption, and requires a learning agent to handle non-stationary data streams. In this work, we define continual learning as online learning from a non-stationary input data stream, with a specific type of non-stationarity as defined below. 
Namely, we follow a commonly used setting to define non-stationary conditions for continual learning, dubbed locally i.i.d by Lopez-Paz & Ranzato (2017), where the agent learns over a sequence of separate stationary distributions one after another. We call the individual stationary distributions tasks, where each task tk is an online supervised learning problem associated with its own data probability distribution Pk(x, y). Namely, we are given a (potentially infinite) sequence (x1, y1, t1), ..., (xi, yi, ti), ..., (xi+j , yi+j , ti+j) While many continual learning methods assume the task descriptors tk are available to a learner, we are interested in developing approaches which do not have to rely on such information and can learn continuously without explicit announcement of the task change. Borrowing terminology from Chaudhry et al. (2018), we explore the single-headed setting in most of our experiments, which keeps learning a common function fθ across changing tasks. In contrast, multi-headed learning, which we consider for our Omniglot experiments, involves a separate final classification layer for each task. This makes more sense in case of Omniglot dataset, where the number of classes for each task varies considerably from task to task. We should also note that for Omniglot we consider a setting that is locally i.i.d. at the class level rather than the task level. B RELATION TO PAST WORK With regard to the continual learning setting specifically, other recent work has explored similar operational measures of transfer and interference. For example, the notions of Forward Transfer and Backward Transfer were explored in Lopez-Paz & Ranzato (2017). However, the approach of that work, GEM, was primarily concerned with solving the classic stability-plasticity dilemma (Carpenter & Grossberg, 1987) at a specific instance of time. Adjustments to gradients on the current data are made in an ad hoc manner solving a quadratic program separately for each example. In our work we try to learn a generalizable theory about weight sharing that can learn to influence the distribution of gradients not just in the past and present, but in the future as well. Additionally, in Chaudhry et al. (2018) similar ideas were explored with operational measures of intransigence (the inability to learn new data) and forgetting (the loss of previous performance). These measures are also intimately related to the stability-plasticity dilemma as intransigence is high when plasticity is low and forgetting is high when stability is low. The major distinction in the transfer-interference trade-off proposed in this work is that we aim to learn the optimal weight sharing scheme to optimize for the stability-plasticity dilemma with the hope that our learning about weight sharing will improve the stability and efficacy of learning on unseen data as well. With regard to the problem of weight-sharing in neural networks more generally, a host of different strategies have been proposed in the past to deal with the problems of catastrophic forgetting and/or the stability-plasticity dilemma (for review, see French (1999)). For example, one strategy for alleviating catastrophic forgetting is to make distributed representations less distributed – or semi-distributed (French, 1991) – for the case of past learning. Activation sharpening as introduced by French (1991) is a prominent example. 
A second strategy known as dual network models (McClelland et al., 1995; Ans & Rousset, 1997) is based on the neurobiological finding that both hippocampal and cortical circuits contributed differentially to memory. The cortical circuits are highly distributed with overlapping representations suitable for task generalization, while the more sparse hippocampal representations tend to be non-overlapping. The existence of dual circuits provides an extra degree of freedom for balancing the dual constraints of stability and plasticity. In a similar spirit, models have been proposed that have two classes of weights operating on two different timescales (Hinton & Plaut, 1987). A third strategy also motivated by neurobiological considerations is the use of latent synaptic dynamics (Fusi et al., 2005; Lahiri & Ganguli, 2013). Here the basic idea is that synaptic strength is determined by a multiple of variables, including latent ones not easily observed, operating at different timescales such that their net effect is to provide the system with additional degrees-of-freedom to store past experience without interfering with current learning. A fourth strategy is the use of feedback mechanisms to stabilize representations (Carpenter & Grossberg, 1987; Murre, 1992). In this class of models, a previously experienced memory will trigger top down feedback that prevents plasticity, while novel stimuli that experience no such feedback trigger plasticity. All of these approaches have their own strengths and weaknesses with respect to the stability-plasticity dilemma and, by extension, the transfer-interference trade-off we propose. Another relevant work is the POWERPLAY framework (Schmidhuber, 2004; 2013) which is a method for asymptotically optimal curriculum learning that by definition cannot forget previously learned skills. POWERPLAY also uses environment-independent replay of behavioral traces to avoid forgetting previous skills. However, POWERPLAY is orthogonal to our work as we consider a different setting where the agent cannot directly control the new tasks that will be encountered in the environment and thus must instead learn to adapt and react to non-stationarity conditions. In contrast to past work on meta-learning for few shot learning (Santoro et al., 2016; Vinyals et al., 2016; Ravi & Larochelle, 2016; Finn et al., 2017) and reinforcement learning across successive tasks (Al-Shedivat et al., 2018), we are not only trying to improve the speed of learning on new data, but also trying to do it in a way that preserves knowledge of past data and generalizes to future data. While past work has considered learning to influence gradient angles, so that there is more alignment and thus faster learning within a task, we focus on a setting where we would like to influence gradient angles from all tasks at all points in time. As our model aims to influence the dynamics of weight sharing, it bears conceptual similarity to mixtures of experts (Jacobs et al., 1991) style models for lifelong and multi-task learning (Misra et al., 2016; Riemer et al., 2016b; Aljundi et al., 2017; Fernando et al., 2017; Shazeer et al., 2017; Rosenbaum et al., 2018). MER implicitly affects the dynamics of weight sharing, but it is possible that combining it with mixtures of experts models could further amplify the ability for the model to control these dynamics. This is potentially an interesting avenue for future work. 
The options framework has also been considered as a solution to a similar continual RL setting to the one we explore (Mankowitz et al., 2018). Options formalize the notion of temporally abstraction actions in RL. Interestingly, generic architectures designed for shallow (Bacon et al., 2017) or deep (Riemer et al., 2018) hierarchies of options in essence learn very complex patterns of weight sharing over time. The option hierarchies constitute an explicit mechanism of controlling the extent of weight sharing for continual learning, allowing for orthogonalization of weights relating to different skills. In contrast, our work explores a method of implicitly optimizing weight sharing for continual learning that improves the efficacy of experience replay. MER should be simple to implement in concert with options based methods and combining the two is an interesting direction for future work. C THE CONNECTION BETWEEN WEIGHT SHARING AND THE TRANSFER-INTERFERENCE TRADE-OFF In this section we would like to generalize our interpretation of a large set of different weight sharing schemes including (Riemer et al., 2015; Bengio et al., 2015; Rosenbaum et al., 2018; Serrà et al., 2018) and how the concept of weight sharing impacts the dynamics of transfer (equation 1) and interference (equation 2). We will assume that we have a total parameter space θ that can be used by our network at any point in time. However, it is not a requirement that all parameters are actually used at all points in time. So, we can consider two specific instances in time. One where we receive data point (x1, y1) and leverage parameters θ1. Then, at the other instance in time, we receive data point (x2, y2) and leverage parameters θ2. θ1 and θ2 are both subsets of θ and critically the overlap between these subsets influences the possible extent of transfer and interference when training on either data point. First let us consider two extremes. In the first extreme imagine θ1 and θ2 are entirely nonoverlapping. As such ∂L(x1,y1)∂θ · ∂L(x2,y2) ∂θ = 0. On the positive side, this means that our solution has no potential for interference between the examples. On the other hand, there is no potential for transfer either. On the other extreme, we can imagine that θ1 = θ2. In this case, the potential for both transfer and interference is maximized as gradients with respect to every parameter have the possibility of a non-zero dot product with each other. From this discussion it is clear that both the extreme of full weight sharing and the extreme of no weight sharing have value depending on the relationship between data points. What we would really like for continual learning is to have a system that learns when to share weights and when not to on its own. To the extent that our learning about weight sharing generalizes, this should allow us to find an optimal solution to the transfer-interference trade-off. D FURTHER DESCRIPTIONS AND COMPARISONS WITH BASELINE ALGORITHMS Independent: originally reported in (Lopez-Paz & Ranzato, 2017) is the performance of an independent predictor per task which has the same architecture but with less hidden units proportional to the number of tasks. The independent predictor can be initialized randomly or clone the last trained predictor depending on what leads to better performance. 
EWC: Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is an algorithm that modifies online learning where the loss is regularized to avoid catastrophic forgetting by considering the importance of parameters in the model as measured by their fisher information. EWC follows the catastrophic forgetting view of the continual learning problem by promoting less sharing of parameters for new learning that were deemed to be important for performance on old memories. We utilize the code provided by Lopez-Paz & Ranzato (2017) in our experiments. The only difference in our setting is that we provide the model one example at a time to test true continual learning rather than providing a batch of 10 examples at a time. GEM: Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) is an algorithm meant to enhance the effectiveness of episodic storage based continual learning techniques by allowing the model to adapt to incoming examples using SGD as long as the gradients do not interfere with examples from each task stored in a memory buffer. If gradients interfere leading to a decrease in the performance of a past task, a quadratic program is used to solve for the closest gradient to the original that does not have negative gradient dot products with the aggregate memories from any previous tasks. GEM is known to achieve superior performance in comparison to other recently proposed techniques that use episodic storage like Rebuffi et al. (2017), making superior use of small memory buffer sizes. GEM follows similar motivation to our approach in that it also considers the intelligent use of gradient dot product information to improve the use case of supervised continual learning. As a result, it is a very strong and interesting baseline to compare with our approach. We modify the original code and benchmarks provided by Lopez-Paz & Ranzato (2017). Once again the only difference in our setting is that we provide the model one example at a time to test true continual learning rather than providing a batch of 10 examples at a time. We can consider the GEM algorithm as tailored to the stability-plasticity dilemma conceptualization of continual learning in that it looks to preserve performance on past tasks while allowing for maximal plasticity to the new task. To achieve this, GEM solves a quadratic program to find an approximate gradient gnew that closely matches ∂L(xnew,ynew) ∂θ while ensuring that the following constraint holds: gnew · ∂L(xold, yold) ∂θ > 0. (8) E REPTILE ALGORITHM We detail the standard Reptile algorithm from (Nichol & Schulman, 2018) in algorithm 2. The sample function randomly samples s batches of size k from dataset D. The SGD function applies min-batch stochastic gradient descent over a batch of data given a set of current parameters and learning rate. Algorithm 2 Reptile for Stationary Data procedure TRAIN(D, θ, α, β, s, k) while not done do // Draw batches from data: B1, ..., Bs ← sample(D, s, k) θ0 ← θ for i = 1, ..., s do θi ← SGD(Bi, θi−1, α) end for // Reptile meta-update: θ ← θ0 + β(θs − θ0) end while return θ end procedure F DETAILS ON RESERVOIR SAMPLING Throughout this paper we refer to updates to our memory M as M ←M ∪{(x, y)}. We would like to now provide details on how we update our memory buffer using reservoir sampling as outlined in Vitter (1985) (algorithm 3). Reservoir sampling solves the problem of keeping some limited number M of N total items seen before with equal probability MN when you don’t know what number N will be in advance. 
The randomInteger function randomly draws an integer inclusively between the provided minimum and maximum values. Algorithm 3 Reservoir Sampling with Algorithm R procedure RESERVOIR(M,N, x, y) if M > N then M [N ]← (x, y) else j = randomInteger(min = 0,max = N) if j < M then M [j]← (x, y) end if end if return M end procedure G EXPERIENCE REPLAY ALGORITHMS We detail the our variant of the experience replay in algorithm 4. This procedure closely follows recent enhancements discussed in Zhang & Sutton (2017); Riemer et al. (2017b;a) The sample function randomly samples k − 1 examples from the memory buffer M and interleaves them with the current example to form a single size k batch. The SGD function applies mini-batch stochastic gradient descent over a batch of data given a set of current parameters and learning rate. Algorithm 4 Experience Replay (ER) with Reservoir Sampling procedure TRAIN(D, θ, α, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B ← sample(x, y, k,M) // Update parameters with mini-batch SGD: θ ← SGD(B, θ, α) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure Unfortunately, it is not straightforward to implement algorithm 4 in all circumstances. In particular, it depends whether the neural network architecture is single headed (sharing an output layer and output space among all tasks) or multi-headed (where each task gets its own unique output space). In multi-headed settings, it is common to consider the tasks in separate batches and to equally weight the sampled tasks during each update. This results in training the parameters evenly for each task and is particularly important so we pay equal attention to each set of task specific parameters. We detail an approach that separates tasks into sub-batches for a balanced update in algorithm 5. Here L is the loss given a set of parameters over a batch of data and SGD applies a mini-batch gradient descent update rule over a loss given a set of parameters and learning rate. Algorithm 5 Experience Replay (ER) with Tasks procedure TRAIN(D, θ, α, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B ← sample(x, y, k,M) // Compute balanced loss across tasks loss = 0.0 for task in B do loss = loss+ L(B[task], θ) end for // Update parameters with mini-batch SGD: θ ← SGD(loss, θ, α) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure Our experiments demonstrate that both variants of experience replay are very effective for continual learning. Meanwhile, each performs significantly better than the other on some datasets and settings. H THE VARIANTS OF MER We detail two additional variants of MER (algorithm 1) in algorithms 6 and 7. The sample function takes on a slightly different meaning in each variant of the algorithm. In algorithm 1 sample is used to produce s batches consisting of k − 1 random examples from the memory buffer and the current example. In algorithm 6 sample is used to produce one batch consisting of sk − s examples from the memory buffer and s copies of the current example. In algorithm 7 sample is used to produce one batch consisting of k − 1 examples from the memory buffer. In algorithm 6, sample places the current example at the end of the batch. Meanwhile, in algorithm 7, sample places the current example in a random location within the batch. 
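As a concrete reference for the buffer maintenance and replay procedures above, the following is a minimal Python sketch of the reservoir update (algorithm 3) and the basic replay loop (algorithm 4); sgd_step and stream are illustrative stand-ins for the mini-batch update and the non-stationary data sequence rather than the exact implementation.

import random

def reservoir_update(memory, n_seen, example, max_size):
    # Algorithm 3 (Algorithm R): every example seen so far stays in the buffer
    # with equal probability max_size / (n_seen + 1).
    if len(memory) < max_size:
        memory.append(example)
    else:
        j = random.randint(0, n_seen)   # inclusive on both ends, as with randomInteger
        if j < max_size:
            memory[j] = example
    return memory

def experience_replay(stream, params, sgd_step, k, max_size):
    # Algorithm 4: interleave the current example with k - 1 buffered examples,
    # apply one mini-batch SGD update, then update the buffer.
    memory, n_seen = [], 0
    for (x, y) in stream:
        batch = [(x, y)] + random.sample(memory, min(k - 1, len(memory)))
        params = sgd_step(batch, params)
        memory = reservoir_update(memory, n_seen, (x, y), max_size)
        n_seen += 1
    return params, memory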
In contrast, the SGD function carries a common meaning across algorithms, applying stochastic gradient descent over a particular input and output given a set of current parameters and learning rate. Algorithm 6 Meta-Experience Replay (MER) - One Big Batch procedure TRAIN(D, θ, α, γ, sk) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B ← sample(x, y, s, k,M) θ0 ← θ for i = 1, ..., sk do xc, yc ← Bi[j] θi ← SGD(xc, yc, θi−1, α) end for // Reptile meta-update: θ ← θ0 + γ(θsk − θ0) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure Algorithm 7 Meta-Experience Replay (MER) - Current Example Learning Rate procedure TRAIN(D, θ, α, γ, s, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B, index← sample(k − 1,M) θ0 ← θ // SGD on individual samples from batch: for i = 1, ..., k − 1 do xc, yc ← Bi[j] if j = index // High learning rate SGD on current example: θk ← SGD(x, y, θk−1, sα) else θi ← SGD(xc, yc, θi−1, α) end for // Reptile meta-update: θ ← θ0 + γ(θk − θ0) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure I DERIVING THE EFFECTIVE OBJECTIVE OF MER We would like to derive what objective Meta-Experience Replay (algorithm 1) approximates and show that it is approximately the same objective from algorithms 6 and 7. We follow conventions from Nichol & Schulman (2018) and first demonstrate what happens to the effective gradients computed by the algorithm in the most trivial case. As in Nichol & Schulman (2018), this allows us to extrapolate an effective gradient that is a function of the number of steps taken. We can then consider the effective loss function that results in this gradient. Before we begin, let us define the following terms from Nichol & Schulman (2018): gi = ∂L(θi) ∂θi (gradient obtained during SGD) (9) θi+1 = θi − αgi (sequence of parameter vectors) (10) ḡi = ∂L(θi) ∂θ0 (gradient at initial point) (11) gji = ∂L(θi) ∂θj (gradient evaluated at point i with respect to parameters j) (12) H̄i = ∂2L(θi) ∂θ20 (Hessian at initial point) (13) Hji = ∂2L(θi) ∂θ2j (Hessian evaluated at point i with respect to parameters j) (14) In Nichol & Schulman (2018) they consider the effective gradient across one loop of reptile with size k = 2. As we have both an outer loop of Reptile applied across batches and an inner loop applied within the batch to consider, we start with a setting where the number of batches s = 2 and the number of examples per batch k = 2. 
Let’s recall from the original paper that the gradients of Reptile with k = 2 was: gReptile,k=2,s=1 = g0 + g1 = ḡ0 + ḡ1 − αH̄1ḡ0 +O(α2) (15) So, we can also consider the gradients of Reptile if we had 4 examples in one big batch (algorithm 6) as opposed to 2 batches of 2 examples: gReptile,k=4,s=1 = g0 + g1 + g2 + g3 = ḡ0 + ḡ1 + ḡ2 + ḡ3 − αH̄1ḡ0 − αH̄2ḡ0 − αH̄2ḡ1 − αH̄3ḡ0 − αH̄3ḡ1 − αH̄3ḡ2 +O(α2) (16) Now we can consider the case for MER where we define the parameter values as follows extending algorithm 1 where A stands for across batches and W stands for within batches: θ0 = θ A 0 = θ W 00 (17) θW01 = θ W 00 − αg00 (18) θW02 = θ W 01 − αg01 (19) θA1 = θ A 0 + β (θW02 − θA0 ) α = θ0 + β (θW02 − θ0) α = θW10 (20) θW11 = θ W 10 − αg10 (21) θW12 = θ W 11 − αg11 (22) θA2 = θ A 1 + β (θW12 − θA1 ) α (23) θ = θA0 + γβ (θA2 − θA0 ) β = θA0 + γ(θ A 2 − θA0 ) (24) gMER the gradient of Meta-Experience Replay can thus be defined analogously to the gradient of Reptile as: gMER = θA0 − θA2 β = θ0 − θA2 β (25) By simply applying Reptile from equation 15 we can derive the value of the parameters θA1 after updating with Reptile within the first batch in terms of the original parameters θ0: θA1 = θ0 − βḡ00 − βḡ01 + βαH̄01ḡ00 +O(βα2) (26) By subbing equation 26 into equation 23 we can see that: θA2 = θ0 − βḡ00 − βḡ01 + βαH̄01ḡ00 − βg10 − βg11 +O(βα2) (27) We can express g10 in terms of the initial point, by considering a Taylor expansion following the Reptile paper: g10 = ḡ10 + αH̄10(θ W 10 − θ0) +O(α2) (28) Then substituting in for θW10 we express g10 in terms of θ0: g10 = ḡ10 − αβH̄10ḡ00 − αβH̄10ḡ01 +O(α2) (29) We can then rewrite g11 by taking a Taylor expansions with respect to θW10 : g11 = g 10 11 − αH1011g10 +O(α2) (30) Taking another Taylor expansion we find that we can transform our expression for the Hessian: H1011 = H̄11 +O(α) (31) We can analogously also transform our expression our expression for g1011 : g1011 = ḡ11 + αH̄11(θ W 10 − θ0) +O(α2) (32) Substituting for θW10 in terms of θ0 g1011 = ḡ11 − αβH̄11ḡ00 − αβH̄11ḡ01 +O(α2) (33) We then substitute equation 31, equation 33, and equation 29 into equation 34: g11 = ḡ11 − αβH̄11ḡ00 − αβH̄11ḡ01 − αH̄11ḡ10 +O(α2) (34) Finally, we have all of the terms we need to express θA2 and we can then derive an expression for the MER gradient gMER: gMER = ḡ00 + ḡ01 + ḡ10 + ḡ11 −αH̄01ḡ00 − αH̄11ḡ10 − αβH̄10ḡ00 − αβH̄10ḡ01 − αβH̄11ḡ00 − αβH̄11ḡ01 +O(α2) (35) This equation is quite interesting and very similar to equation 16. As we would like to approximate the same objective, we can remove one hyperparameter from our model by setting β = 1. This yields: gMER = ḡ00 + ḡ01 + ḡ10 + ḡ11 −αH̄01ḡ00 − αH̄11ḡ10 − αH̄10ḡ00 − αH̄10ḡ01 − αH̄11ḡ00 − αH̄11ḡ01 +O(α2) (36) Indeed, with β set to equal 1, we have shown that the gradient of MER is the same as one loop of Reptile with a number of steps equal to the total number of examples in all batches of MER (algorithm 6) if the current example is mixed in with the same proportion. If we include in the current example for s of sk examples in our meta-replay batch, it gets the same overall priority in both cases which is s times larger than that of a random example drawn from the buffer. As such, we can also optimize an equivalent gradient using algorithm 7 because it uses a factor s to increase the priority of the gradient given to the current example. While β = 1 is an interesting special case of MER in algorithm 1, in general we find it can be useful to set β to be a value smaller than 1. 
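As a quick numerical check of the expansion above, the sketch below compares the effective Reptile gradient for k = 2 (equation 15) against ḡ0 + ḡ1 − αH̄1ḡ0 using two arbitrary quadratic losses; for quadratics the higher-order terms vanish, so the identity holds exactly. The losses and dimensions are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
d = 5
# Two quadratic losses L_i(theta) = 0.5 * theta^T A_i theta + b_i^T theta
A = [M @ M.T + np.eye(d) for M in (rng.normal(size=(d, d)), rng.normal(size=(d, d)))]
b = [rng.normal(size=d) for _ in range(2)]

def grad(i, theta):
    return A[i] @ theta + b[i]

alpha = 0.01
theta0 = rng.normal(size=d)

# Two sequential per-example SGD steps, as in Reptile with k = 2
theta1 = theta0 - alpha * grad(0, theta0)
theta2 = theta1 - alpha * grad(1, theta1)
g_reptile = (theta0 - theta2) / alpha               # effective update direction

# Expansion from equation 15: g0_bar + g1_bar - alpha * H1_bar @ g0_bar
g0_bar, g1_bar = grad(0, theta0), grad(1, theta0)
g_expansion = g0_bar + g1_bar - alpha * (A[1] @ g0_bar)

print(np.allclose(g_reptile, g_expansion))          # True: exact for quadratic losses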
In fact, in our experiments we consider the case when β is smaller than 1 and γ = 1. The success of this approach makes sense because the higher order terms in the Taylor expansion that reflect the mismatch between parameters across replay batches disturb the learning process. By setting β to a value below 1 we can reduce our comparative weighting on promoting inter batch gradient similarities rather than intra batch gradient similarities. It was noted in (Nichol & Schulman, 2018) that the following equality holds if the examples and order are random: E[H̄1ḡ0] = E[H̄0ḡ1] = 1 2 E[ ∂ ∂θ0 (ḡ0 · ḡ1)] (37) In our work to make sure this equality holds in an online setting, we must take multiple precautions as noted in the main text. The issue is that examples are received in a non-stationary sequence so when applied in a continual learning setting the order is not totally random or arbitrary as in the original Reptile work. We address this by maintaining our buffer using reservoir sampling, which ensures that any example seen before has a probability 1N of being a particular element in the buffer. We also randomly select over these elements to form a batch. As this makes the order largely arbitrary to the extent that our buffer includes all examples seen, we are approximating the random offline setting from the original Reptile paper. As such we can view the gradients in equation 16 and equation 36 as leading to approximately the following objective function: θ = argmin θ E(x11,y11),...,(xsk,ysk)∼M [2 s∑ i=1 k∑ j=1 [L(xij , yij)− i−1∑ q=1 j−1∑ r=1 α ∂L(xij , yij) ∂θ ·∂L(xqr, yqr) ∂θ ]]. (38) This is precisely equation 7 in the main text. J SUPERVISED CONTINUAL LIFELONG LEARNING For the supervised continual learning benchmarks leveraging MNIST Rotations and MNIST Permutations, following conventions, we use a two layer MLP architecture for all models with 100 hidden units in each layer. We also model our hyperparameter search after Lopez-Paz & Ranzato (2017) while providing statistics for each model across 5 random seeds. For Omniglot, following Vinyals et al. (2016) we scale the images to 28x28 and use an architecture that consists of a stack of 4 modules before a fully connected softmax layer. Each module includes a 3x3 convolution with 64 filters, a ReLU non-linearity and 2x2 max-pooling. J.1 HYPERPARAMETER SEARCH Here we report the hyper-parameter grids that we searched over in our experiments. We note in parenthesis the best values for MNIST Rotations (ROT) at each buffer size (ROT-5120, ROT-500, ROT-200), MNIST Permutations (PERM) at each buffer size (PERM-5120, PERM-500, PERM200), Many Permutations (MANY) at each buffer size (MANY-5120, MANY-500), and Omniglot (OMNI) at each buffer size (OMNI-5120, OMNI-500). 
• Online Learning – learning rate: [0.0001, 0.0003 (ROT), 0.001, 0.003 (PERM, MANY), 0.01, 0.03, 0.1 (OMNI)] • Independent Model Per Task – learning rate: [0.0001, 0.0003, 0.001, 0.003, 0.01 (ROT, PERM, MANY), 0.03, 0.1] • Task Specific Input Layer – learning rate: [0.0001, 0.0003, 0.001, 0.003, 0.01 (ROT, PERM), 0.03, 0.1] • EWC – learning rate: [0.001 (ROT, OMNI), 0.003 (MANY), 0.01 (PERM), 0.03, 0.1, 0.3, 1.0] – regularization: [1 (MANY), 3, 10 (PERM, OMNI), 30, 100 (ROT), 300, 1000, 3000, 10000, 30000] • GEM – learning rate: [0.001, 0.003 (MANY-500), 0.01 (ROT, PERM, OMNI, MANY-5120), 0.03, 0.1, 0.3, 1.0] – memory strength (γ): [0.0 (ROT-500, ROT-200, PERM-200, MANY-5120), 0.1 (MANY-500), 0.5 (OMNI), 1.0 (ROT-5120, PERM-5120, PERM-500)] • Experience Replay (Algorithm 4) – learning rate: [0.00003, 0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1 (ROT, PERM, MANY)] – batch size (k-1): [5 (ROT-500), 10 (ROT-200, PERM-500, PERM-200), 25 (ROT- 5120, PERM-5120, MANY), 50, 100, 250] • Experience Replay (Algorithm 5) – learning rate: [0.00003, 0.0001, 0.0003, 0.001, 0.003 (MANY-5120), 0.01 (ROT-500, ROT-200, PERM, MANY-500), 0.03 (ROT-5120), 0.1] – batch size (k-1): [5 (MANY-500), 10 (PERM-200, MANY-5120), 25 (PERM-5120, PERM-500), 50 (ROT-200), 100 (ROT-5120, ROT-500), 250] • Meta-Experience Replay (Algorithm 1) – learning rate (α): [0.01 (OMNI-5120), 0.03 (ROT-5120, PERM, MANY-500), 0.1 (ROT-500, ROT-200, OMNI-500)] – across batch meta-learning rate (γ): 1.0 – within batch meta-learning rate (β): [0.01 (ROT-500, ROT-200, MANY-5120), 0.03 (ROT-5120, PERM, MANY-500), 0.1, 0.3, 1.0 (OMNI)] – batch size (k-1): [5 (MANY, OMNI-500), 10 (ROT-500, ROT-200, PERM-200), 25 (PERM-500, OMNI-5120), 50, 100 (ROT-5120, PERM-5120)] – number of batches per example: [1, 2 (OMNI-500), 5 (ROT-200, OMNI-5120), 10 (ROT-5120, ROT-500, PERM, MANY)] • Meta-Experience Replay (Algorithm 6) – learning rate (α): [0.01, 0.03 (ROT-5120, PERM-5120, PERM-500, MANY-5120), 0.1 (ROT-500, ROT-200, PERM-200, MANY-500)] – meta-learning rate (γ): [0.03 (ROT-500, ROT-200, PERM-200, MANY-500), 0.1 (ROT-5120, PERM-5120, MANY-5120), 0.3 (PERM-500), 0.6, 1.0] – batch size (k-1): [5 (PERM-200, MANY-500), 10 (ROT-500, PERM-500) 25 (ROT- 200, MANY-5120), 50 (PERM-5120), 100 (ROT-5120), 250] – number of batches per example: 1 • Meta-Experience Replay (Algorithm 7) – learning rate (α): [0.01 (PERM-5120, PERM-500), 0.03 (ROT, PERM-200, MANY), 0.1] – within batch meta-learning rate (γ): [0.03 (ROT, MANY), 0.1 (PERM), 0.3, 1.0] – batch size (k-1): [5 (PERM-200, MANY-500), 10, 25 (PERM-500), 50 (ROT-200, ROT-500, MANY-5120), 100 (ROT-5120, PERM-5120)] – current example learning rate multiplier (s): [1, 2 (PERM-200), 5 (ROT), 10 (PERM- 5120, PERM-500, MANY)] K FORWARD TRANSFER AND INTERFERENCE Forward transfer was a metric defined in Lopez-Paz & Ranzato (2017) based on the average increased performance on a task relative to performance at random initialization before training on that task. Unfortunately, this metric does not make much sense for tasks like MNIST Permutations where inputs are totally uncorrelated across tasks or Omniglot where outputs are totally uncorrelated across tasks. As such, we only provide performance for this metric on MNIST Rotations in Table 5. L ABLATION EXPERIMENTS We plot our detailed ablation results in Table 6. In order to consider a version of GEM that uses reservoir sampling, we maintain our buffer the same way that we do for experience replay and MER. 
We consider everything in the buffer to be old data and solve the GEM quadratic program so that the loss is not increased on this data. We found that considering the task level gradient directions did not lead to improvements. M REPRODUCIBILITY OF RESULTS While the results so far have provided substantial evidence of the benefits of MER for continual learning, one potential concern with our experimental protocol in Appendix J.1 is that the larger hyperparameter search space used for MER may artificially inflate improvements given typical run to run variation. To validate that this is not the case, we have run extensive additional experiments in this section to see how the model performs across different random seeds and machines. The codebase presents some inherent stochasticity across runs. As such, in Tables 7, 8, and 9 we report two levels of generalization for a set of hyperparameters beyond the configuration of an individual run. In the Same Seeds column, we report the results for the original 5 model seeds (0-4) deployed on different machines. In the Different Seeds column, we report the results for a different 25 model seeds (5-29) also deployed on different machines. In all cases, we see that there are quantitative differences generalizing across seeds and machines. However, new settings do not always result in lower performance. Additionally, the differences are not qualitative in nature. In fact, in every setting we come to approximately the same qualitative conclusions about how each model performs. N CONTINUAL REINFORCEMENT LEARNING We detail the application of MER to deep Q-learning in algorithm 8, using notation from Mnih et al. (2015). Algorithm 8 Deep Q-learning with Meta-Experience Replay (MER) procedure DQN-MER(env, frameLimit, θ, α, β, γ, steps, k, EQ) // Initialize action-value function Q with parameters θ: Q← Q(θ) // Initialize action-value function Q̂ with the same parameters θ̂ = θ: Q̂← Q̂(θ̂) = Q̂(θ) // Initialize experience replay buffer: M ← {} M.age← 0 while M.age ≤ frameLimit do // Begin new episode: env.reset() // Initialize the s state with the initial observation: while episode not done do // Select with probability p an action a from set of possible actions: a = { random selection of action â p ≤ arg maxa′ Q(st, a ′; θ) p > // Perform the action a in the environment: s′, rt ← env.step(s, a) // Store current transition with reward r: M ←M ∪ {(s, a, r, s′)} (algorithm 3) B1
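For concreteness, the action selection in algorithm 8 follows the standard ε-greedy policy of Mnih et al. (2015); below is a minimal sketch of that step, assuming a PyTorch Q-network, with q_net, state, epsilon, and env as illustrative names rather than the exact implementation.

import random
import torch

def select_action(q_net, state, epsilon, num_actions):
    # Epsilon-greedy rule from algorithm 8: explore with probability epsilon,
    # otherwise act greedily with respect to the current Q estimates.
    if random.random() <= epsilon:
        return random.randrange(num_actions)
    with torch.no_grad():
        q_values = q_net(state.unsqueeze(0))   # assumes state is a torch.Tensor
    return int(q_values.argmax(dim=1).item())

# Inside the episode loop of algorithm 8, the observed transition is then stored
# with reservoir sampling (algorithm 3, reservoir_update in the earlier sketch)
# before the MER update is applied to the Q-network parameters:
#   next_state, reward = env.step(state, action)
#   memory = reservoir_update(memory, n_seen, (state, action, reward, next_state), max_size)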
1. What is the main contribution of the paper in the field of continual learning? 2. What are the strengths of the proposed algorithm, particularly in its ability to balance catastrophic forgetting and positive transfer? 3. How effective is the use of experience replay in the proposed algorithm, and how does it contribute to its performance? 4. What are some potential limitations or areas for improvement in the proposed algorithm, such as sensitivity to hyperparameters or adaptability to different task differences? 5. How does the paper's framework for trading off catastrophic forgetting against positive transfer contribute to the broader understanding of continual learning?
Review
Review
The authors frame continual learning as a meta-learning problem that balances catastrophic forgetting against the capacity to learn new tasks. They propose an algorithm (MER) that combines a meta-learner (Reptile) with experience replay for continual learning. MER is evaluated on variants of MNIST (Permutations, Rotations, Many) and Omniglot against GEM and EWC. It is further tested in two reinforcement learning environments, Catcher and FlappyBird. In all cases, MER exhibits significant gains in terms of average retained accuracy.

Pros
The paper is well structured and generally well written. The argument is both easy to follow and persuasive. In particular, the proposed framework for trading off catastrophic forgetting against positive transfer is enlightening and should be of interest to the community. While the idea of aligning gradients across tasks has been proposed before (Lopez-Paz & Ranzato, 2017), the authors make a non-trivial connection to Reptile that allows them to achieve the same goal in a surprisingly simple algorithm. That the algorithm does not require tasks to be identified makes it widely applicable, and the reported results are convincing. The authors have taken considerable care to tease out various effects, such as how MER responds to the degree of non-stationarity in the data, as well as the buffer size. I’m particularly impressed that MER can achieve such high retention rates using only a buffer size of 200. Given that multiple batches are sampled from the buffer for every input from the current task, I’m surprised MER doesn’t suffer from overfitting. How does the train-test accuracy gap change as the buffer size varies? The paper is further strengthened by empirically verifying that MER indeed does lead to gradient alignment across tasks, and by an ablation study delineating the contribution from the ER strategy and the contribution from including Reptile. Notably, just using ER outperforms previous methods, and for a sufficiently large buffer size, ER is almost equivalent to MER. This is not surprising given that, in practice, the difference between MER and ER is an additional decay rate (γ) applied to gradients from previous batches.

Cons
I would welcome a more thorough ablation study to measure the difference between ER and MER. In particular, how sensitive is MER to changes in γ? And could ER plus an adaptive optimizer (e.g. Adam) emulate the effect of γ and perform on par with MER? Similarly, given that DQN already uses ER, it would be valuable to report how a DQN with reservoir sampling performs. I am not entirely convinced, though, that MER maximizes forward transfer. It turns continual learning into multi-task learning, and if the new task is sufficiently different from previous tasks, MER’s ability to learn the current task would be impaired. The paper only reports average retained accuracy, so the empirical support for the claim is ambiguous. The FlappyBird experiment could be improved. As tasks are defined by making the gap between pipes smaller, a good policy for task t is a good policy for task t-1 as well, so the trade-off between backward and forward transfer that motivates MER does not arise. Further, since the baseline DQN never finds a good policy, it is essentially a pseudo-random baseline. I suspect the only reason DQN+MER learns to play the game is because it keeps "easy" experiences with a lot of signal in the buffer for a longer period of time.
That both the baseline and DQN+MER seem to unlearn tasks 5 and 6 suggests further calibration might be needed.
ICLR
Title Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference Abstract Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely.1 We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller. 1 SOLVING THE CONTINUAL LEARNING PROBLEM A long-held goal of AI is to build agents capable of operating autonomously for long periods. Such agents must incrementally learn and adapt to a changing environment while maintaining memories of what they have learned before, a setting known as lifelong learning (Thrun, 1994; 1996).
In this paper we explore a variant called continual learning (Ring, 1994). In continual learning we assume that the learner is exposed to a sequence of tasks, where each task is a sequence of experiences from the same distribution (see Appendix A for details). We would like to develop a solution in this setting by discovering notions of tasks without supervision while learning incrementally after every experience. This is challenging because in standard offline single task and multi-task learning (Caruana, 1997) it is implicitly assumed that the data is drawn from an i.i.d. stationary distribution. Unfortunately, neural networks tend to struggle whenever this is not the case (Goodrich, 2015). Over the years, solutions to the continual learning problem have been largely driven by prominent conceptualizations of the issues faced by neural networks. One popular view is catastrophic forgetting (interference) (McCloskey & Cohen, 1989), in which the primary concern is the lack of stability in neural networks, and the main solution is to limit the extent of weight sharing across experiences by focusing on preserving past knowledge (Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017). Another popular and more complex conceptualization is the stability-plasticity dilemma (Carpenter & Grossberg, 1987). In this view, the primary concern is the balance between network stability (to preserve past knowledge) and plasticity (to rapidly learn the current experience). For example, these techniques focus on balancing limited weight sharing with some mechanism to ensure fast learning (Li & Hoiem, 2016; Riemer et al., 2016a; Lopez-Paz & Ranzato, 2017; Rosenbaum et al., 2018; Lee et al., 2018; Serrà et al., 2018). In this paper, we extend this view by noting that for continual learning over an unbounded number of distributions, we need to consider weight sharing and the stability-plasticity trade-off in both the forward and backward directions in time (Figure 1A). The transfer-interference trade-off proposed in this paper (section 2) presents a novel perspective on the goal of gradient alignment for the continual learning problem. This is right at the heart of the problem as these gradients are the update steps for SGD based optimizers during learning and there is a clear connection between gradient angles and managing the extent of weight sharing. The key difference in perspective with past conceptualizations of continual learning is that we are not just concerned with current transfer and interference with respect to past examples, but also with the dynamics of transfer and interference moving forward as we learn. Other approaches have certainly explored operational notions of transfer and interference in forward and backward directions (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2018), the link to weight sharing (French, 1991; Ajemian et al., 2013), and the idea of influencing gradient alignment for continual learning before (Lopez-Paz & Ranzato, 2017). However, in past work, ad hoc changes have been made to the dynamics of weight sharing based on current learning and past learning without formulating a consistent theory about the optimal weight sharing dynamics. 1We consider task agnostic future gradients, referring to gradients of the model parameters with respect to unseen data points. These can be drawn from tasks that have already been partially learned or unseen tasks.
This new view of the problem leads to a natural meta-learning (Schmidhuber, 1987) perspective on continual learning: we would like to learn to modify our learning to affect the dynamics of transfer and interference in a general sense. To the extent that our meta-learning into the future generalizes, this should make it easier for our model to perform continual learning in non-stationary settings. We achieve this by building off past work on experience replay (Murre, 1992; Lin, 1992; Robins, 1995) that has been a mainstay for solving non-stationary problems with neural networks. We propose a novel meta-experience replay (MER) algorithm that combines experience replay with optimization based meta-learning (section 3) as a first step towards modeling this perspective. Moreover, our experiments (sections 4, 5, and 6), confirm our theory. MER shows great promise across a variety of supervised continual learning and continual reinforcement learning settings. Critically, our approach is not reliant on any provided notion of tasks and in most of the settings we explore we must detect the concept of tasks without supervision. See Appendix B for a more detailed positioning with respect to related research. 2 THE TRANSFER-INTERFERENCE TRADE-OFF FOR CONTINUAL LEARNING At an instant in time with parameters θ and loss L, we can define2 operational measures of transfer and interference between two arbitrary distinct examples (xi, yi) and (xj , yj) while training with 2Throughout the paper we discuss ideas in terms of the supervised learning problem formulation. Extensions to the reinforcement learning formulation are straightforward. We provide more details in Appendix N. SGD. Transfer occurs when: ∂L(xi, yi) ∂θ · ∂L(xj , yj) ∂θ > 0, (1) where · is the dot product operator. This implies that learning example i will without repetition improve performance on example j and vice versa (Figure 1B). Interference occurs when: ∂L(xi, yi) ∂θ · ∂L(xj , yj) ∂θ < 0. (2) Here, in contrast, learning example i will lead to unlearning (i.e. forgetting) of example j and vice versa (Figure 1C). 3 There is weight sharing between i and j when they are learned using an overlapping set of parameters. So, potential for transfer is maximized when weight sharing is maximized while potential for interference is minimized when weight sharing is minimized (Appendix C). Past solutions for the stability-plasticity dilemma in continual learning operate in a simplified temporal context where learning is divided into two phases: all past experiences are lumped together as old memories and the data currently being learned qualifies as new learning. In this setting, the goal is to simply minimize the interference projecting backward in time, which is generally achieved by reducing the degree of weight sharing explicitly or implicitly. In Appendix D we explain how our baseline approaches (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017) fit within this paradigm. The important issue with this perspective, however, is that the system still has learning to do and what the future may bring is largely unknown. This makes it incumbent upon us to do nothing to potentially undermine the networks ability to effectively learn in an uncertain future. This consideration makes us extend the temporal horizon of the stability-plasticity problem forward, turning it, more generally, into a continual learning problem that we label as solving the Transfer-Interference Trade-off (Figure 1A). 
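As a concrete illustration of the operational measures in equations 1 and 2, the gradient dot product between two examples can be computed directly; the sketch below assumes a PyTorch model and loss function, and all names are illustrative.

import torch

def flat_grad(model, loss_fn, x, y):
    # Gradient of the loss at a single example, flattened across all parameters.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def gradient_dot(model, loss_fn, example_i, example_j):
    # Positive: transfer (equation 1). Negative: interference (equation 2).
    g_i = flat_grad(model, loss_fn, *example_i)
    g_j = flat_grad(model, loss_fn, *example_j)
    return torch.dot(g_i, g_j).item()

A value near zero corresponds to the no-weight-sharing case discussed in Appendix C, where the two examples are learned with effectively disjoint parameters.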
Specifically, it is important not only to reduce backward interference from our current point in time, but we must do so in a manner that does not limit our ability to learn in the future. This more general perspective acknowledges a subtlety in the problem: the issue of gradient alignment and thus weight sharing across examples arises both backward and forward in time. With this temporally symmetric perspective, the transfer-interference trade-off becomes clear. Here we propose a potential solution where we learn to learn in a way that promotes gradient alignment at each point in time. The weight sharing across examples that enables transfer to improve future performance must not disrupt performance on what has come previously. As such, our work adopts a meta-learning perspective on the continual learning problem. We would like to learn to learn each example in a way that generalizes to other examples from the overall distribution. 3 A SYSTEM FOR LEARNING TO LEARN WITHOUT FORGETTING In typical offline supervised learning, we can express our optimization objective over the stationary distribution of x, y pairs within the dataset D: θ = argmin θ E(x,y)∼D[L(x, y)], (3) where L is the loss function, which can be selected to fit the problem. If we would like to maximize transfer and minimize interference, we can imagine it would be useful to add an auxiliary loss to the objective to bias the learning process in that direction. Considering equations 1 and 2, one obviously beneficial choice would be to also directly consider the gradients with respect to the loss function evaluated at randomly chosen datapoints. If we could maximize the dot products between gradients at these different points, it would directly encourage the network to share parameters where gradient directions align and keep parameters separate where interference is caused by gradients in opposite directions. So, ideally we would like to optimize for the following objective 4: θ = argmin θ E[(xi,yi),(xj ,yj)]∼D[L(xi, yi) + L(xj , yj)− α ∂L(xi, yi) ∂θ · ∂L(xj , yj) ∂θ ], (4) 3We borrow our terminology from operational measures of forward transfer and backward transfer in LopezPaz & Ranzato (2017), but adopt a temporally symmetric view of the phenomenon by dropping the specification of direction. Interference commonly refers to negative transfer in either direction in the literature. 4The inclusion of L(xj , yj) is largely an arbitrary notation choice as the relative prioritization of the two types of terms can be absorbed in α. We use this notation as it is most consistant with our implementation. Algorithm 1 Meta-Experience Replay (MER) procedure TRAIN(D, θ, α, β, γ, s, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batches from buffer: B1, ..., Bs ← sample(x, y, s, k,M) θA0 ← θ for i = 1, ..., s do θWi,0 ← θ for j = 1, ..., k do xc, yc ← Bi[j] θWi,j ← SGD(xc, yc, θWi,j−1, α) end for // Within batch Reptile meta-update: θ ← θWi,0 + β(θWi,k − θWi,0) θAi ← θ end for // Across batch Reptile meta-update: θ ← θA0 + γ(θAs − θA0 ) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure where (xi, yi) and (xj , yj) are randomly sampled unique data points. We will attempt to design a continual learning system that optimizes for this objective. However, there are multiple problems that must be addressed to implement this kind of learning process in practice. The first problem is that continual learning deals with learning over a non-stationary stream of data. 
We address this by implementing an experience replay module that augments online learning so that we can approximately optimize over the stationary distribution of all examples seen so far. Another practical problem is that the gradients of this loss depend on the second derivative of the loss function, which is expensive to compute. We address this by indirectly approximating the objective to a first order Taylor expansion using a meta-learning algorithm with minimal computational overhead. 3.1 EXPERIENCE REPLAY Learning objective: The continual lifelong learning setting poses a challenge for the optimization of neural networks as examples come one by one in a non-stationary stream. Instead, we would like our network to optimize over the stationary distribution of all examples seen so far. Experience replay (Lin, 1992; Murre, 1992) is an old technique that remains a central component of deep learning systems attempting to learn in non-stationary settings, and we will adopt here conventions from recent work (Zhang & Sutton, 2017; Riemer et al., 2017b) leveraging this approach. The central feature of experience replay is keeping a memory of examples seen M that is interleaved with the training of the current example with the goal of making training more stable. As a result, experience replay approximates the objective in equation 3 to the extent that M approximates D: θ = argmin θ E(x,y)∼M [L(x, y)], (5) M has a current size Msize and maximum size Mmax. In our work, we update the buffer with reservoir sampling (Appendix F). This ensures that at every time-step the probability that any of the N examples seen has of being in the buffer is equal to Msize/N . The content of the buffer resembles a stationary distribution over all examples seen to the extent that the items stored captures the variation of past examples. Following the standard practice in offline learning, we train by randomly sampling a batch B from the distribution captured by M . Prioritizing the current example: the variant of experience replay we explore differs from offline learning in that the current example has a special role ensuring that it is always interleaved with the examples sampled from the replay buffer. This is because before we proceed to the next example, we want to make sure our algorithm has the ability to optimize for the current example (particularly if it is not added to the memory). Over N examples seen, this still implies that we have trained with each example as the current example with probability per step of 1/N . We provide algorithms further detailing how experience replay is used in this work in Appendix G (algorithms 4 and 5). Concerns about storing examples: Obviously, it is not scalable to store every experience seen in memory. As such, in this work we focus on showing that we can achieve greater performance than baseline techniques when each approach is provided with only a small memory buffer. 3.2 COMBINING EXPERIENCE REPLAY WITH OPTIMIZATION BASED META-LEARNING First order meta-learning: One of the most popular meta-learning algorithms to date is Model Agnostic Meta-Learning (MAML) (Finn et al., 2017). MAML is an optimization based meta-learning algorithm with nice properties such as the ability to approximate any learning algorithm and the ability to generalize well to learning data outside of the previous distribution (Finn & Levine, 2017). One aspect of MAML that limits its scalability is the need to explicitly compute second derivatives. 
The authors proposed a variant called first-order MAML (FOMAML), which is defined by ignoring the second derivative terms to address this issue and surprisingly found that it achieved very similar performance. Recently, this phenomenon was explained by Nichol & Schulman (2018) who noted through Taylor expansion that the two algorithms were approximately optimizing for the same loss function. Nichol & Schulman (2018) also proposed an algorithm, Reptile, that efficiently optimizes for approximately the same objective while not requiring that the data be split into training and testing splits for each task learned as MAML does. Reptile is implemented by optimizing across s batches of data sequentially with an SGD based optimizer and learning rate α. After training on these batches, we take the initial parameters before training θ0 and update them to θ0 ← θ0 + β ∗ (θk − θ0) where β is the learning rate for the meta-learning update. The process repeats for each series of s batches (algorithm 2). Shown in terms of gradients in Nichol & Schulman (2018), Reptile approximately optimizes for the following objective over a set of s batches: θ = argmin θ EB1,...,Bs∼D[2 s∑ i=1 [L(Bi)− i−1∑ j=1 α ∂L(Bi) ∂θ · ∂L(Bj) ∂θ ]], (6) where B1, ..., Bs are batches within D. This is similar to our motivation in equation 4 to the extent that gradients produced on these batches approximate samples from the stationary distribution. The MER learning objective: In this work, we modify the Reptile algorithm to properly integrate it with an experience replay module, facilitating continual learning while maximizing transfer and minimizing interference. As we describe in more detail during the derivation in Appendix I, achieving the Reptile objective in an online setting where examples are provided sequentially is non-trivial and is in part only achievable because of our sampling strategies for both the buffer and batch. Following our remarks about experience replay from the prior section, this allows us to optimize for the following objective in a continual learning setting using our proposed MER algorithm: θ = argmin θ E[(x11,y11),...,(xsk,ysk)]∼M [2 s∑ i=1 k∑ j=1 [L(xij , yij)− i−1∑ q=1 j−1∑ r=1 α ∂L(xij , yij) ∂θ ·∂L(xqr, yqr) ∂θ ]]. (7) The MER algorithm: MER maintains an experience replay style memory M with reservoir sampling and at each time step draws s batches including k − 1 random samples from the buffer to be trained alongside the current example. Each of the k examples within each batch is treated as its own Reptile batch of size 1 with an inner loop Reptile meta-update after that batch is processed. We then apply the Reptile meta-update again in an outer loop across the s batches. We provide further details for MER in algorithm 1. This procedure approximates the objective of equation 7 when β = 1. The sample function produces s batches for updates. Each batch is created by first adding the current example and then interleaving k − 1 random examples from M . Controlling the degree of regularization: In light of our ideal objective in equation 4, we can see that using a SGD batch size of 1 has an advantage over larger batches because it allows for the second derivative information conveyed to the algorithm to be fine grained on the example level. Another reason to use sample level effective batches is that for a given number of samples drawn from the buffer, we maximize s from equation 6. 
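A compact Python sketch of algorithm 1 is given below, operating on a flat parameter vector for clarity; grad_fn is an illustrative stand-in for the per-example loss gradient, and interleaving the current example by shuffling is one reading of the sample function, so this is a sketch of the procedure rather than the released implementation.

import random
import numpy as np

def mer_update(theta, example, memory, n_seen, grad_fn, alpha, beta, gamma, s, k, max_size):
    # One Meta-Experience Replay step (algorithm 1) on a flat numpy parameter vector.
    theta_across = theta.copy()                                # theta_A0
    for _ in range(s):
        # Each batch: k - 1 buffered examples interleaved with the current example.
        batch = random.sample(memory, min(k - 1, len(memory))) + [example]
        random.shuffle(batch)
        theta_within = theta.copy()                            # theta_W_i0
        for (x, y) in batch:
            theta = theta - alpha * grad_fn(theta, x, y)       # per-example SGD step
        theta = theta_within + beta * (theta - theta_within)   # within-batch Reptile update
    theta = theta_across + gamma * (theta - theta_across)      # across-batch Reptile update
    # Reservoir-sampling buffer update (algorithm 3)
    if len(memory) < max_size:
        memory.append(example)
    else:
        j = random.randint(0, n_seen)
        if j < max_size:
            memory[j] = example
    return theta, memory

With beta set to 1 this reduces to the special case analyzed in Appendix I, where the effective gradient matches that of Reptile applied over one long sequence of per-example steps.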
In equation 6, the typical offline learning loss has a weighting proportional to s and the regularizer term to maximize transfer and minimize interference has a weighting proportional to αs(s−1)/2. This implies that by maximizing the effective s we can put more weight on the regularization term. We found that for a fixed number of examples drawn from M , we consistently performed better converting to a long list of individual samples than we did using proper batches as in Nichol & Schulman (2018) for few shot learning. Prioritizing current learning: To ensure strong regularization, we would like our number of batches processed in a Reptile update to be large – enough that experience replay alone would start to overfit to M . As such, we also need to make sure we provide enough priority to learning the current example, particularly because we may not store it in M . To achieve this in algorithm 1, we sample s separate batches from M that are processed sequentially and each interleaved with the current example. In Appendix H we also outline two additional variants of MER with very similar properties in that they effectively approximate for the same objective. In one we choose one big batch of size sk − s memories and s copies of the current example (algorithm 6). In the other, we choose one memory batch of size k−1 with a special current item learning rate of sα (algorithm 7). Unique properties: In the end, our approach amounts to a quite easy to implement and computationally efficient extension of SGD, which is applied to an experience replay buffer by leveraging the machinery of past work on optimization based meta-learning. However, the emergent regularization on learning is totally different than those previously considered. Past work on optimization based meta-learning has enabled fast learning on incoming data without considering past data. Meanwhile, past work on experience replay only focused on stabilizing learning by approximating stationary conditions without altering model parameters to change the dynamics of transfer and interference. 4 EVALUATION FOR SUPERVISED CONTINUAL LIFELONG LEARNING To test the efficacy of MER we compare it to relevant baselines for continual learning of many supervised tasks from Lopez-Paz & Ranzato (2017) (see Appendix D for in-depth descriptions): • Online: represents online learning performance of a model trained straightforwardly one example at a time on the incoming non-stationary training data by simply applying SGD. • Independent: an independent predictor per task with less hidden units proportional to the number of tasks. When useful, it can be initialized by cloning the last predictor. • Task Input: has the same architecture as Online, but with a dedicated input layer per task. • EWC: Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is an algorithm that modifies online learning where the loss is regularized to avoid catastrophic forgetting. • GEM: Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) is an approach for making efficient use of episodic storage by following gradients on incoming examples to the maximum extent while altering them so that they do not interfere with past memories. An independent adhoc analysis is performed to alter each incoming gradient. In contrast to MER, nothing generalizable is learned across examples about how to alter gradients. 
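As a point of reference for the GEM baseline, its gradient alteration can be illustrated in the single-constraint case, where the quadratic program has a closed-form solution; the sketch below is a simplified illustration rather than the released GEM code, and the full method keeps one such constraint per previous task (equation 8 in Appendix D).

import numpy as np

def gem_project_single(g_new, g_old, eps=1e-12):
    # If the incoming gradient does not conflict with the reference gradient,
    # it is used as is; otherwise the conflicting component is projected out,
    # so the constraint g_new . g_old >= 0 holds with equality after projection.
    dot = float(np.dot(g_new, g_old))
    if dot >= 0.0:
        return g_new
    return g_new - (dot / (np.dot(g_old, g_old) + eps)) * g_old

Unlike MER, this adjustment is recomputed independently for every incoming gradient; nothing is learned about how gradients should align in the future.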
We follow Lopez-Paz & Ranzato (2017) and consider final retained accuracy across all tasks after training sequentially on all tasks as our main metric for comparing approaches. Moving forward we will refer to this metric as retained accuracy (RA). In order to reveal more characteristics of the learning behavior, we also report the learning accuracy (LA) which is the average accuracy for each task directly after it is learned. Additionally, we report the backward transfer and interference (BTI) as the average change in accuracy from when a task is learned to the end of training. A highly negative BTI reflects catastrophic forgetting. Forward transfer and interference (Lopez-Paz & Ranzato, 2017) is only applicable for one task we explore, so we provide details in Appendix K. Question 1 How does MER perform on supervised continual learning benchmarks? To address this question we consider two continual learning benchmarks from Lopez-Paz & Ranzato (2017). MNIST Permutations is a variant of MNIST first proposed in Kirkpatrick et al. (2017) where each task is transformed by a fixed permutation of the MNIST pixels. As such, the input distribution of each task is unrelated. MNIST Rotations is another variant of MNIST proposed in Lopez-Paz & Ranzato (2017) where each task contains digits rotated by a fixed angle between 0 and 180 degrees. We follow the standard benchmark setting from Lopez-Paz & Ranzato (2017) using a modest memory buffer of size 5120 to learn 1000 sampled examples across each of 20 tasks. We provide detailed information about our architectures and hyperparameters in Appendix J. In Table 1 we report results on these benchmarks in comparison to our baseline approaches. Clearly GEM outperforms our other baselines, but our approach adds significant value over GEM in terms of retained accuracy on both benchmarks. MER achieves this by striking a superior balance between transfer and interference with respect to the past and future data. MER displays the best adaption to incoming tasks, while also providing very strong retention of knowledge when learning future tasks. EWC and using a task specific input layer both also lead to gains over standard online learning in terms of retained accuracy. However, they are quite far below the performance of approaches that make usage of episodic storage. While EWC does not store examples, in storing the Fisher information for each task it accrues more incremental resources than the episodic storage approaches. Question 2 How do the performance gains from MER vary as a function of the buffer size? To make progress towards the greater goals of lifelong learning, we would like our algorithm to make the most use of even a modest buffer. This is because in extremely large scale settings it is unrealistic to assume a system can store a large percentage of previous examples in memory. As such, we would like to compare MER to GEM, which is known to perform well with an extremely small memory buffer (Lopez-Paz & Ranzato, 2017). We consider a buffer size of 500, that is over 10 times smaller than the standard setting on these benchmarks. Additionally, we also consider a buffer size of 200, matching the smallest setting explored in Lopez-Paz & Ranzato (2017). This setting corresponds to an average storage of 1 example for each combination of task and class. We report our results in Table 2. The benefits of MER seem to grow as the buffer becomes smaller. In the smallest setting, MER provides more than a 10% boost in retained accuracy on both benchmarks. 
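For reference, the architectures behind these MNIST results and the Omniglot experiments that follow (Appendix J) can be written compactly; the sketch below is an approximation assuming PyTorch, with per-task output heads omitted and padding choices guessed where the text does not specify them.

import torch.nn as nn

def mnist_mlp(num_classes=10):
    # Two hidden layers of 100 units, as used for the MNIST-based benchmarks.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 100), nn.ReLU(),
        nn.Linear(100, 100), nn.ReLU(),
        nn.Linear(100, num_classes),
    )

def omniglot_cnn(num_classes):
    # Four modules of 3x3 convolution with 64 filters, ReLU, and 2x2 max-pooling,
    # followed by a fully connected softmax layer (28x28 grayscale inputs assumed).
    modules, in_channels = [], 1
    for _ in range(4):
        modules += [nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
                    nn.ReLU(), nn.MaxPool2d(2)]
        in_channels = 64
    # After four 2x2 poolings a 28x28 input is reduced to a 1x1 spatial map of 64 channels.
    return nn.Sequential(*modules, nn.Flatten(), nn.Linear(64, num_classes))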
Question 3 How effective is MER at dealing with increasingly non-stationary settings? Another larger goal of lifelong learning is to enable continual learning with only relatively few examples per task. This setting is particularly difficult because we have less data to characterize each class to learn from and our distribution is increasingly non-stationary over a fixed amount of training. We would like to explore how various models perform in this kind of setting. To do this we consider two new benchmarks. Many Permutations is a variant of MNIST Permutations that has 5 times more tasks (100 total) and 5 times less training examples per task (200 each). Meanwhile we also explore the Omniglot (Lake et al., 2011) benchmark treating each of the 50 alphabets to be a task (see Appendix J for experimental details). Following multi-task learning conventions, 90% of the data is used for training and 10% is used for testing (Yang & Hospedales, 2017). Overall there are 1623 characters. We learn each character and task sequentially with a task specific output layer. We report continual learning results using these new datasets in Table 3. The effect on Many Permutations of efficiently using episodic storage becomes even more pronounced when the setting becomes more non-stationary. GEM and MER both achieve nearly double the performance of EWC and online learning. We also see that increasingly non-stationary settings lead to a larger performance gain for MER over GEM. Gains are quite significant for Many Permutations and remarkable for Omniglot. Omniglot is even more non-stationary including slightly fewer examples per task and MER nearly quadruples the performance of baseline techniques. Considering the poor performance of online learning and EWC it is natural to question whether or not examples were learned in the first place. We experiment with using as many as 100 gradient descent steps per incoming example to ensure each example is learned when first seen. However, due to the extremely non-stationary setting no run of any variant we tried surpassed 5.5% retained accuracy. GEM also has major deficits for learning on Omniglot that are resolved by MER which achieves far better performance when it comes to quickly learning the current task. GEM maintains a buffer using a recent item based sampling strategy and thus can not deal with non-stationarity within the task nearly as well as reservoir sampling. Additionally, we found that the optimization based on the buffer was significantly less effective and less reliable as the quadratic program fails for many hyperparameter values that lead to non-positive definite matrices. Unfortunately, we could not get GEM to consistently converge on Omniglot for a memory size of 500 (significantly less than the number of classes), meanwhile MER handles it well. In fact, MER greatly outperforms GEM with an order of magnitude smaller buffer. We provide additional details about our experiments on Omniglot in Figure 3. We plot retained training accuracy, retained testing accuracy, and computation time for the entire training period using one CPU. We find that MER strikes the best balance of computational efficiency and performance even when using algorithm 1 for MER which performs more computation than algorithm 7. The computation involved in the GEM update does not scale well to large CNN models like those that are popular for Omniglot. 
MER is far better able to fit the training data than our baseline models while maintaining a computational efficiency closer to online update methods like EWC than GEM. 5 EVALUATION FOR CONTINUAL REINFORCEMENT LEARNING Question 4 Can MER improve a DQN with ER in continual reinforcement learning settings? We considered the evaluation of MER in a continual reinforcement learning setting where the environment is highly non-stationary. In order to produce these non-stationary environments in a controlled way suitable for our experimental purposes, we utilized arcade games provided by Tasfi (2016). Specifically, we used Catcher and Flappy Bird, two simple but interesting enough environments (see Appendix N.1 for details). For the purposes of our explanation, we will call each set of fixed game-dependent parameters a task5. The multi-task setting is then built by introducing changes in these parameters, resulting in non-stationarity across tasks. Each agent is once again evaluated based on its performance over time on all tasks. Our model uses a standard DQN model, developed for Atari (Mnih et al., 2015). See Appendix N.2 for implementation details. In Catcher, we then obtain different tasks by incrementally increasing the pellet velocity a total of 5 times during training. In Flappy Bird, the different tasks are obtained by incrementally reducing the separation between upper and lower pipes a total of 5 times during training. In Figure 4, we show the performance in Catcher when trained sequentially on 6 different tasks for 25k frames each to a maximum of 150k frames, evaluated at each point in time in all 6 tasks. Under these nonstationary conditions, a DQN using MER performs consistently better than the standard DQN with an experience replay buffer (see Appendix N.4 for further comments and ablation results). If we take as inspiration how humans perform, in the last stages of training we hope that a player that obtains good results in later tasks will also obtain good results in the first tasks, as the first tasks are subsumed in the latter ones. For example, in Catcher, the pellet moves faster in later tasks, and thus we expect to be able to do well on the first task. However, DQN forgets significantly how to get slowly moving pellets. In contrast, DQN-MER exhibits minimal or no forgetting after training on the rest of the tasks. This behavior is intuitive because we would expect transfer to happen naturally in this setting. We see similar behavior for Flappy Bird. DQN-MER becomes a Platinum player on the first task when it is learning the third task. This is a more difficult environment in which the pipe gap is noticeably smaller (see Appendix N.4). DQN-MER exhibits the kind of learning patterns expected from humans for these games, while a standard DQN struggles to generalize as the game changes and to retain knowledge over time. 6 FURTHER ANALYSIS OF THE APPROACH In this section we would like to dive deeper into how MER works. To do so we run additional detailed experiments across our three MNIST based continual learning benchmarks. Question 5 Does MER lead to a shift in the distribution of gradient dot products? 
We would like to directly verify that MER achieves our motivation in equation 7 and results in significant changes in the distribution of gradient dot products between new incoming examples and past examples over the course of learning when compared to experience replay (ER) from algorithm 5. (Footnote 5: Agents are not provided task information, forcing them to identify changes in game play on their own.) For these experiments, we maintain a history of all examples seen that is totally separate from our notion of memory buffers that only include a partial history of examples. Every time we receive a new example we use the current model to extract a gradient direction and we also randomly sample five examples from the previous history. We save the dot products of the incoming example gradient with these five past example gradients and consider the mean of the distribution of dot products seen over the course of learning for each model. We run this experiment on the best hyperparameter setting for both ER and MER from algorithm 6 with one batch per example for fair comparison. Each model is evaluated five times over the course of learning. We report means and standard deviations of the mean gradient dot product across runs in Table 4. We can thus verify that a very significant and reproducible difference in the mean gradient dot product is seen for MER in comparison to ER alone. This difference alters the learning process, making incoming examples on average result in slight transfer rather than significant interference. This analysis confirms the desired effect of the objective function in equation 7. For these tasks there are enough similarities that our meta-learning generalizes very well into the future. We should also expect it to perform well during drastic domain shifts like other meta-learning algorithms driven by SGD alone (Finn & Levine, 2017). Question 6 What components of MER are most important? We would like to further analyze our MER model to understand what components add the most value and when. We want to understand how powerful our proposed variants of ER are on their own and how much is added by adding meta-learning to ER. In Appendix L we provide detailed results considering ablated baselines for our experiments on the MNIST lifelong learning benchmarks.6 Our versions of ER consistently provide gains over GEM on their own, but the techniques perform very comparably when we also maintain GEM's buffer with reservoir sampling or use ER with a GEM-style buffer. Additionally, we see that adding meta-learning to ER consistently results in performance gains. In fact, meta-learning appears to provide increasing value for smaller buffers. In Appendix M, we provide further validation that our results are reproducible across runs and seeds. We would also like to compare the variants of MER proposed in algorithms 1, 6, and 7. Conceptually, algorithms 1 and 7 represent different mechanisms of increasing the importance of the current example in algorithm 6. We find that all variants of MER result in significant improvements over ER. Meanwhile, the variants that increase the importance of the current example see a further improvement in performance, performing quite comparably to each other. Overall, in our MNIST experiments algorithm 7 displays the best tradeoff of computational efficiency and performance. Finally, we conducted experiments demonstrating that adaptive optimizers like Adam and RMSProp cannot account for the gap between ER and MER. 
Particularly for smaller buffer sizes, these approaches overfit more on the buffer and actually hurt generalization in comparison to SGD. 7 CONCLUSION In this paper we have cast a new perspective on the problem of continual learning in terms of a fundamental trade-off between transfer and interference. Exploiting this perspective, we have in turn developed a new algorithm Meta-Experience Replay (MER) that is well suited for application to general purpose continual learning problems. We have demonstrated that MER regularizes the objective of experience replay so that gradients on incoming examples are more likely to have transfer and less likely to have interference with respect to past examples. The result is a general purpose solution to continual learning problems that outperforms strong baselines for both supervised continual learning benchmarks and continual learning in non-stationary reinforcement learning environments. Techniques for continual learning have been largely driven by different conceptualizations of the fundamental problem encountered by neural networks. We hope that the transfer-interference tradeoff can be a useful problem view for future work to exploit with MER as a first successful example. 6Code available at https://github.com/mattriemer/mer. ACKNOWLEDGMENTS We would like to thank Pouya Bashivan, Christopher Potts, Dan Jurafsky, and Joshua Greene for their input and support of this work. Additionally, we would like to thank Arslan Chaudhry and Marc’Aurelio Ranzato for their helpful comments and discussions. We also thank the three anonymous reviewers for their valuable suggestions. This research was supported by the MIT-IBM Watson AI Lab, and is based in part upon work supported by the Stanford Data Science Initiative and by the NSF under Grant No. BCS-1456077 and the NSF Award IIS-1514268. A CONTINUAL LEARNING PROBLEM FORMULATION In the classical offline supervised learning setting, a learning agent is given a fixed training data set D = {(xi, yi)}ni=1 of n samples, each containing an input feature vector xi ∈ X associated with the corresponding output (target, or label) yi ∈ Y; a common assumption is that the training samples are i.i.d. samples drawn from the same unknown joint probability distribution P (x, y). The learning task is often formulated as a function approximation problem, i.e. finding a function, or model, fθ(x) : X → Y from a given class of models (e.g., neural networks, decision trees, linear functions, etc.) where θ are the parameters estimated from data. Given a loss function L(fθ(x), y), the parameter estimation is formulated as an empirical risk minimization problem: minθ 1 |D| ∑ (xi,yi)∼D L(fθ(x), y). On the contrary, the online learning setting does not assume a fixed training dataset but rather a stream of data samples, where unlabeled feature vectors arrive one at a time, or in small minibatches, and the learner must assign labels to those inputs, receive the correct labels, and update the model accordingly, in iterative fashion. While classical online learning assumes i.i.d. samples, continual or lifelong learning does not make such an assumption, and requires a learning agent to handle non-stationary data streams. In this work, we define continual learning as online learning from a non-stationary input data stream, with a specific type of non-stationarity as defined below. 
Namely, we follow a commonly used setting to define non-stationary conditions for continual learning, dubbed locally i.i.d by Lopez-Paz & Ranzato (2017), where the agent learns over a sequence of separate stationary distributions one after another. We call the individual stationary distributions tasks, where each task tk is an online supervised learning problem associated with its own data probability distribution Pk(x, y). Namely, we are given a (potentially infinite) sequence (x1, y1, t1), ..., (xi, yi, ti), ..., (xi+j , yi+j , ti+j) While many continual learning methods assume the task descriptors tk are available to a learner, we are interested in developing approaches which do not have to rely on such information and can learn continuously without explicit announcement of the task change. Borrowing terminology from Chaudhry et al. (2018), we explore the single-headed setting in most of our experiments, which keeps learning a common function fθ across changing tasks. In contrast, multi-headed learning, which we consider for our Omniglot experiments, involves a separate final classification layer for each task. This makes more sense in case of Omniglot dataset, where the number of classes for each task varies considerably from task to task. We should also note that for Omniglot we consider a setting that is locally i.i.d. at the class level rather than the task level. B RELATION TO PAST WORK With regard to the continual learning setting specifically, other recent work has explored similar operational measures of transfer and interference. For example, the notions of Forward Transfer and Backward Transfer were explored in Lopez-Paz & Ranzato (2017). However, the approach of that work, GEM, was primarily concerned with solving the classic stability-plasticity dilemma (Carpenter & Grossberg, 1987) at a specific instance of time. Adjustments to gradients on the current data are made in an ad hoc manner solving a quadratic program separately for each example. In our work we try to learn a generalizable theory about weight sharing that can learn to influence the distribution of gradients not just in the past and present, but in the future as well. Additionally, in Chaudhry et al. (2018) similar ideas were explored with operational measures of intransigence (the inability to learn new data) and forgetting (the loss of previous performance). These measures are also intimately related to the stability-plasticity dilemma as intransigence is high when plasticity is low and forgetting is high when stability is low. The major distinction in the transfer-interference trade-off proposed in this work is that we aim to learn the optimal weight sharing scheme to optimize for the stability-plasticity dilemma with the hope that our learning about weight sharing will improve the stability and efficacy of learning on unseen data as well. With regard to the problem of weight-sharing in neural networks more generally, a host of different strategies have been proposed in the past to deal with the problems of catastrophic forgetting and/or the stability-plasticity dilemma (for review, see French (1999)). For example, one strategy for alleviating catastrophic forgetting is to make distributed representations less distributed – or semi-distributed (French, 1991) – for the case of past learning. Activation sharpening as introduced by French (1991) is a prominent example. 
A second strategy known as dual network models (McClelland et al., 1995; Ans & Rousset, 1997) is based on the neurobiological finding that both hippocampal and cortical circuits contributed differentially to memory. The cortical circuits are highly distributed with overlapping representations suitable for task generalization, while the more sparse hippocampal representations tend to be non-overlapping. The existence of dual circuits provides an extra degree of freedom for balancing the dual constraints of stability and plasticity. In a similar spirit, models have been proposed that have two classes of weights operating on two different timescales (Hinton & Plaut, 1987). A third strategy also motivated by neurobiological considerations is the use of latent synaptic dynamics (Fusi et al., 2005; Lahiri & Ganguli, 2013). Here the basic idea is that synaptic strength is determined by a multiple of variables, including latent ones not easily observed, operating at different timescales such that their net effect is to provide the system with additional degrees-of-freedom to store past experience without interfering with current learning. A fourth strategy is the use of feedback mechanisms to stabilize representations (Carpenter & Grossberg, 1987; Murre, 1992). In this class of models, a previously experienced memory will trigger top down feedback that prevents plasticity, while novel stimuli that experience no such feedback trigger plasticity. All of these approaches have their own strengths and weaknesses with respect to the stability-plasticity dilemma and, by extension, the transfer-interference trade-off we propose. Another relevant work is the POWERPLAY framework (Schmidhuber, 2004; 2013) which is a method for asymptotically optimal curriculum learning that by definition cannot forget previously learned skills. POWERPLAY also uses environment-independent replay of behavioral traces to avoid forgetting previous skills. However, POWERPLAY is orthogonal to our work as we consider a different setting where the agent cannot directly control the new tasks that will be encountered in the environment and thus must instead learn to adapt and react to non-stationarity conditions. In contrast to past work on meta-learning for few shot learning (Santoro et al., 2016; Vinyals et al., 2016; Ravi & Larochelle, 2016; Finn et al., 2017) and reinforcement learning across successive tasks (Al-Shedivat et al., 2018), we are not only trying to improve the speed of learning on new data, but also trying to do it in a way that preserves knowledge of past data and generalizes to future data. While past work has considered learning to influence gradient angles, so that there is more alignment and thus faster learning within a task, we focus on a setting where we would like to influence gradient angles from all tasks at all points in time. As our model aims to influence the dynamics of weight sharing, it bears conceptual similarity to mixtures of experts (Jacobs et al., 1991) style models for lifelong and multi-task learning (Misra et al., 2016; Riemer et al., 2016b; Aljundi et al., 2017; Fernando et al., 2017; Shazeer et al., 2017; Rosenbaum et al., 2018). MER implicitly affects the dynamics of weight sharing, but it is possible that combining it with mixtures of experts models could further amplify the ability for the model to control these dynamics. This is potentially an interesting avenue for future work. 
The options framework has also been considered as a solution to a similar continual RL setting to the one we explore (Mankowitz et al., 2018). Options formalize the notion of temporally abstraction actions in RL. Interestingly, generic architectures designed for shallow (Bacon et al., 2017) or deep (Riemer et al., 2018) hierarchies of options in essence learn very complex patterns of weight sharing over time. The option hierarchies constitute an explicit mechanism of controlling the extent of weight sharing for continual learning, allowing for orthogonalization of weights relating to different skills. In contrast, our work explores a method of implicitly optimizing weight sharing for continual learning that improves the efficacy of experience replay. MER should be simple to implement in concert with options based methods and combining the two is an interesting direction for future work. C THE CONNECTION BETWEEN WEIGHT SHARING AND THE TRANSFER-INTERFERENCE TRADE-OFF In this section we would like to generalize our interpretation of a large set of different weight sharing schemes including (Riemer et al., 2015; Bengio et al., 2015; Rosenbaum et al., 2018; Serrà et al., 2018) and how the concept of weight sharing impacts the dynamics of transfer (equation 1) and interference (equation 2). We will assume that we have a total parameter space θ that can be used by our network at any point in time. However, it is not a requirement that all parameters are actually used at all points in time. So, we can consider two specific instances in time. One where we receive data point (x1, y1) and leverage parameters θ1. Then, at the other instance in time, we receive data point (x2, y2) and leverage parameters θ2. θ1 and θ2 are both subsets of θ and critically the overlap between these subsets influences the possible extent of transfer and interference when training on either data point. First let us consider two extremes. In the first extreme imagine θ1 and θ2 are entirely nonoverlapping. As such ∂L(x1,y1)∂θ · ∂L(x2,y2) ∂θ = 0. On the positive side, this means that our solution has no potential for interference between the examples. On the other hand, there is no potential for transfer either. On the other extreme, we can imagine that θ1 = θ2. In this case, the potential for both transfer and interference is maximized as gradients with respect to every parameter have the possibility of a non-zero dot product with each other. From this discussion it is clear that both the extreme of full weight sharing and the extreme of no weight sharing have value depending on the relationship between data points. What we would really like for continual learning is to have a system that learns when to share weights and when not to on its own. To the extent that our learning about weight sharing generalizes, this should allow us to find an optimal solution to the transfer-interference trade-off. D FURTHER DESCRIPTIONS AND COMPARISONS WITH BASELINE ALGORITHMS Independent: originally reported in (Lopez-Paz & Ranzato, 2017) is the performance of an independent predictor per task which has the same architecture but with less hidden units proportional to the number of tasks. The independent predictor can be initialized randomly or clone the last trained predictor depending on what leads to better performance. 
EWC: Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is an algorithm that modifies online learning where the loss is regularized to avoid catastrophic forgetting by considering the importance of parameters in the model as measured by their fisher information. EWC follows the catastrophic forgetting view of the continual learning problem by promoting less sharing of parameters for new learning that were deemed to be important for performance on old memories. We utilize the code provided by Lopez-Paz & Ranzato (2017) in our experiments. The only difference in our setting is that we provide the model one example at a time to test true continual learning rather than providing a batch of 10 examples at a time. GEM: Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) is an algorithm meant to enhance the effectiveness of episodic storage based continual learning techniques by allowing the model to adapt to incoming examples using SGD as long as the gradients do not interfere with examples from each task stored in a memory buffer. If gradients interfere leading to a decrease in the performance of a past task, a quadratic program is used to solve for the closest gradient to the original that does not have negative gradient dot products with the aggregate memories from any previous tasks. GEM is known to achieve superior performance in comparison to other recently proposed techniques that use episodic storage like Rebuffi et al. (2017), making superior use of small memory buffer sizes. GEM follows similar motivation to our approach in that it also considers the intelligent use of gradient dot product information to improve the use case of supervised continual learning. As a result, it is a very strong and interesting baseline to compare with our approach. We modify the original code and benchmarks provided by Lopez-Paz & Ranzato (2017). Once again the only difference in our setting is that we provide the model one example at a time to test true continual learning rather than providing a batch of 10 examples at a time. We can consider the GEM algorithm as tailored to the stability-plasticity dilemma conceptualization of continual learning in that it looks to preserve performance on past tasks while allowing for maximal plasticity to the new task. To achieve this, GEM solves a quadratic program to find an approximate gradient gnew that closely matches ∂L(xnew,ynew) ∂θ while ensuring that the following constraint holds: gnew · ∂L(xold, yold) ∂θ > 0. (8) E REPTILE ALGORITHM We detail the standard Reptile algorithm from (Nichol & Schulman, 2018) in algorithm 2. The sample function randomly samples s batches of size k from dataset D. The SGD function applies min-batch stochastic gradient descent over a batch of data given a set of current parameters and learning rate. Algorithm 2 Reptile for Stationary Data procedure TRAIN(D, θ, α, β, s, k) while not done do // Draw batches from data: B1, ..., Bs ← sample(D, s, k) θ0 ← θ for i = 1, ..., s do θi ← SGD(Bi, θi−1, α) end for // Reptile meta-update: θ ← θ0 + β(θs − θ0) end while return θ end procedure F DETAILS ON RESERVOIR SAMPLING Throughout this paper we refer to updates to our memory M as M ←M ∪{(x, y)}. We would like to now provide details on how we update our memory buffer using reservoir sampling as outlined in Vitter (1985) (algorithm 3). Reservoir sampling solves the problem of keeping some limited number M of N total items seen before with equal probability MN when you don’t know what number N will be in advance. 
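As a concrete illustration of how this buffer maintenance composes with the interleaved replay update, here is a minimal Python sketch; the function names and the PyTorch-style training step are illustrative assumptions rather than the authors' released code. The paper's own pseudocode for the buffer update (algorithm 3) and the replay step (algorithms 4 and 5) follows below.

# A minimal sketch (assumed PyTorch-style model/optimizer, not the released code) of
# reservoir-sampling buffer updates and an interleaved experience-replay step.
import random
import torch

def reservoir_update(buffer, max_size, n_seen, example):
    # Keep any of the n_seen examples in the buffer with equal probability max_size / n_seen.
    if len(buffer) < max_size:
        buffer.append(example)
    else:
        j = random.randint(0, n_seen)  # inclusive draw, mirroring randomInteger in algorithm 3
        if j < max_size:
            buffer[j] = example

def er_step(model, optimizer, loss_fn, buffer, x, y, k):
    # Interleave the incoming example with k - 1 examples sampled from the buffer.
    batch = [(x, y)] + random.sample(buffer, min(k - 1, len(buffer)))
    xs = torch.stack([xb for xb, _ in batch])
    ys = torch.stack([yb for _, yb in batch])
    optimizer.zero_grad()
    loss_fn(model(xs), ys).backward()
    optimizer.step()

The single-headed and task-balanced multi-headed variants discussed next differ only in whether the sampled batch is split into per-task losses before the gradient step.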
The randomInteger function randomly draws an integer inclusively between the provided minimum and maximum values. Algorithm 3 Reservoir Sampling with Algorithm R procedure RESERVOIR(M,N, x, y) if M > N then M [N ]← (x, y) else j = randomInteger(min = 0,max = N) if j < M then M [j]← (x, y) end if end if return M end procedure G EXPERIENCE REPLAY ALGORITHMS We detail the our variant of the experience replay in algorithm 4. This procedure closely follows recent enhancements discussed in Zhang & Sutton (2017); Riemer et al. (2017b;a) The sample function randomly samples k − 1 examples from the memory buffer M and interleaves them with the current example to form a single size k batch. The SGD function applies mini-batch stochastic gradient descent over a batch of data given a set of current parameters and learning rate. Algorithm 4 Experience Replay (ER) with Reservoir Sampling procedure TRAIN(D, θ, α, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B ← sample(x, y, k,M) // Update parameters with mini-batch SGD: θ ← SGD(B, θ, α) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure Unfortunately, it is not straightforward to implement algorithm 4 in all circumstances. In particular, it depends whether the neural network architecture is single headed (sharing an output layer and output space among all tasks) or multi-headed (where each task gets its own unique output space). In multi-headed settings, it is common to consider the tasks in separate batches and to equally weight the sampled tasks during each update. This results in training the parameters evenly for each task and is particularly important so we pay equal attention to each set of task specific parameters. We detail an approach that separates tasks into sub-batches for a balanced update in algorithm 5. Here L is the loss given a set of parameters over a batch of data and SGD applies a mini-batch gradient descent update rule over a loss given a set of parameters and learning rate. Algorithm 5 Experience Replay (ER) with Tasks procedure TRAIN(D, θ, α, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B ← sample(x, y, k,M) // Compute balanced loss across tasks loss = 0.0 for task in B do loss = loss+ L(B[task], θ) end for // Update parameters with mini-batch SGD: θ ← SGD(loss, θ, α) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure Our experiments demonstrate that both variants of experience replay are very effective for continual learning. Meanwhile, each performs significantly better than the other on some datasets and settings. H THE VARIANTS OF MER We detail two additional variants of MER (algorithm 1) in algorithms 6 and 7. The sample function takes on a slightly different meaning in each variant of the algorithm. In algorithm 1 sample is used to produce s batches consisting of k − 1 random examples from the memory buffer and the current example. In algorithm 6 sample is used to produce one batch consisting of sk − s examples from the memory buffer and s copies of the current example. In algorithm 7 sample is used to produce one batch consisting of k − 1 examples from the memory buffer. In algorithm 6, sample places the current example at the end of the batch. Meanwhile, in algorithm 7, sample places the current example in a random location within the batch. 
In contrast, the SGD function carries a common meaning across algorithms, applying stochastic gradient descent over a particular input and output given a set of current parameters and learning rate. Algorithm 6 Meta-Experience Replay (MER) - One Big Batch procedure TRAIN(D, θ, α, γ, sk) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B ← sample(x, y, s, k,M) θ0 ← θ for i = 1, ..., sk do xc, yc ← Bi[j] θi ← SGD(xc, yc, θi−1, α) end for // Reptile meta-update: θ ← θ0 + γ(θsk − θ0) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure Algorithm 7 Meta-Experience Replay (MER) - Current Example Learning Rate procedure TRAIN(D, θ, α, γ, s, k) M ← {} for t = 1, ..., T do for (x, y) in Dt do // Draw batch from buffer: B, index← sample(k − 1,M) θ0 ← θ // SGD on individual samples from batch: for i = 1, ..., k − 1 do xc, yc ← Bi[j] if j = index // High learning rate SGD on current example: θk ← SGD(x, y, θk−1, sα) else θi ← SGD(xc, yc, θi−1, α) end for // Reptile meta-update: θ ← θ0 + γ(θk − θ0) // Reservoir sampling memory update: M ←M ∪ {(x, y)} (algorithm 3) end for end for return θ,M end procedure I DERIVING THE EFFECTIVE OBJECTIVE OF MER We would like to derive what objective Meta-Experience Replay (algorithm 1) approximates and show that it is approximately the same objective from algorithms 6 and 7. We follow conventions from Nichol & Schulman (2018) and first demonstrate what happens to the effective gradients computed by the algorithm in the most trivial case. As in Nichol & Schulman (2018), this allows us to extrapolate an effective gradient that is a function of the number of steps taken. We can then consider the effective loss function that results in this gradient. Before we begin, let us define the following terms from Nichol & Schulman (2018): gi = ∂L(θi) ∂θi (gradient obtained during SGD) (9) θi+1 = θi − αgi (sequence of parameter vectors) (10) ḡi = ∂L(θi) ∂θ0 (gradient at initial point) (11) gji = ∂L(θi) ∂θj (gradient evaluated at point i with respect to parameters j) (12) H̄i = ∂2L(θi) ∂θ20 (Hessian at initial point) (13) Hji = ∂2L(θi) ∂θ2j (Hessian evaluated at point i with respect to parameters j) (14) In Nichol & Schulman (2018) they consider the effective gradient across one loop of reptile with size k = 2. As we have both an outer loop of Reptile applied across batches and an inner loop applied within the batch to consider, we start with a setting where the number of batches s = 2 and the number of examples per batch k = 2. 
Let us recall from the original paper that the gradient of Reptile with k = 2 was:
$$g_{\text{Reptile},k=2,s=1} = g_0 + g_1 = \bar{g}_0 + \bar{g}_1 - \alpha \bar{H}_1 \bar{g}_0 + O(\alpha^2) \quad (15)$$
So, we can also consider the gradient of Reptile if we had 4 examples in one big batch (algorithm 6) as opposed to 2 batches of 2 examples:
$$g_{\text{Reptile},k=4,s=1} = g_0 + g_1 + g_2 + g_3 = \bar{g}_0 + \bar{g}_1 + \bar{g}_2 + \bar{g}_3 - \alpha \bar{H}_1 \bar{g}_0 - \alpha \bar{H}_2 \bar{g}_0 - \alpha \bar{H}_2 \bar{g}_1 - \alpha \bar{H}_3 \bar{g}_0 - \alpha \bar{H}_3 \bar{g}_1 - \alpha \bar{H}_3 \bar{g}_2 + O(\alpha^2) \quad (16)$$
Now we can consider the case for MER, where we define the parameter values as follows, extending algorithm 1, where A stands for across batches and W stands for within batches:
$$\theta_0 = \theta^A_0 = \theta^W_{00} \quad (17)$$
$$\theta^W_{01} = \theta^W_{00} - \alpha g_{00} \quad (18)$$
$$\theta^W_{02} = \theta^W_{01} - \alpha g_{01} \quad (19)$$
$$\theta^A_1 = \theta^A_0 + \beta \frac{\theta^W_{02} - \theta^A_0}{\alpha} = \theta_0 + \beta \frac{\theta^W_{02} - \theta_0}{\alpha} = \theta^W_{10} \quad (20)$$
$$\theta^W_{11} = \theta^W_{10} - \alpha g_{10} \quad (21)$$
$$\theta^W_{12} = \theta^W_{11} - \alpha g_{11} \quad (22)$$
$$\theta^A_2 = \theta^A_1 + \beta \frac{\theta^W_{12} - \theta^A_1}{\alpha} \quad (23)$$
$$\theta = \theta^A_0 + \gamma\beta \frac{\theta^A_2 - \theta^A_0}{\beta} = \theta^A_0 + \gamma(\theta^A_2 - \theta^A_0) \quad (24)$$
$g_{MER}$, the gradient of Meta-Experience Replay, can thus be defined analogously to the gradient of Reptile as:
$$g_{MER} = \frac{\theta^A_0 - \theta^A_2}{\beta} = \frac{\theta_0 - \theta^A_2}{\beta} \quad (25)$$
By simply applying Reptile from equation 15 we can derive the value of the parameters $\theta^A_1$ after updating with Reptile within the first batch in terms of the original parameters $\theta_0$:
$$\theta^A_1 = \theta_0 - \beta\bar{g}_{00} - \beta\bar{g}_{01} + \beta\alpha\bar{H}_{01}\bar{g}_{00} + O(\beta\alpha^2) \quad (26)$$
By substituting equation 26 into equation 23 we can see that:
$$\theta^A_2 = \theta_0 - \beta\bar{g}_{00} - \beta\bar{g}_{01} + \beta\alpha\bar{H}_{01}\bar{g}_{00} - \beta g_{10} - \beta g_{11} + O(\beta\alpha^2) \quad (27)$$
We can express $g_{10}$ in terms of the initial point by considering a Taylor expansion following the Reptile paper:
$$g_{10} = \bar{g}_{10} + \alpha\bar{H}_{10}(\theta^W_{10} - \theta_0) + O(\alpha^2) \quad (28)$$
Then substituting in for $\theta^W_{10}$ we express $g_{10}$ in terms of $\theta_0$:
$$g_{10} = \bar{g}_{10} - \alpha\beta\bar{H}_{10}\bar{g}_{00} - \alpha\beta\bar{H}_{10}\bar{g}_{01} + O(\alpha^2) \quad (29)$$
We can then rewrite $g_{11}$ by taking a Taylor expansion with respect to $\theta^W_{10}$:
$$g_{11} = g^{10}_{11} - \alpha H^{10}_{11} g_{10} + O(\alpha^2) \quad (30)$$
Taking another Taylor expansion we find that we can transform our expression for the Hessian:
$$H^{10}_{11} = \bar{H}_{11} + O(\alpha) \quad (31)$$
We can analogously also transform our expression for $g^{10}_{11}$:
$$g^{10}_{11} = \bar{g}_{11} + \alpha\bar{H}_{11}(\theta^W_{10} - \theta_0) + O(\alpha^2) \quad (32)$$
Substituting for $\theta^W_{10}$ in terms of $\theta_0$:
$$g^{10}_{11} = \bar{g}_{11} - \alpha\beta\bar{H}_{11}\bar{g}_{00} - \alpha\beta\bar{H}_{11}\bar{g}_{01} + O(\alpha^2) \quad (33)$$
We then substitute equation 31, equation 33, and equation 29 into equation 30:
$$g_{11} = \bar{g}_{11} - \alpha\beta\bar{H}_{11}\bar{g}_{00} - \alpha\beta\bar{H}_{11}\bar{g}_{01} - \alpha\bar{H}_{11}\bar{g}_{10} + O(\alpha^2) \quad (34)$$
Finally, we have all of the terms we need to express $\theta^A_2$ and we can then derive an expression for the MER gradient $g_{MER}$:
$$g_{MER} = \bar{g}_{00} + \bar{g}_{01} + \bar{g}_{10} + \bar{g}_{11} - \alpha\bar{H}_{01}\bar{g}_{00} - \alpha\bar{H}_{11}\bar{g}_{10} - \alpha\beta\bar{H}_{10}\bar{g}_{00} - \alpha\beta\bar{H}_{10}\bar{g}_{01} - \alpha\beta\bar{H}_{11}\bar{g}_{00} - \alpha\beta\bar{H}_{11}\bar{g}_{01} + O(\alpha^2) \quad (35)$$
This equation is quite interesting and very similar to equation 16. As we would like to approximate the same objective, we can remove one hyperparameter from our model by setting β = 1. This yields:
$$g_{MER} = \bar{g}_{00} + \bar{g}_{01} + \bar{g}_{10} + \bar{g}_{11} - \alpha\bar{H}_{01}\bar{g}_{00} - \alpha\bar{H}_{11}\bar{g}_{10} - \alpha\bar{H}_{10}\bar{g}_{00} - \alpha\bar{H}_{10}\bar{g}_{01} - \alpha\bar{H}_{11}\bar{g}_{00} - \alpha\bar{H}_{11}\bar{g}_{01} + O(\alpha^2) \quad (36)$$
Indeed, with β set equal to 1, we have shown that the gradient of MER is the same as one loop of Reptile with a number of steps equal to the total number of examples in all batches of MER (algorithm 6), provided the current example is mixed in with the same proportion. If we include the current example for s of the sk examples in our meta-replay batch, it gets the same overall priority in both cases, which is s times larger than that of a random example drawn from the buffer. As such, we can also optimize an equivalent gradient using algorithm 7 because it uses a factor s to increase the priority of the gradient given to the current example. While β = 1 is an interesting special case of MER in algorithm 1, in general we find it can be useful to set β to be a value smaller than 1. 
In fact, in our experiments we consider the case when β is smaller than 1 and γ = 1. The success of this approach makes sense because the higher order terms in the Taylor expansion that reflect the mismatch between parameters across replay batches disturb the learning process. By setting β to a value below 1 we can reduce our comparative weighting on promoting inter-batch gradient similarities rather than intra-batch gradient similarities. It was noted in (Nichol & Schulman, 2018) that the following equality holds if the examples and order are random:
$$\mathbb{E}[\bar{H}_1\bar{g}_0] = \mathbb{E}[\bar{H}_0\bar{g}_1] = \frac{1}{2}\mathbb{E}\left[\frac{\partial}{\partial\theta_0}(\bar{g}_0 \cdot \bar{g}_1)\right] \quad (37)$$
In our work, to make sure this equality holds in an online setting, we must take multiple precautions as noted in the main text. The issue is that examples are received in a non-stationary sequence, so when applied in a continual learning setting the order is not totally random or arbitrary as in the original Reptile work. We address this by maintaining our buffer using reservoir sampling, which ensures that any example seen before has a probability 1/N of being a particular element in the buffer. We also randomly select over these elements to form a batch. As this makes the order largely arbitrary to the extent that our buffer includes all examples seen, we are approximating the random offline setting from the original Reptile paper. As such we can view the gradients in equation 16 and equation 36 as leading to approximately the following objective function:
$$\theta = \arg\min_{\theta} \; \mathbb{E}_{(x_{11},y_{11}),\ldots,(x_{sk},y_{sk}) \sim M}\left[2\sum_{i=1}^{s}\sum_{j=1}^{k}\left[L(x_{ij}, y_{ij}) - \sum_{q=1}^{i-1}\sum_{r=1}^{j-1}\alpha\,\frac{\partial L(x_{ij}, y_{ij})}{\partial \theta}\cdot\frac{\partial L(x_{qr}, y_{qr})}{\partial \theta}\right]\right]. \quad (38)$$
This is precisely equation 7 in the main text. J SUPERVISED CONTINUAL LIFELONG LEARNING For the supervised continual learning benchmarks leveraging MNIST Rotations and MNIST Permutations, following conventions, we use a two layer MLP architecture for all models with 100 hidden units in each layer. We also model our hyperparameter search after Lopez-Paz & Ranzato (2017) while providing statistics for each model across 5 random seeds. For Omniglot, following Vinyals et al. (2016) we scale the images to 28x28 and use an architecture that consists of a stack of 4 modules before a fully connected softmax layer. Each module includes a 3x3 convolution with 64 filters, a ReLU non-linearity and 2x2 max-pooling. J.1 HYPERPARAMETER SEARCH Here we report the hyper-parameter grids that we searched over in our experiments. We note in parentheses the best values for MNIST Rotations (ROT) at each buffer size (ROT-5120, ROT-500, ROT-200), MNIST Permutations (PERM) at each buffer size (PERM-5120, PERM-500, PERM-200), Many Permutations (MANY) at each buffer size (MANY-5120, MANY-500), and Omniglot (OMNI) at each buffer size (OMNI-5120, OMNI-500). 
• Online Learning – learning rate: [0.0001, 0.0003 (ROT), 0.001, 0.003 (PERM, MANY), 0.01, 0.03, 0.1 (OMNI)] • Independent Model Per Task – learning rate: [0.0001, 0.0003, 0.001, 0.003, 0.01 (ROT, PERM, MANY), 0.03, 0.1] • Task Specific Input Layer – learning rate: [0.0001, 0.0003, 0.001, 0.003, 0.01 (ROT, PERM), 0.03, 0.1] • EWC – learning rate: [0.001 (ROT, OMNI), 0.003 (MANY), 0.01 (PERM), 0.03, 0.1, 0.3, 1.0] – regularization: [1 (MANY), 3, 10 (PERM, OMNI), 30, 100 (ROT), 300, 1000, 3000, 10000, 30000] • GEM – learning rate: [0.001, 0.003 (MANY-500), 0.01 (ROT, PERM, OMNI, MANY-5120), 0.03, 0.1, 0.3, 1.0] – memory strength (γ): [0.0 (ROT-500, ROT-200, PERM-200, MANY-5120), 0.1 (MANY-500), 0.5 (OMNI), 1.0 (ROT-5120, PERM-5120, PERM-500)] • Experience Replay (Algorithm 4) – learning rate: [0.00003, 0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1 (ROT, PERM, MANY)] – batch size (k-1): [5 (ROT-500), 10 (ROT-200, PERM-500, PERM-200), 25 (ROT- 5120, PERM-5120, MANY), 50, 100, 250] • Experience Replay (Algorithm 5) – learning rate: [0.00003, 0.0001, 0.0003, 0.001, 0.003 (MANY-5120), 0.01 (ROT-500, ROT-200, PERM, MANY-500), 0.03 (ROT-5120), 0.1] – batch size (k-1): [5 (MANY-500), 10 (PERM-200, MANY-5120), 25 (PERM-5120, PERM-500), 50 (ROT-200), 100 (ROT-5120, ROT-500), 250] • Meta-Experience Replay (Algorithm 1) – learning rate (α): [0.01 (OMNI-5120), 0.03 (ROT-5120, PERM, MANY-500), 0.1 (ROT-500, ROT-200, OMNI-500)] – across batch meta-learning rate (γ): 1.0 – within batch meta-learning rate (β): [0.01 (ROT-500, ROT-200, MANY-5120), 0.03 (ROT-5120, PERM, MANY-500), 0.1, 0.3, 1.0 (OMNI)] – batch size (k-1): [5 (MANY, OMNI-500), 10 (ROT-500, ROT-200, PERM-200), 25 (PERM-500, OMNI-5120), 50, 100 (ROT-5120, PERM-5120)] – number of batches per example: [1, 2 (OMNI-500), 5 (ROT-200, OMNI-5120), 10 (ROT-5120, ROT-500, PERM, MANY)] • Meta-Experience Replay (Algorithm 6) – learning rate (α): [0.01, 0.03 (ROT-5120, PERM-5120, PERM-500, MANY-5120), 0.1 (ROT-500, ROT-200, PERM-200, MANY-500)] – meta-learning rate (γ): [0.03 (ROT-500, ROT-200, PERM-200, MANY-500), 0.1 (ROT-5120, PERM-5120, MANY-5120), 0.3 (PERM-500), 0.6, 1.0] – batch size (k-1): [5 (PERM-200, MANY-500), 10 (ROT-500, PERM-500) 25 (ROT- 200, MANY-5120), 50 (PERM-5120), 100 (ROT-5120), 250] – number of batches per example: 1 • Meta-Experience Replay (Algorithm 7) – learning rate (α): [0.01 (PERM-5120, PERM-500), 0.03 (ROT, PERM-200, MANY), 0.1] – within batch meta-learning rate (γ): [0.03 (ROT, MANY), 0.1 (PERM), 0.3, 1.0] – batch size (k-1): [5 (PERM-200, MANY-500), 10, 25 (PERM-500), 50 (ROT-200, ROT-500, MANY-5120), 100 (ROT-5120, PERM-5120)] – current example learning rate multiplier (s): [1, 2 (PERM-200), 5 (ROT), 10 (PERM- 5120, PERM-500, MANY)] K FORWARD TRANSFER AND INTERFERENCE Forward transfer was a metric defined in Lopez-Paz & Ranzato (2017) based on the average increased performance on a task relative to performance at random initialization before training on that task. Unfortunately, this metric does not make much sense for tasks like MNIST Permutations where inputs are totally uncorrelated across tasks or Omniglot where outputs are totally uncorrelated across tasks. As such, we only provide performance for this metric on MNIST Rotations in Table 5. L ABLATION EXPERIMENTS We plot our detailed ablation results in Table 6. In order to consider a version of GEM that uses reservoir sampling, we maintain our buffer the same way that we do for experience replay and MER. 
We consider everything in the buffer to be old data and solve the GEM quadratic program so that the loss is not increased on this data. We found that considering the task level gradient directions did not lead to improvements. M REPRODUCIBILITY OF RESULTS While the results so far have provided substantial evidence of the benefits of MER for continual learning, one potential concern with our experimental protocol in Appendix J.1 is that the larger hyperparameter search space used for MER may artificially inflate improvements given typical run-to-run variation. To validate that this is not the case, we have run extensive additional experiments in this section to see how the model performs across different random seeds and machines. The codebase presents some inherent stochasticity across runs. As such, in Tables 7, 8, and 9 we report two levels of generalization for a set of hyperparameters beyond the configuration of an individual run. In the Same Seeds column, we report the results for the original 5 model seeds (0-4) deployed on different machines. In the Different Seeds column, we report the results for a different 25 model seeds (5-29) also deployed on different machines. In all cases, we see that there are quantitative differences generalizing across seeds and machines. However, new settings do not always result in lower performance. Additionally, the differences are not qualitative in nature. In fact, in every setting we come to approximately the same qualitative conclusions about how each model performs. N CONTINUAL REINFORCEMENT LEARNING We detail the application of MER to deep Q-learning in algorithm 8, using notation from Mnih et al. (2015). Algorithm 8 Deep Q-learning with Meta-Experience Replay (MER) procedure DQN-MER(env, frameLimit, θ, α, β, γ, steps, k, EQ) // Initialize action-value function Q with parameters θ: Q ← Q(θ) // Initialize action-value function Q̂ with the same parameters θ̂ = θ: Q̂ ← Q̂(θ̂) = Q̂(θ) // Initialize experience replay buffer: M ← {} M.age ← 0 while M.age ≤ frameLimit do // Begin new episode: env.reset() // Initialize the state s with the initial observation: while episode not done do // Select with probability p an action a from the set of possible actions: a = { a random action â if p ≤ ε; arg max_{a′} Q(s, a′; θ) if p > ε } // Perform the action a in the environment: s′, r ← env.step(s, a) // Store current transition with reward r: M ← M ∪ {(s, a, r, s′)} (algorithm 3) B1
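The listing above is cut off in this excerpt. For orientation only, the following is a rough Python sketch of the core supervised MER update in the style of algorithm 1 (within-batch SGD steps followed by Reptile-style interpolations within and across batches); the structure and names are assumptions for illustration, not the authors' implementation, and the reservoir-sampling update of the buffer (algorithm 3) would follow each incoming example.

# A rough sketch (illustrative assumptions, not the released code) of the supervised MER
# update from algorithm 1: s replay batches per incoming example, per-example SGD within
# each batch, then Reptile-style interpolation within and across batches.
import copy
import random
import torch

def mer_step(model, loss_fn, buffer, x, y, alpha, beta, gamma, s, k):
    theta_A0 = copy.deepcopy(model.state_dict())         # parameters before any batch
    for _ in range(s):
        theta_W0 = copy.deepcopy(model.state_dict())      # parameters before this batch
        batch = random.sample(buffer, min(k - 1, len(buffer))) + [(x, y)]
        for xb, yb in batch:                              # SGD on one example at a time
            loss = loss_fn(model(xb.unsqueeze(0)), yb.unsqueeze(0))
            grads = torch.autograd.grad(loss, list(model.parameters()))
            with torch.no_grad():
                for p, g in zip(model.parameters(), grads):
                    p -= alpha * g
        with torch.no_grad():                             # within-batch Reptile meta-update
            for name, p in model.named_parameters():
                p.copy_(theta_W0[name] + beta * (p - theta_W0[name]))
    with torch.no_grad():                                 # across-batch Reptile meta-update
        for name, p in model.named_parameters():
            p.copy_(theta_A0[name] + gamma * (p - theta_A0[name]))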
1. What is the focus of the paper regarding streaming learning and continual learning? 2. What are the strengths of the proposed method, particularly in its implementation and results? 3. What are the weaknesses of the paper, especially regarding its experiments and comparisons with other works? 4. How does the reviewer assess the significance and validity of the results, especially in the context of continual learning research? 5. Are there any suggestions or recommendations for future work related to the paper's topic and contributions?
Review
Review The paper considers a number of streaming learning settings with various forms of dataset shift/drift of interest for continual learning research, and proposes a novel regularization-based objective enabled by a replay memory managed using the well known reservoir sampling algorithm. Pros: The new objective is not too surprising, but figuring out how to effectively implement this objective in a streaming setting is the strong point of this paper. Task labels are not used, yet performance seems superior to competing methods, many of which use task labels. Results are good on popular benchmarks, I find the baselines convincing in the supervised case. Cons: Despite somewhat frequent usage, I would like to respectfully point out that Permuted MNIST experiments are not very indicative for a majority of desiderata of interest in continual learning, and i.m.h.o. should be used only as a prototyping tool. To pick one issue, such results can be misleading since the benchmark allows for “trivial” solutions which effectively freeze the upper part of the network and only change first (few) layer(s) which “undo” the permutation. This is an artificial type of dataset shift, and is not realistic for the type of continual learning issues which appear even in single task deep reinforcement learning, where policies or value functions represented by the model need to change substantially across learning. I was pleased to see the RL experiments, which I find more convincing because dataset drifts/shifts are more interesting. Also, such applications of continual learning solutions are attempting to solve a ‘real problem’, or at least something which researchers in that field struggle with. That said, I do have a few suggestions. At first glance, it’s not clear whether anything is learned in the last 3 versions of Catcher, also what the y axis actually means. What is good performance for each game is very specific to your actual settings so I have no reference to compare the scores with. The sequence of games is progressively harder, so it makes sense that scores are lower, but it’s not clear whether your approach impedes learning of new tasks, i.e. what is the price to pay for not forgetting? This is particularly important for the points you’re trying to make because a large number of competing approaches either saturate the available capacity and memory with the first few tasks, or they faithfully model the recent ones. Any improvement there is worth a lot of attention, given proper comparisons. Even if this approach does not strike the ‘optimal’ balance, it is still worth knowing how much training would be required to reach full single-task performance on each game variant, and what kind of forgetting that induces.
ICLR
Title Orthogonalizing Convolutional Layers with the Cayley Transform Abstract Recent work has highlighted several advantages of enforcing orthogonality in the weight layers of deep networks, such as maintaining the stability of activations, preserving gradient norms, and enhancing adversarial robustness by enforcing low Lipschitz constants. Although numerous methods exist for enforcing the orthogonality of fully-connected layers, those for convolutional layers are more heuristic in nature, often focusing on penalty methods or limited classes of convolutions. In this work, we propose and evaluate an alternative approach to directly parameterize convolutional layers that are constrained to be orthogonal. Specifically, we propose to apply the Cayley transform to a skew-symmetric convolution in the Fourier domain, so that the inverse convolution needed by the Cayley transform can be computed efficiently. We compare our method to previous Lipschitz-constrained and orthogonal convolutional layers and show that it indeed preserves orthogonality to a high degree even for large convolutions. Applied to the problem of certified adversarial robustness, we show that networks incorporating the layer outperform existing deterministic methods for certified defense against `2-norm-bounded adversaries, while scaling to larger architectures than previously investigated. Code is available at https://github.com/locuslab/orthogonal-convolutions. 1 Introduction Encouraging orthogonality in neural networks has proven to yield several compelling benefits. For example, orthogonal initializations allow extremely deep vanilla convolutional neural networks to be trained quickly and stably (Xiao et al., 2018; Saxe et al., 2013). And initializations that remain closer to orthogonality throughout training seem to learn faster and generalize better (Pennington et al., 2017). Unlike Lipschitz-constrained layers, orthogonal layers are gradient-norm-preserving (Anil et al., 2019), discouraging vanishing and exploding gradients and stabilizing activations (Rodríguez et al., 2017). Orthogonality is thus a potential alternative to batch normalization in CNNs and can help to remember long-term dependencies in RNNs (Arjovsky et al., 2016; Vorontsov et al., 2017). Constraints and penalty terms encouraging orthogonality can improve generalization in practice (Bansal et al., 2018; Sedghi et al., 2018), improve adversarial robustness by enforcing low Lipschitz constants, and allow deterministic certificates of robustness (Tsuzuku et al., 2018). Despite evidence for the benefits of orthogonality constraints, and while there are many methods to orthogonalize fully-connected layers, the orthogonalization of convolutions has posed challenges. More broadly, current Lipschitz-constrained convolutions rely on spectral normalization and kernel reshaping methods (Tsuzuku et al., 2018), which only allow loose bounds and can cause vanishing gradients. Sedghi et al. (2018) showed how to clip the singular values of convolutions and thus enforce orthogonality, but relied on costly alternating projections to achieve tight constraints. Most recently, Li et al. (2019) introduced the Block Convolution Orthogonal Parameterization (BCOP), which cannot express the full space of orthogonal convolutions. In contrast to previous work, we provide a direct, expressive, and scalable parameterization of orthogonal convolutions. 
Our method relies on the Cayley transform, which is well-known for parameterizing orthogonal matrices in terms of skew-symmetric matrices, and can be easily extended to non-square weight matrices. The transform requires efficiently computing the inverse of a particular convolution in the Fourier domain, which we show works well in practice. We demonstrate that our Cayley layer is indeed orthogonal in practice when implemented in 32-bit precision, irrespective of the number of channels. Further, we compare it to alternative convolutional and Lipschitz-constrained layers: we include them in several architectures and evaluate their deterministic certifiable robustness against an `2-norm-bounded adversary. Our layer provides stateof-the-art results on this task. We also demonstrate that the layers empirically endow a considerable degree of robustness without adversarial training. Our layer generally outperforms the alternatives, particularly for larger architectures. 2 Related Work Orthogonality in neural networks. The benefits of orthogonal weight initializations for dynamical isometry, i.e., ensuring signals propagate through deep networks, are explained by Saxe et al. (2013) and Pennington et al. (2017), with limited theoretical guarantees investigated by Hu et al. (2020). Xiao et al. (2018) provided a method to initialize orthogonal convolutions, and demonstrated that it allows the training of extremely deep CNNs without batch normalization or residual connections. Further, Qi et al. (2020) developed a novel regularization term to encourage orthogonality throughout training and showed its effectiveness for training very deep vanilla networks. The signal-preserving properties of orthogonality can also help with remembering long-term dependencies in RNNs, on which there has been much work (Helfrich et al., 2018; Arjovsky et al., 2016). One way to orthogonalize weight matrices is with the Cayley transform, which is often used in Riemannian optimization (Absil et al., 2009). Helfrich et al. (2018) and Maduranga et al. (2019) avoid vanishing/exploding gradients in RNNs using the scaled Cayley transform. Similarly, LezcanoCasado&Martínez-Rubio (2019) use the exponentialmap, which theCayley transform approximates. Li et al. (2020) derive an iterative approximation of the Cayley transform for orthogonally-constrained optimizers and show it speeds the convergence of CNNs and RNNs. However, they merely orthogonalize a matrix obtained by reshaping the kernel, which is not the same as an orthogonal convolution (Sedghi et al., 2018). Our contribution is unique here in that we parameterize orthogonal convolutions directly, as opposed to reshaping kernels. Bounding neural network Lipschitzness. Orthogonality imposes a strict constraint on the Lipschitz constant, which itself comes with many benefits: Lower Lipschitz constants are associated with improved robustness (Yang et al., 2020) and better generalization bounds (Bartlett et al., 2017). Tsuzuku et al. (2018) showed that neural network classifications can be certified as robust to `2- norm-bounded perturbations given aLipschitz bound and sufficiently confident classifications. Along with Szegedy et al. (2013), they noted that the Lipschitz constant of neural networks can be bounded if the constants of the layers are known. Thus, there is substantial work on Lipschitz-constrained and regularized layers, which we review in Sec. 5. However, Anil et al. (2019) realized that mere Lipschitz constraints can attenuate gradients, unlike orthogonal layers. 
There have been other ideas for calculating and controlling the minimal Lipschitzness of neural networks, e.g., through regularization (Hein & Andriushchenko, 2017), extreme value theory (Weng et al., 2018), or using semi-definite programming (Latorre et al., 2020; Chen et al., 2020; Fazlyab et al., 2019), but constructing bounds from Lipschitz-constrained layers is more scalable and efficient. Besides Tsuzuku et al. (2018)’s strategy for deterministic certifiable robustness, there are many approaches to deterministically verifying neural network defenses using SMT solvers (Huang et al., 2017; Ehlers, 2017; Carlini & Wagner, 2017), integer programming approaches (Lomuscio & Maganti, 2017; Tjeng & Tedrake, 2017; Cheng et al., 2017), or semi-definite programming (Raghunathan et al., 2018). Wong et al. (2018)’s approach to minimize an LP-based bound on the robust loss is more scalable, but networks made from Lipschitz-constrained components can be more efficient still, as shown by Li et al. (2019) who outperform their approach. However, none of these methods yet perform as well as probabilistic methods (Cohen et al., 2019). Consequently, orthogonal layers appear to be an important component to enhance the convergence of deep networks while encouraging robustness and generalization. 3 Background Orthogonality. Since we are concerned with orthogonal convolutions, we review orthogonal matrices: A matrixQ ∈ Rn×n is orthogonal ifQTQ = QQT = I . However, in building neural networks, layers do not always have equal input and output dimensions: more generally, a matrix U ∈ Rm×n is semi-orthogonal if UTU = I or UUT = I . Importantly, ifm ≥ n, then U is also norm-preserving: ‖Ux‖2 = ‖x‖2 for all x ∈ Rn. Ifm < n, then the mapping is merely non-expansive (a contraction), i.e., ‖Ux‖2 ≤ ‖x‖2. A matrix having all singular values equal to 1 is orthogonal, and vice versa. Orthogonal convolutions. The same concept of orthogonality applies to convolutional layers, which are also linear transformations. A convolutional layer conv : Rc×n×n → Rc×n×n with c = cin = cout input and output channels is orthogonal if and only if ‖conv(X)‖F = ‖X‖F for all input tensors X ∈ Rc×n×n; the notion of semi-orthogonality extends similarly for cin 6= cout. Note that orthogonalizing each convolutional kernel as in Lezcano-Casado & Martínez-Rubio (2019); Lezcano-Casado (2019) does not yield an orthogonal (norm-preserving) convolution. Lipschitzness under the `2 norm. A consequence of orthogonality is 1-Lipschitzness. A function f : Rn → Rm is L-Lipschitz with respect to the `2 norm iff ‖f(x) − f(y)‖2 ≤ L‖x − y‖2 for all x, y ∈ Rn. IfL is the smallest such constant for f , then it’s called the Lipschitz constant of f , denoted by Lip(f). An useful property for certifiable robustness is that the Lipschitz constant of the composition of f and g is upper-bounded by the product of their constants: Lip(f ◦ g) ≤ Lip(f)Lip(g). Since simple neural networks are fundamentally just composed functions, this allows us to bound their Lipschitz constants, albeit loosely. We can extend this idea to residual networks using the fact that Lip(f + g) ≤ Lip(f) + Lip(g), which motivates using a convex combination in residual connections. More details can be found in Li et al. (2019); Szegedy et al. (2013). Lipschitz bounds for provable robustness. If we know the Lipschitz constant of the neural network, we can certify that a classification with sufficiently a large margin is robust to `2 perturbations below a certain magnitude. 
Specifically, denote the margin of a classification with label t as
$$M_f(x) = \max\left(0,\; y_t - \max_{i \neq t} y_i\right), \quad (1)$$
which can be interpreted as the distance between the correct logit and the next largest logit. Then if the logit function f has Lipschitz constant L, and $M_f(x) > \sqrt{2}L\epsilon$, then f(x) is certifiably robust to perturbations $\{\delta : \|\delta\|_2 \leq \epsilon\}$. Tsuzuku et al. (2018) and Li et al. (2019) provide proofs. 4 The Cayley transform of a Convolution Before describing our method, we first review discrete convolutions and the Cayley transform; then, we show the need for inverse convolutions and how to compute them efficiently in the Fourier domain, which lets us parameterize orthogonal convolutions via the Cayley transform. The key idea in our method is that multi-channel convolution in the Fourier domain reduces to a batch of matrix-vector products, and making each of those matrices orthogonal makes the convolution orthogonal. We describe our method in more detail in Appendix A and provide a minimal implementation in PyTorch in Appendix E. An unstrided convolutional layer with $c_{in}$ input channels and $c_{out}$ output channels has a weight tensor W of shape $\mathbb{R}^{c_{out} \times c_{in} \times n \times n}$ and takes an input X of shape $\mathbb{R}^{c_{in} \times n \times n}$ to produce an output Y of shape $\mathbb{R}^{c_{out} \times n \times n}$, i.e., $\mathrm{conv}_W : \mathbb{R}^{c_{in} \times n \times n} \to \mathbb{R}^{c_{out} \times n \times n}$. It is easiest to analyze convolutions when they are circular: if the kernel goes out of bounds of X, it wraps around to the other side; this operation can be carried out efficiently in the Fourier domain. Consequently, we focus on circular convolutions. We define $\mathrm{conv}_W(X)$ as the circular convolutional layer with weight tensor $W \in \mathbb{R}^{c_{out} \times c_{in} \times n \times n}$ applied to an input tensor $X \in \mathbb{R}^{c_{in} \times n \times n}$ yielding an output tensor $Y = \mathrm{conv}_W(X) \in \mathbb{R}^{c_{out} \times n \times n}$. Equivalently, we can view $\mathrm{conv}_W(X)$ as the doubly block-circulant matrix $C \in \mathbb{R}^{c_{out}n^2 \times c_{in}n^2}$ corresponding to the circular convolution with weight tensor W applied to the unrolled input tensor $\mathrm{vec}\,X \in \mathbb{R}^{c_{in}n^2 \times 1}$. Similarly, we denote by $\mathrm{conv}^T_W(X)$ the transpose $C^T$ of the same convolution, which can be obtained by transposing the first two channel dimensions of W and flipping each of the last two (kernel) dimensions vertically and horizontally, calling the result W′, and computing $\mathrm{conv}_{W'}(X)$. We denote $\mathrm{conv}^{-1}_W(X)$ as the inverse of the convolution, i.e., with corresponding matrix $C^{-1}$, which is more difficult to compute efficiently. Now we review how to perform a convolution in the spatial domain. We refer to a pixel as a $c_{in}$- or $c_{out}$-dimensional slice of a tensor, like X[:, i, j]. Each of the $n^2$ (i, j) output pixels Y[:, i, j] is computed as follows: for each $c \in [c_{out}]$, compute Y[c, i, j] by centering the tensor W[c] on the (i, j)th pixel of the input and taking a dot product, wrapping around pixels of W that go out of bounds. Typically, W is zero except for a k×k region of the last two (spatial) dimensions, which we call the kernel or the receptive field. Typically, convolutional layers have small kernels, e.g., k = 3. Considering now matrices instead of tensors, the Cayley transform is a bijection between skew-symmetric matrices A and orthogonal matrices Q without −1 eigenvalues:
$$Q = (I - A)(I + A)^{-1}. \quad (2)$$
A matrix is skew-symmetric if $A = -A^T$, and we can skew-symmetrize any square matrix B by computing the skew-symmetric part $A = B - B^T$. The Cayley transform of such a skew-symmetric matrix is always orthogonal, which can be seen by multiplying Q by its transpose and rearranging. 
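As a quick numerical illustration of equation 2 (using NumPy purely for exposition; not taken from the paper's code), the Cayley transform of a skew-symmetrized random matrix is orthogonal and hence norm-preserving:

# A small numerical check (illustrative only) that the Cayley transform of a
# skew-symmetrized matrix is orthogonal and norm-preserving.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = B - B.T                                              # skew-symmetric part, as in the text
Q = (np.eye(8) - A) @ np.linalg.inv(np.eye(8) + A)       # Cayley transform, equation (2)
x = rng.standard_normal(8)
print(np.allclose(Q.T @ Q, np.eye(8)))                   # True: Q^T Q = I
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True: norms are preserved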
We can also apply the Cayley transform to convolutions, noting they are also linear transformations that can be represented as doubly block circulant matrices. While it is possible to construct the matrix C corresponding to a convolution convW and apply the Cayley transform to it, this is highly inefficient in practice: Convolutions can be easily skew-symmetrized by computing convW (X)− convTW (X), but finding their inverse is challenging; instead, we manipulate convolutions in the Fourier domain, taking advantage of the convolution theorem and the efficiency of the fast Fourier transform. According to the 2D convolution theorem (Jain, 1989), the circular convolution of two matrices in the Fourier domain is simply their elementwise product. We will show that the convolution theorem extends to multi-channel convolutions of tensors, in which case convolution reduces to a batch of complex matrix-vector products rather than elementwise products: inverting these smaller matrices is equivalent to inverting the convolution, and finding their skew-Hermitian part is equivalent to skew-symmetrizing the convolution, which allows us to compute the Cayley transform. We define the 2D Discrete (Fast) Fourier Transform for tensors of order ≥ 2 as a mapping FFT : Rm1×...×mr×n×n → Cm1×...×mr×n×n defined by FFT(X)[i1, ..., ir] = FnX[i1, ..., ir]Fn for il ∈ 1, ...,ml and l ∈ 1, ..., r and r ≥ 0, where Fn[i, j] = 1√n exp( −2πı n ) (i−1)(j−1). That is, we treat all but the last two dimensions as batch dimensions. We denote X̃ = FFT(X) for a tensor X . Using the convolution theorem, in the Fourier domain the cth output channel is the sum of the elementwise products of the cin input and weight channels: that is, Ỹ [c] = ∑cin k=1 W̃ [c, k] X̃[k]. Equivalently, working in the Fourier domain, the (i, j)th pixel of the cth output channel is the dot product of the (i, j)th pixel of the cth weight with the (i, j)th input pixel: Ỹ [c, i, j] = W̃ [c, :, i, j] · X̃[:, i, j]. From this, we can see that the whole (i, j)th Fourier-domain output pixel is the matrix-vector product FFT(convW (X))[:, i, j] = W̃ [:, :, i, j]X̃[:, i, j]. (3) This interpretation gives a way to compute the inverse convolution as required for the Cayley transform, assuming cin = cout: FFT(conv−1W (X))[:, i, j] = W̃ [:, :, i, j] −1X̃[:, i, j]. (4) Given this method to compute inverse convolutions, we can now parameterize an orthogonal convolution with a skew-symmetric convolution through the Cayley transform, highlighted in Algorithm 1: In line 1, we use the Fast Fourier Transform on the weight and input tensors. In line 4, we compute the Fourier domain weights for the skew-symmetric convolution (the Fourier representation is skew-Hermitian, thus the use of the conjugate transpose). Next, in lines 4–5 we compute the inverses required for FFT(conv−1I+A(x)) and use them to compute the Cayley transform written as (I+A)−1−A(I+A)−1 in line 6. Finally, we get our spatial domain result with the inverse FFT,which is always exactly real despite workingwith complexmatrices in the Fourier domain (seeAppendixA). 4.1 Properties of our approach It is important to note that the inverse in the Cayley transform always exists: Because A is skewsymmetric, it has all imaginary eigenvalues, so I + A has all nonzero eigenvalues and is thus nonsingular. Since only square matrices can be skew-symmetrized and inverted, Algorithm 1 only Algorithm 1: Orthogonal convolution via the Cayley transform. 
4.1 Properties of our approach

It is important to note that the inverse in the Cayley transform always exists: Because A is skew-symmetric, it has all imaginary eigenvalues, so I + A has all nonzero eigenvalues and is thus nonsingular.

Algorithm 1: Orthogonal convolution via the Cayley transform.
Input: A tensor X ∈ Rcin×n×n and convolution weights W ∈ Rcout×cin×n×n, with cin = cout.
Output: A tensor Y ∈ Rcout×n×n, the orthogonal convolution parameterized by W applied to X.
1  W̃ := FFT(W) ∈ Ccout×cin×n×n, X̃ := FFT(X) ∈ Ccin×n×n
2  for all i, j ∈ 1, . . . , n  // In parallel
3  do
4    Ã[:, :, i, j] := W̃[:, :, i, j] − W̃[:, :, i, j]∗
5    Ỹ[:, i, j] := (I + Ã[:, :, i, j])−1 X̃[:, i, j]
6    Z̃[:, i, j] := Ỹ[:, i, j] − Ã[:, :, i, j] Ỹ[:, i, j]
7  end
8  return FFT−1(Z̃).real

Since only square matrices can be skew-symmetrized and inverted, Algorithm 1 only works for cin = cout, but can be extended to the rectangular case where cout ≥ cin by padding the matrix with zeros and then projecting out the first cin columns after the transform, resulting in a norm-preserving semi-orthogonal matrix; the case cin ≥ cout follows similarly, but the resulting matrix is merely non-expansive. With an efficient implementation in terms of the Schur complement (Appendix A.1, Eq. A22), this only requires inverting a square matrix of order min(cin, cout). We saw that learning was easier if we parameterized W in Algorithm 1 by W = gV/‖V‖F for a learnable scalar g and tensor V, as in weight normalization (Salimans & Kingma, 2016).

Comparison to BCOP. While the Block Convolution Orthogonal Parameterization (BCOP) can only express orthogonal convolutions with fixed k × k-sized kernels, a Cayley convolutional layer can represent orthogonal convolutions with a learnable kernel size up to the input size, and it does this without costly projections, unlike Sedghi et al. (2018). However, our parameterization as presented is limited to orthogonal convolutions without −1 eigenvalues. Hence, our parameterization is incomplete; besides kernel size restrictions, BCOP was also demonstrated to incompletely represent the space of orthogonal convolutions, though the details of the problem were unresolved (Li et al., 2019). Our method can represent such orthogonal convolutions by multiplying the Cayley transform by a fixed diagonal matrix with ±1 entries (Gallier, 2006; Helfrich et al., 2018); however, we cannot optimize over the discrete set of such scaling matrices, so our method cannot optimize over all orthogonal convolutions, nor all special orthogonal convolutions. In our experiments, we did not find improvements from adding randomly initialized scaling matrices as in Helfrich et al. (2018).

Limitations of our method. As our method requires computing an inverse convolution, it is generally incompatible with strided convolutions; e.g., a convolution with stride 2 cannot be inverted since it involves noninvertible downsampling. It is possible to apply our method to stride-2 convolutions by simultaneously increasing the number of output channels by 4× to compensate for the 2× downsampling of the two spatial dimensions, though we found this to be computationally inefficient. Instead, we use the invertible downsampling layer from Jacobsen et al. (2018) to emulate striding. The convolution resulting from our method is circular, which is the same as using the circular padding mode instead of zero padding in, e.g., PyTorch, and will not have a large impact on performance if subjects tend to be centered in images in the data set. BCOP (Li et al., 2019) and Sedghi et al. (2018) also restricted their attention to circular convolutions.
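To make Algorithm 1 concrete, here is a minimal, unoptimized sketch of the square case (cin = cout) that loops over frequencies exactly as written and then checks that the resulting convolution preserves norms; the channel count, input size, and random weights are arbitrary test values, and the efficient batched version is the one given in Appendix E.

import torch

def cayley_conv_naive(W, X):
    # W: (c, c, n, n) kernel padded to the input size; X: (c, n, n); both real.
    c, _, n, _ = W.shape
    Wf, Xf = torch.fft.fft2(W), torch.fft.fft2(X)
    Zf = torch.zeros_like(Xf)
    I = torch.eye(c, dtype=Wf.dtype)
    for i in range(n):
        for j in range(n):
            A = Wf[:, :, i, j] - Wf[:, :, i, j].conj().T   # skew-Hermitian part (line 4)
            Y = torch.inverse(I + A) @ Xf[:, i, j]          # (I + A)^{-1} x (line 5)
            Zf[:, i, j] = Y - A @ Y                         # (I - A)(I + A)^{-1} x (line 6)
    return torch.fft.ifft2(Zf).real                         # real up to round-off

torch.manual_seed(0)
c, n = 4, 8
W, X = torch.randn(c, c, n, n), torch.randn(c, n, n)
print((cayley_conv_naive(W, X).norm() / X.norm()).item())   # ~1.0: norm-preserving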
Our method is substantially more expensive than plain convolutional layers, though in most practical settings it is more efficient than existing work: We plot the runtimes of our Cayley layer, BCOP, and plain convolutions in a variety of settings in Figure 6 for comparison, and we also report runtimes in Tables 4 and 5 (see Appendix C).

Runtime comparison. Our Cayley layer does cincout FFTs on n × n matrices (i.e., the kernels padded to the input size), and cin FFTs for each n × n input. These have complexity O(cincoutn² log n) and O(coutn² log n) respectively. The most expensive step is computing the inverse of n² square matrices of order c = min(cin, cout), with complexity O(n²c³), similarly to the method of Sedghi et al. (2018). We note like the authors that parallelization could effectively make this O(n² log n + c³), and it is quite feasible in practice. As in Li et al. (2020), the inverse could be replaced with an iterative approximation, but we did not find it necessary for our relatively small architectures. For comparison, the related layers BCOP and RKO (Sec. 5) take only O(c³) to orthogonalize the convolution, and OSSN takes O(n²c³) (Li et al., 2019). In practice, we found our Cayley layer takes anywhere from 1/2× to 4× as long as BCOP, depending on the architecture (see Appendix C).

5 Experiments

Our experiments have two goals: First, we show that our layer remains orthogonal in practice. Second, we compare the performance of our layer versus alternatives (particularly BCOP) on two adversarial robustness tasks on CIFAR-10: We investigate the certifiable robustness against an ℓ2-norm-bounded adversary using the idea of Lipschitz Margin Training (Tsuzuku et al., 2018), and then we look at robustness in practice against a powerful adversary. We find that our layer is always orthogonal and performs relatively well in the robustness tasks. Separately, we show our layer improves on the Wasserstein distance estimation task from Li et al. (2019) in Appendix D.2. For alternative layers, we adopt the naming scheme for previous work on Lipschitz-constrained convolutions from Li et al. (2019), and we compare directly against their implementations. We outline the methods below.

RKO. A convolution can be represented as a matrix-vector product, e.g., using a doubly block-circulant matrix and the unrolled input. Alternatively, one could stack each k × k receptive field, and multiply by the cout × k²cin reshaped kernel matrix (Cisse et al., 2017). The spectral norm of this reshaped matrix is bounded by the convolution’s true spectral norm (Tsuzuku et al., 2018). Consequently, reshaped kernel methods orthogonalize this reshaped matrix, upper-bounding the singular values of the convolution by 1. Cisse et al. (2017) created a penalty term based on this matrix; instead, like Li et al. (2019), we orthogonalize the reshaped matrix directly, called reshaped kernel orthogonalization (RKO). They used an iterative algorithm for orthogonalization (Björck & Bowie, 1971); for comparison, we implement RKO using the Cayley transform instead of Björck orthogonalization, called CRKO.

OSSN. A prevalent idea to constrain the Lipschitz constants of convolutions is to approximate the maximum singular value and normalize it out: Miyato et al. (2018) used the power method on the matrix W associated with the convolution, i.e., si+1 := WTWsi, and σmax ≈ ‖Wsn‖/‖sn‖. Gouk et al. (2018) improved upon this idea by applying the power method directly to convolutions, using the transposed convolution for WT.
However, this one-sided spectral normalization is quite restrictive; dividing out σmax can make other singular values vanishingly small.

SVCM. Sedghi et al. (2018) showed how to exactly compute the singular values of convolutional layers using the Fourier transform before the SVD, and proposed a singular value clipping method. However, the clipped convolution can have an arbitrarily large kernel size, so they resorted to alternating projections between orthogonal convolutions and k × k-kernel convolutions, which can be expensive. Like Li et al. (2019), we found that ≈ 50 projections are needed for orthogonalization.

BCOP. The Block Convolution Orthogonal Parameterization extends the orthogonal initialization method of Xiao et al. (2018). It differentiably parameterizes k × k orthogonal convolutions with an orthogonal matrix and 2(k − 1) symmetric projection matrices. The method only parameterizes the subspace of orthogonal convolutions with k × k-sized kernels, but is quite expressive empirically. Internally, orthogonalization is done with the method by Björck & Bowie (1971).

Note that BCOP and SVCM are the only other orthogonal convolutional layers, and SVCM only for a large number of projections. RKO, CRKO, and OSSN merely upper-bound the Lipschitz constant of the layer by 1.

5.1 Training and Architectural Details

Training details. For all experiments, we used CIFAR-10 with standard augmentation, i.e., random cropping and flipping. Inputs to the model are always in the range [0, 1]; we implement normalization as a layer for compatibility with AutoAttack. For each architecture/convolution pair, we tried learning rates in {10−5, 10−4, 10−3, 10−2, 10−1}, choosing the one with the best test accuracy. Most often, 0.001 is appropriate. We found that a piecewise triangular learning rate, as used in top performers in the DAWNBench competition (Coleman et al., 2017), performed best. Adam (Kingma & Ba, 2014) showed a significant improvement over plain SGD, and we used it for all experiments.

Loss function. Inspired by Tsuzuku et al. (2018), Anil et al. (2019) and Li et al. (2019) used multi-class hinge loss where the margin is the robustness certificate √2Lε0. We corroborate their finding that this works better than cross-entropy, and similarly use ε0 = 0.5. Varying ε0 controls a tradeoff between accuracy and robustness (see Fig. 5).

Initialization. We found that the standard uniform initialization in PyTorch performed well for our layer. We adjusted the variance, but significant differences required order-of-magnitude changes. For residual networks, we tried Fixup initialization (Zhang et al., 2019), but saw no significant improvement. We hypothesize this is due to (1) the learnable scaling parameter inside the Cayley transform, which changes significantly during training, and (2) the dynamical isometry inherent with orthogonal layers. For alternative layers, we used the initializations from Li et al. (2019).

Architecture considerations. For fair comparison with previous work, we use the “large” network from Li et al. (2019), which was first implemented in Kolter & Wong (2017)’s work on certifiable robustness. We also compare the performance of the different layers in a 1-Lipschitz-constrained version of ResNet9 (He et al., 2016) and WideResNet10-10 (Zagoruyko & Komodakis, 2016). The architectures we could investigate were limited by compute and memory, as all the layers compared are relatively expensive.
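A minimal sketch of the multi-class hinge loss with margin √2Lε0 described under “Loss function” above; the function name, placeholder logits, and default arguments are our own choices for illustration, not necessarily the exact implementation used in the cited works.

import torch
import torch.nn.functional as F

def margin_hinge_loss(logits, labels, L=1.0, eps0=0.5):
    # Penalize any wrong logit that comes within sqrt(2)*L*eps0 of the true logit.
    margin = (2 ** 0.5) * L * eps0
    correct = logits.gather(1, labels[:, None])                  # (batch, 1)
    hinge = F.relu(margin - (correct - logits))                  # (batch, classes)
    mask = F.one_hot(labels, logits.shape[1]).bool()             # ignore the true class
    return hinge.masked_fill(mask, 0.0).sum(dim=1).mean()

logits = torch.tensor([[2.0, 0.1, -0.3], [0.2, 0.4, 0.1]], requires_grad=True)
labels = torch.tensor([0, 1])
loss = margin_hinge_loss(logits, labels)
loss.backward()
print(loss.item())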
For RKO, OSSN, SVCM, and BCOP, we use Björck orthogonalization (Björck & Bowie, 1971) for fully-connected layers, as reported in Li et al. (2019); Anil et al. (2019). For our Cayley convolutional layer and CRKO, we orthogonalize the fully-connected layers with the Cayley transform to be consistent with our method. We found the gradient-norm-preserving GroupSort activation function from Anil et al. (2019) to be more effective than ReLU, and we used a group size of 2, i.e., MaxMin.

Strided convolutions. For the KWLarge network, we used “invertible downsampling”, which emulates striding by rearranging the inputs to have 4× more channels while halving the two spatial dimensions and reducing the kernel size to ⌊k/2⌋ (Jacobsen et al., 2018). For the residual networks, we simply used a version of pooling, noting that average pooling is still non-expansive when multiplied by its kernel size, which allows us to use more of the network’s capacity. We also halved the kernel size of the last pooling layer, instead adding another fully-connected layer; empirically, this resulted in higher local Lipschitz constants.

Ensuring Lipschitz constraints. Batch normalization layers scale their output, so they can’t be included in our 1-Lipschitz-constrained architecture; the gradient-norm-preserving properties of our layers compensate for this. We ensure residual connections are non-expansive by making them a convex combination with a new learnable parameter α, i.e., g(x) = αf(x) + (1 − α)x, for α ∈ [0, 1]. To ensure the latter constraint, we use sigmoid(α). We can tune the overall Lipschitz bound to a given L using the Lipschitz composition property, multiplying each of the m layers by L^(1/m).

5.2 Adversarial Robustness

For certifiable robustness, we report the fraction of certifiable test points: i.e., those with classification margin Mf(x) greater than √2Lε, where ε = 36/255. For empirical defense, we use both vanilla projected gradient descent and AutoAttack by Croce & Hein (2020). For PGD, we use α = ε/4.0 with 10 iterations. Within AutoAttack, we use both APGD-CE and APGD-DLR, finding the decision-based attacks provided no improvements. We report on ε = 36/255 for consistency with Li et al. (2019) and previous work on deterministic certifiable robustness (Wong et al., 2018). Additionally, we found it useful to report on empirical local Lipschitz constants throughout training using the PGD-like method from Yang et al. (2020).

5.3 Results

Practical orthogonality. We show that our layer remains very close to orthogonality in practice, both before and after learning, when implemented in 32-bit precision. We investigated Cayley layers from one of our ResNet9 architectures, running them on random tensors to see if their norm is preserved, which is equivalent to orthogonality. We found that ‖Conv(x)‖/‖x‖, the extent to which our layer is gradient norm preserving, is always extremely close to 1. We illustrate the small discrepancies, easily bounded between 0.99999 and 1.00001, in Figure 1. Cayley layers which do not change or increase the number of channels are guaranteed to be orthogonal, which we see in practice for graphs (b, c, d, e). Those which decrease the number of channels can only be non-expansive, and in fact the layer seems to become slightly more norm-preserving after training (a). In short, our Cayley layer can capture the full benefits of orthogonality.
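A sketch of the certified-accuracy metric used in the next paragraph: the fraction of test points whose margin exceeds √2·L·ε for ε = 36/255, assuming the network is (at most) 1-Lipschitz. The random logits and labels below are stand-ins for actual model outputs on a test set.

import torch

def certified_fraction(logits, labels, L=1.0, eps=36 / 255):
    correct = logits.gather(1, labels[:, None]).squeeze(1)
    others = logits.scatter(1, labels[:, None], float("-inf")).max(dim=1).values
    margin = torch.clamp(correct - others, min=0.0)
    return (margin > (2 ** 0.5) * L * eps).float().mean().item()

torch.manual_seed(0)
logits = torch.randn(1000, 10)              # placeholder model outputs
labels = torch.randint(0, 10, (1000,))
print(certified_fraction(logits, labels))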
Certifiable robustness. We use our layer and alternatives within the KWLarge architecture for a more direct comparison to previous work on deterministic certifiable robustness (Li et al., 2019; Wong et al., 2018). As in Li et al. (2019), we got the best performance without normalizing inputs, and can thus say that all networks compared here are at most 1-Lipschitz. Our layer outperforms BCOP on this task (see Table 1), and is thus state-of-the-art, getting on average 75.33% clean test accuracy and 59.16% certifiable robust accuracy against adversarial perturbations with norm less than ε = 36/255. In contrast, BCOP gets 75.11% test accuracy and 58.29% certifiable robust accuracy. The reshaped kernel methods perform only a percent or two worse on this task, while the spectral normalization and clipping methods lag behind. We assumed that a layer is only meaningfully better than another if both the test and robust accuracy are improved; otherwise, the methods may simply occupy different parts of the tradeoff curve.

Since reshaped kernel methods can encourage smaller Lipschitz constants than orthogonal layers (Sedghi et al., 2018), we investigated the clean vs. certifiable robust accuracy tradeoff enabled by scaling the Lipschitz upper bound of the network, visualized in Figure 2. To that end, in light of the competitiveness of RKO, we chose a Lipschitz upper bound of 0.85 which gave our Cayley layer similar test accuracy; this allowed for even higher certifiable robustness of 59.99%, but lower test accuracy of 74.35%. Overall, we were surprised by the similarity between the four top-performing methods after scaling Lipschitz constants.

We were not able to improve certifiable accuracy with ResNets. However, it was useful to increase the kernel size: we found 5 was an improvement in accuracy, while 7 and 9 were slightly worse. (Since our method operates in the Fourier domain, increases in kernel size incur no extra cost.) We also saw an improvement from scaling up the width of each layer of KWLarge, and our Cayley layer was substantially faster than BCOP as the width of KWLarge increased (see Appendix C). Multiplying the width by 3 and increasing the kernel size to 5, we were able to get 61.13% certified robust accuracy with our layer, and 60.55% with BCOP.

Empirical robustness. Previous work has shown that adversarial robustness correlates with lower Lipschitz constants. Thus, we investigated the robustness endowed by our layer against ℓ2 gradient-based adversaries. Here, we got better accuracy with the standard practice of normalizing inputs. Our layer outperformed the others in ResNet9 and WideResNet10-10 architectures; results were less decisive for KWLarge (see Appendix B). For the WideResNet, we got 82.99% clean accuracy and 73.16% robust accuracy for ε = 36/255. For comparison, the state-of-the-art achieves 91.08% clean accuracy and 72.91% robust accuracy for ε = 0.5 using a ResNet50 with adversarial training and additional unlabeled data (Augustin et al., 2020). We visualize the tradeoffs for our residual networks in Figure 3, noting that they empirically have smaller local Lipschitz constants than KWLarge. While our layer outperforms others for the default Lipschitz bound of 1, and is consistently slightly better than BCOP, RKO can perform similarly well for larger bounds. This provides some support for studies showing that hard constraints like ours may not match the performance of softer constraints, such as RKO and penalty terms (Bansal et al., 2018; Vorontsov et al., 2017).
6 Conclusion

In this paper, we presented a new, expressive parameterization of orthogonal convolutions using the Cayley transform. Unlike previous approaches to Lipschitz-constrained convolutions, ours gives deep networks the full benefits of orthogonality, such as gradient norm preservation. We showed empirically that our method indeed maintains a high degree of orthogonality both before and after learning, and also scales better to some architectures than previous approaches. Using our layer, we were able to improve upon the state-of-the-art in deterministic certifiable robustness against an ℓ2-norm-bounded adversary, and also showed that it endows networks with considerable inherent robustness empirically. While our layer offers benefits theoretically, we observed that heuristics involving orthogonalizing reshaped kernels were also quite effective for empirical robustness. Orthogonal convolutions may only show their true advantage in gradient norm preservation for deeper networks than we investigated. In light of our experiments in scaling the Lipschitz bound, we hypothesize that not orthogonality, but instead the ability of layers such as ours to exert control over the Lipschitz constant, may be best for the robustness/accuracy tradeoff. Future work may avoid expensive inverses using approximations or the exponential map, or compare various orthogonal and Lipschitz-constrained layers in the context of very deep networks.

Acknowledgments

We thank Shaojie Bai, Chun Kai Ling, Eric Wong, and the anonymous reviewers for helpful feedback and discussions. This work was partially supported under DARPA grant number HR00112020006.

A Orthogonalizing Convolutions in the Fourier Domain

Our method relies on the fact that a multi-channel circular convolution can be block-diagonalized by a suitable Discrete Fourier Transform matrix. We show how this follows from the 2D convolution theorem (Jain, 1989, p. 145) below.

Definition A.1. Fn is the DFT matrix for sequences of length n; we drop the subscript when it can be inferred from context.

Definition A.2. We define convW(X) as in Section 4; if cin = cout = 1, we drop the channel axes, i.e., for X, W ∈ Rn×n, the 2D circular convolution of X with W is convW(X) ∈ Rn×n.

Theorem A.1. If C ∈ Rn²×n² represents a 2D circular convolution with weights W ∈ Rn×n operating on a vectorized input vec(X) ∈ Rn²×1, with X ∈ Rn×n, then it can be diagonalized as (F ⊗ F)C(F∗ ⊗ F∗) = D.

Proof. According to the 2D convolution theorem, we can implement a single-channel 2D circular convolution by computing the elementwise product of the DFT of the filter and input signals:

(FWF) ⊙ (FXF) = F convW(X) F.   (A1)

This elementwise product is easier to work with mathematically if we represent it as a diagonal-matrix-vector product after vectorizing the equation:

diag(vec(FWF)) vec(FXF) = vec(F convW(X) F).   (A2)

We can then rearrange this using vec(ABC) = (CT ⊗ A) vec(B) and the symmetry of F:

diag(vec(FWF)) (F ⊗ F) vec(X) = (F ⊗ F) vec(convW(X)).   (A3)

Left-multiplying by the inverse of F ⊗ F and noting C vec(X) = vec(convW(X)), we get the result

(F∗ ⊗ F∗) diag(vec(FWF)) (F ⊗ F) = C  ⇒  diag(vec(FWF)) = (F ⊗ F) C (F∗ ⊗ F∗),   (A4)

which shows that the (doubly-block-circulant) matrix C is diagonalized by F ⊗ F. An alternate proof can be found in Jain (1989, p. 150).
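The sketch below numerically verifies the diagonalization in Theorem A.1 for a small single-channel example: it builds the doubly block-circulant matrix of a (plain, uncentered) circular convolution column by column and checks that conjugating by F ⊗ F yields a diagonal matrix whose diagonal is the 2D DFT of the kernel. The size n = 4 and random kernel are arbitrary; with the unitary F used here, the diagonal comes out as the unnormalized FFT of W.

import numpy as np

def circ_conv(W, X):
    # Plain (uncentered) single-channel 2D circular convolution.
    n = X.shape[0]
    Y = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            for a in range(n):
                for b in range(n):
                    Y[i, j] += W[a, b] * X[(i - a) % n, (j - b) % n]
    return Y

n = 4
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n))

# Doubly block-circulant matrix C with C vec(X) = vec(conv_W(X)), column-major vec.
C = np.zeros((n * n, n * n))
for k in range(n * n):
    E = np.zeros(n * n)
    E[k] = 1.0
    C[:, k] = circ_conv(W, E.reshape(n, n, order="F")).flatten(order="F")

F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
D = np.kron(F, F) @ C @ np.kron(F.conj(), F.conj())
print(np.abs(D - np.diag(np.diag(D))).max())                        # ~0: D is diagonal
print(np.allclose(np.diag(D), np.fft.fft2(W).flatten(order="F")))   # True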
Now we can consider the case where we have a 2D circular convolution C ∈ Rcoutn²×cinn² with cin input channels and cout output channels. Here, C has cout × cin blocks, each of which is a circular convolution Cij ∈ Rn²×n². The input image is vec X = [vecT X1, . . . , vecT Xcin]T ∈ Rcinn²×1, where Xi is the ith channel of X.

Corollary A.1.1. If C ∈ Rcoutn²×cinn² represents a 2D circular convolution with cin input channels and cout output channels, then it can be block diagonalized as Fcout C F∗cin = D, where Fc = Sc,n² (Ic ⊗ (F ⊗ F)), Sc,n² is a permutation matrix, Ik is the identity matrix of order k, and D is block diagonal with n² blocks of size cout × cin.

Proof. We first look at each of the blocks of C individually, referring to D̂ as the block matrix before applying the S permutations, i.e., D̂ = STcout,n² D Scin,n², so that:

D̂ij = [(Icout ⊗ (F ⊗ F)) C (Icin ⊗ (F∗ ⊗ F∗))]ij = (F ⊗ F) Cij (F∗ ⊗ F∗) = diag(vec(FWijF)),   (A5)

where Wij are the weights of the (ij)th single-channel convolution, using Theorem A.1. That is, D̂ is a block matrix of diagonal matrices. Then, let Sa,b be the perfect shuffle matrix that permutes the block matrix of diagonal matrices to a block diagonal matrix. Sa,b can be constructed by subselecting rows of the identity matrix. Using slice notation:

Sa,b = [Iab(1 : b : ab, :) ; Iab(2 : b : ab, :) ; . . . ; Iab(b : b : ab, :)].   (A6)

As an example, take a 2 × 3 block matrix D̂ of 4 × 4 diagonal blocks, with D̂11 = diag(a, b, c, d), D̂12 = diag(e, f, g, h), D̂13 = diag(i, j, k, l), D̂21 = diag(m, n, o, p), D̂22 = diag(q, r, s, t), D̂23 = diag(u, v, w, x). Then

S2,4 D̂ ST3,4 = D,   (A7)

where D is block diagonal with four 2 × 3 blocks, D1 = [a e i ; m q u], D2 = [b f j ; n r v], D3 = [c g k ; o s w], D4 = [d h l ; p t x] (semicolons separate matrix rows).

Then, with the perfect shuffle matrix, we can compute the block diagonal matrix D as:

Scout,n² D̂ STcin,n² = Scout,n² (Icout ⊗ (F ⊗ F)) C (Icin ⊗ (F∗ ⊗ F∗)) STcin,n² = Fcout C F∗cin = D.   (A8)

The effect of left- and right-multiplying with the perfect shuffle matrix is to create a new matrix D from D̂ such that [Dk]ij = [D̂ij]kk, where the subscript inside the brackets refers to the kth diagonal block and the (ij)th block respectively.

Remark. It is much simpler to compute D (here wfft) in tensor form given the convolution weights w as a cout × cin × n × n tensor: wfft = fft2(w).reshape(cout, cin, n**2).permute(2, 0, 1).

Definition A.3. The Cayley transform is a bijection between skew-Hermitian matrices and unitary matrices; for real matrices, it is a bijection between skew-symmetric and orthogonal matrices. We apply the Cayley transform to an arbitrary matrix by first computing its skew-Hermitian part: we define the function cayley : Cm×m → Cm×m by cayley(B) = (Im − B + B∗)(Im + B − B∗)−1, where we compute the skew-Hermitian part of B inline as B − B∗. Note that the Cayley transform of a real matrix is always real, i.e., Im(B) = 0 ⇒ Im(cayley(B)) = 0, in which case B − B∗ = B − BT is a skew-symmetric matrix.

We now note a simple but important fact that we will use to show that our convolutions are always exactly real despite manipulating their complex representations in the Fourier domain.

Lemma A.2. Say J ∈ Cm×m is unitary so that J∗J = I, and B = JB̃J∗ for B ∈ Rm×m and B̃ ∈ Cm×m. Then cayley(B) = J cayley(B̃) J∗.

Proof. First note that B = JB̃J∗ implies BT = B∗ = (JB̃J∗)∗ = JB̃∗J∗. Then

cayley(B) = (I − B + BT)(I + B − BT)−1
= (I − JB̃J∗ + JB̃∗J∗)(I + JB̃J∗ − JB̃∗J∗)−1
= J(I − B̃ + B̃∗)J∗ [J(I + B̃ − B̃∗)J∗]−1
= J(I − B̃ + B̃∗)J∗ [J(I + B̃ − B̃∗)−1J∗]
= J(I − B̃ + B̃∗)(I + B̃ − B̃∗)−1J∗ = J cayley(B̃) J∗.   (A9)

For the rest of this section, we drop the subscripts of F and S when they can be inferred from context.
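A quick numerical check of Definition A.3 and Lemma A.2: applying cayley to a complex representation B̃ = J∗BJ of a real matrix B and conjugating back gives the same real, orthogonal result as applying it directly. The matrix size and the choice of the DFT matrix as the unitary J are arbitrary.

import numpy as np

def cayley(B):
    # cayley(B) = (I - A)(I + A)^{-1} with A = B - B^* the skew-Hermitian part of B.
    I = np.eye(B.shape[0], dtype=B.dtype)
    A = B - B.conj().T
    return (I - A) @ np.linalg.inv(I + A)

rng = np.random.default_rng(0)
m = 5
B = rng.standard_normal((m, m))
J = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m) / np.sqrt(m)  # unitary

B_tilde = J.conj().T @ B @ J                        # so that B = J B_tilde J^*
out = J @ cayley(B_tilde) @ J.conj().T
print(np.abs(out.imag).max())                       # ~0: the result is real
print(np.allclose(out, cayley(B)))                  # True: matches Lemma A.2
print(np.allclose(out @ out.conj().T, np.eye(m)))   # True: orthogonal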
Theorem A.3. When cin = cout = c, applying the Cayley transform to the block diagonal matrix D results in a real, orthogonal multi-channel 2D circular convolution: cayley(C) = F∗ cayley(D) F.

Proof. Note that F is unitary:

FF∗ = S(Ic ⊗ (F ⊗ F))(Ic ⊗ (F∗ ⊗ F∗))ST = S Icn² ST = SST = Icn²,   (A10)

since S is a permutation matrix and is thus orthogonal. Then apply Lemma A.2, where we have J = F∗, B = C, and B̃ = D, to see the result. Note that cayley(C) is real because C is real; that is, even though we apply the Cayley transform to skew-Hermitian matrices in the Fourier domain, the resulting convolution is real.

Remark. While we deal with skew-Hermitian matrices in the Fourier domain, we are still effectively parameterizing the Cayley transform in terms of skew-symmetric matrices: as in the note in Lemma A.2, we can see that

C = F∗DF ⇒ C − CT = C − C∗ = F∗DF − F∗D∗F = F∗(D − D∗)F,   (A11)

where C is real, D is complex, and C − CT is skew-symmetric (in the spatial domain) despite computing it with a skew-Hermitian matrix D − D∗ in the Fourier domain.

Remark. Since D is block diagonal, we only need to apply the Cayley transform to (and thus invert) its n² blocks of size c × c, which are much smaller than the whole matrix:

cayley(D) = diag(cayley(D1), . . . , cayley(Dn²)).   (A12)

A.1 Semi-Orthogonal Convolutions

In many cases, convolutional layers do not have cin = cout, in which case they cannot be orthogonal. Rather, we must resort to enforcing semi-orthogonality. We can semi-orthogonalize convolutions using the same techniques as above.

Lemma A.4. Right-padding the multi-channel 2D circular convolution matrix C (from cin to cout channels) with dn² columns of zeros is equivalent to padding each diagonal block of the corresponding block-diagonal matrix D on the right with d columns of zeros:

[C 0dn²] = F∗ diag([D1 0d], . . . , [Dn² 0d]) F,   (A13)

where 0k refers to k columns of zeros and a compatible number of rows.

Proof. For a fixed column j, note that

[Dk]ij = 0 for all i, k ⟺ [D̂ij]kk = 0 for all i, k ⟺ Cij = 0 for all i,   (A14)

since D̂ij = (F ⊗ F)Cij(F∗ ⊗ F∗) = 0 only when Cij = 0. Apply this for j = cin + 1, . . . , cin + d.

Lemma A.5. Projecting out d blocks of columns of C is equivalent to projecting out d columns of each of the diagonal blocks of D:

C [Idn² ; 0] = F∗ diag(D1 [Id ; 0], . . . , Dn² [Id ; 0]) F,   (A15)

where [Ik ; 0] denotes the identity stacked on top of a compatible block of zeros.

Proof. This proceeds similarly to the previous lemma: removing columns of each of the n² matrices D1, . . . , Dn² implies removing the corresponding blocks of columns of D̂, and thus of C.

Theorem A.6. If C is a 2D multi-channel convolution with cin ≤ cout, then letting d = cout − cin,

cayley([C 0dn²]) [Idn² ; 0] = F∗ diag(cayley([D1 0d]) [Id ; 0], . . . , cayley([Dn² 0d]) [Id ; 0]) F,   (A16)

which is a real 2D multi-channel semi-orthogonal circular convolution.

Proof. For the first step, we use Lemma A.4 for right padding, getting

[C 0dn²] = F∗ diag([D1 0d], . . . , [Dn² 0d]) F.   (A17)

Then, noting that [C 0dn²] is a convolution matrix with cin = cout, we can apply Theorem A.3 (and the following remark) to get:

cayley([C 0dn²]) = F∗ diag(cayley([D1 0d]), . . . , cayley([Dn² 0d])) F.   (A18)

Since cayley([C 0dn²]) is still a real convolution matrix, we can apply Lemma A.5 to get the result. This demonstrates that we can semi-orthogonalize convolutions with cin ≠ cout by first padding them so that cin = cout; despite performing padding, the Cayley transform, and projections on complex matrices in the Fourier domain, we have shown that the resulting convolution is still real.
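A per-frequency sketch of the padding-and-projection step for the semi-orthogonal case (cin ≤ cout): pad one Fourier-domain block to a square matrix, apply the Cayley transform, and keep the first cin columns, which are then orthonormal. The random complex matrix below stands in for one block Dk, and keeping the first cin columns follows the projection convention described in Section 4.1.

import numpy as np

def cayley(B):
    I = np.eye(B.shape[0], dtype=complex)
    A = B - B.conj().T
    return (I - A) @ np.linalg.inv(I + A)

rng = np.random.default_rng(0)
cin, cout = 3, 5
D_k = rng.standard_normal((cout, cin)) + 1j * rng.standard_normal((cout, cin))

padded = np.hstack([D_k, np.zeros((cout, cout - cin))])   # pad to a square matrix
Q = cayley(padded)[:, :cin]                               # keep the first cin columns
print(np.allclose(Q.conj().T @ Q, np.eye(cin)))           # True: semi-orthogonal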
In practice, we do not literally perform padding nor projections; we explain how to do an equivalent but more efficient computation on each diagonal block Dk ∈ Ccout×cin below.

Proposition A.7. We can efficiently compute the Cayley transform for semi-orthogonalization, i.e., cayley([W 0d]) [Id ; 0d], when cin ≤ cout by writing the inverse in terms of the Schur complement.

Proof. We can partition W ∈ Ccout×cin into its top part U ∈ Ccin×cin and bottom part V ∈ C(cout−cin)×cin, and then write the padded matrix [W 0cout−cin] ∈ Ccout×cout as

[W 0cout−cin] = [U 0 ; V 0],   (A19)

where semicolons separate block rows. Taking the skew-Hermitian part and applying the Cayley transform, then projecting, we get:

cayley([U 0 ; V 0]) [Icin ; 0] = (Icout − [U 0 ; V 0] + [U 0 ; V 0]∗)(Icout + [U 0 ; V 0] − [U 0 ; V 0]∗)−1 [Icin ; 0]
= [Icin − U + U∗, V∗ ; −V, Icout−cin] [Icin + U − U∗, −V∗ ; V, Icout−cin]−1 [Icin ; 0].   (A20)

We focus on computing the inverse while keeping only the first cin columns. We use the inversion formula noted in Zhang (2006, p. 13) for a block partitioned matrix M,

M−1 [Icin ; 0] = [P, Q ; R, S]−1 [Icin ; 0] = [(M/S)−1, −(M/S)−1QS−1 ; −S−1R(M/S)−1, S−1 + S−1R(M/S)−1QS−1] [Icin ; 0] = [(M/S)−1 ; −S−1R(M/S)−1],   (A21)

where we assume M takes the form of the inverse in Eq. A20, and M/S = P − QS−1R is the Schur complement. Using this formula for the first cin columns of the inverse in Eq. A20, and computing the Schur complement Icin + U − U∗ + V∗ I−1cout−cin V, we find

cayley([U 0 ; V 0]) [Icin ; 0] = [Icin − U + U∗, V∗ ; −V, Icout−cin] [(Icin + U − U∗ + V∗V)−1 ; −V(Icin + U − U∗ + V∗V)−1]
= [(Icin − U + U∗ − V∗V)(Icin + U − U∗ + V∗V)−1 ; −2V(Icin + U − U∗ + V∗V)−1] ∈ Ccout×cin,   (A22)

which is semi-orthogonal and requires computing only one inverse of size cin ≤ cout. Note that this inverse always exists because U − U∗ is skew-Hermitian, so it has purely imaginary eigenvalues, and V∗V is positive semidefinite and has all real non-negative eigenvalues. That is, the sum Icin + U − U∗ + V∗V has all nonzero eigenvalues and is thus nonsingular.

Proposition A.8. We can also compute semi-orthogonal convolutions when cin ≥ cout using the method described above, because cayley([CT 0])T = cayley([C ; 0]).

Proof. We use that (A−1)T = (AT)−1 and (I − A)(I + A)−1 = (I + A)−1(I − A) to see

cayley([C ; 0])T = [(I − [C ; 0] + [C ; 0]T)(I + [C ; 0] − [C ; 0]T)−1]T
= (I + [C ; 0]T − [C ; 0])−1 (I − [C ; 0]T + [C ; 0])
= cayley([C ; 0]T) = cayley([CT 0]).   (A23)

We have thus shown how to (semi-)orthogonalize real multi-channel 2D circular convolutions efficiently in the Fourier domain. A minimal implementation of our method can be found in Appendix E. The techniques described above could also be used with other orthogonalization methods, or for calculating the determinants or singular values of convolutions.

B Additional Results

For KWLarge, our results on empirical robustness were mixed: while our Cayley layer outperforms BCOP in robust accuracy, the RKO methods are overall more robust by around 2%, for only a marginal decrease in clean accuracy. We note the lower empirical local Lipschitzness of RKO methods, which may explain their higher robustness: Figure 4 shows that the best choice of Lipschitz upper bound for Cayley and BCOP layers may be less than 1 for this architecture.

C Empirical runtimes

Each runtime was recorded using the autograd profiler in PyTorch (Paszke et al., 2019) by summing the CUDA execution times. The batch size was fixed at 128 for all graphs, and each data point was averaged over 32 iterations. We used an Nvidia Quadro RTX 8000.
D Additional Baseline Experiments

D.1 Robustness Experiments

The main competing orthogonal convolutional layer, BCOP (Li et al., 2019), uses Björck (Björck & Bowie, 1971) orthogonalization for internal parameter matrices; they also used it in their experiments for orthogonal fully-connected layers. Similarly to how we replaced the method in RKO with the Cayley transform for our CRKO (Cayley RKO) experiments, we replaced Björck with the Cayley transform in BCOP and used a Cayley linear layer for CayleyBCOP experiments, reported in Tables 6 and 7. We see slightly decreased performance over all metrics, similarly to the relationship between RKO and CRKO. For additional comparison, we also report on a plain convolutional baseline in Table 7. For this model, we used a plain circular convolutional layer and a Cayley linear layer, which still imparts a considerable degree of robustness. With the plain convolutional layer, the model gains a considerable degree of accuracy but loses some robustness. We did not report a plain convolutional baseline for the provable robustness experiments on KWLarge, as it would require a more sophisticated technique to bound the Lipschitz constants of each layer, which is outside the scope of our investigation.

D.2 Wasserstein Distance Estimation

We repeated the Wasserstein distance estimation experiment from Li et al. (2019), simply replacing the BCOP layer with our Cayley convolutional layer, and the Björck linear layer with our Cayley fully-connected layer. We took the best Wasserstein distance bound from one trial of each of the four learning rates considered in BCOP (0.1, 0.01, 0.001, 0.0001); see Table 8.

E Example Implementations

In PyTorch 1.8, our layer can be implemented as follows.

import numpy as np
import torch
import torch.nn as nn

def cayley(W):
    # Cayley transform of a (batch of) matrices; the rectangular case uses the
    # Schur-complement formula of Eq. A22.
    if len(W.shape) == 2:
        return cayley(W[None])[0]
    _, cout, cin = W.shape
    if cin > cout:
        return cayley(W.transpose(1, 2)).transpose(1, 2)
    U, V = W[:, :cin], W[:, cin:]
    I = torch.eye(cin, dtype=W.dtype, device=W.device)[None, :, :]
    A = U - U.conj().transpose(1, 2) + V.conj().transpose(1, 2) @ V
    inv = torch.inverse(I + A)
    return torch.cat((inv @ (I - A), -2 * V @ inv), axis=1)

class CayleyConv(nn.Conv2d):
    def fft_shift_matrix(self, n, s):
        # Phase factors that recenter the k x k kernel within the n x n FFT.
        shift = torch.arange(0, n).repeat((n, 1))
        shift = shift + shift.T
        return torch.exp(2j * np.pi * s * shift / n)

    def forward(self, x):
        cout, cin, _, _ = self.weight.shape
        batches, _, n, _ = x.shape
        if not hasattr(self, "shift_matrix"):
            s = (self.weight.shape[2] - 1) // 2
            self.shift_matrix = self.fft_shift_matrix(n, -s)[:, :(n // 2 + 1)] \
                .reshape(n * (n // 2 + 1), 1, 1).to(x.device)
        xfft = torch.fft.rfft2(x).permute(2, 3, 1, 0) \
            .reshape(n * (n // 2 + 1), cin, batches)
        wfft = self.shift_matrix * torch.fft.rfft2(self.weight, (n, n)) \
            .reshape(cout, cin, n * (n // 2 + 1)).permute(2, 0, 1).conj()
        yfft = (cayley(wfft) @ xfft).reshape(n, n // 2 + 1, cout, batches)
        y = torch.fft.irfft2(yfft.permute(3, 2, 0, 1))
        if self.bias is not None:
            y += self.bias[:, None, None]
        return y

To make the layer support stride-2 convolutions, have CayleyConv inherit from the following class instead, which depends on the einops package:

import einops

class StridedConv(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        if "stride" in kwargs and kwargs["stride"] == 2:
            args = list(args)
            args[0] = 4 * args[0]       # 4x in_channels
            args[2] = args[2] // 2      # //2 kernel_size; optional
            args = tuple(args)
        super().__init__(*args, **kwargs)
        downsample = "b c (w k1) (h k2) -> b (c k1 k2) w h"
        self.register_forward_pre_hook(lambda _, x:
            einops.rearrange(x[0], downsample, k1=2, k2=2)
            if self.stride == (2, 2) else x[0])

More details on our implementation and experiments can be found at: https://github.com/locuslab/orthogonal-convolutions.
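As a quick sanity check of the listing above (not part of the original appendix), the cayley helper can be verified directly on random matrices; the sizes are arbitrary.

import torch

# Assumes the cayley function from Appendix E is defined.
torch.manual_seed(0)
Q = cayley(torch.randn(12, 12))                  # square: orthogonal
print(torch.allclose(Q.T @ Q, torch.eye(12), atol=1e-5))

R = cayley(torch.randn(20, 12))                  # rectangular: semi-orthogonal
print(torch.allclose(R.T @ R, torch.eye(12), atol=1e-5))

B = cayley(torch.randn(5, 8, 8))                 # batched, as applied to the Fourier blocks
print(torch.allclose(B.transpose(1, 2) @ B, torch.eye(8).expand(5, 8, 8), atol=1e-5))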
1. What is the novel contribution of the paper regarding orthogonal convolutional layers?
2. What are the reviewer's concerns about the proposed method's completeness and relationship with BCOP?
3. How does the reviewer suggest comparing the computational complexity of different methods?
4. What does the reviewer think about the empirical evidence provided in the paper?
Review
Review

The paper provides another parameterization for orthogonal convolutional layers using the Cayley transform, different from BCOP. To the best of my knowledge, this parameterization is novel. However, I have a few questions regarding the proposed method.

(1) For 1D-convolutional layers, BCOP is a complete characterization. From the paper, the proposed parameterization is not complete, since the eigenvalues are all +1. While it is possible to multiply a diagonal matrix with either +1 or -1 entries, it is not clear such multiplication closes the gap. I am curious whether the composition is complete; otherwise, the proposed parameterization is strictly weaker than BCOP.

(2) For 2D-convolutional layers, I believe both BCOP and the proposed method using the Cayley transform are incomplete. So I am curious whether the proposed parameterization is a proper superset of BCOP. Without that argument, it is vague to state that the proposed method is more expressive than BCOP --- the better results could come from optimization instead of parameterization. That being said, the paper is still interesting if the proposed parameterization covers a different subset of all orthogonal 2D-convolutional layers (i.e., neither a superset nor a subset of BCOP). In this case, the authors need to characterize the difference between these two subsets. Which layer can be parameterized by the Cayley transform but not BCOP, and vice versa? If the authors can clarify these questions, I will definitely increase my score.

Other, minor comments:

(3) A comparison of computational complexities for the different methods (RKO, OSSN, SVCM, BCOP, and the Cayley transform) is desired.

(4) BCOP includes the experiment of Wasserstein distance estimation. Empirically, it is better to show the proposed method is better than BCOP in various scenarios if a theoretical justification is too hard, if not impossible.

The questions above are well addressed in the response, and I would like to increase my score.
ICLR
Title Orthogonalizing Convolutional Layers with the Cayley Transform

Abstract

Recent work has highlighted several advantages of enforcing orthogonality in the weight layers of deep networks, such as maintaining the stability of activations, preserving gradient norms, and enhancing adversarial robustness by enforcing low Lipschitz constants. Although numerous methods exist for enforcing the orthogonality of fully-connected layers, those for convolutional layers are more heuristic in nature, often focusing on penalty methods or limited classes of convolutions. In this work, we propose and evaluate an alternative approach to directly parameterize convolutional layers that are constrained to be orthogonal. Specifically, we propose to apply the Cayley transform to a skew-symmetric convolution in the Fourier domain, so that the inverse convolution needed by the Cayley transform can be computed efficiently. We compare our method to previous Lipschitz-constrained and orthogonal convolutional layers and show that it indeed preserves orthogonality to a high degree even for large convolutions. Applied to the problem of certified adversarial robustness, we show that networks incorporating the layer outperform existing deterministic methods for certified defense against ℓ2-norm-bounded adversaries, while scaling to larger architectures than previously investigated. Code is available at https://github.com/locuslab/orthogonal-convolutions.

1 Introduction

Encouraging orthogonality in neural networks has proven to yield several compelling benefits. For example, orthogonal initializations allow extremely deep vanilla convolutional neural networks to be trained quickly and stably (Xiao et al., 2018; Saxe et al., 2013). And initializations that remain closer to orthogonality throughout training seem to learn faster and generalize better (Pennington et al., 2017). Unlike Lipschitz-constrained layers, orthogonal layers are gradient-norm-preserving (Anil et al., 2019), discouraging vanishing and exploding gradients and stabilizing activations (Rodríguez et al., 2017). Orthogonality is thus a potential alternative to batch normalization in CNNs and can help to remember long-term dependencies in RNNs (Arjovsky et al., 2016; Vorontsov et al., 2017). Constraints and penalty terms encouraging orthogonality can improve generalization in practice (Bansal et al., 2018; Sedghi et al., 2018), improve adversarial robustness by enforcing low Lipschitz constants, and allow deterministic certificates of robustness (Tsuzuku et al., 2018).

Despite evidence for the benefits of orthogonality constraints, and while there are many methods to orthogonalize fully-connected layers, the orthogonalization of convolutions has posed challenges. More broadly, current Lipschitz-constrained convolutions rely on spectral normalization and kernel reshaping methods (Tsuzuku et al., 2018), which only allow loose bounds and can cause vanishing gradients. Sedghi et al. (2018) showed how to clip the singular values of convolutions and thus enforce orthogonality, but relied on costly alternating projections to achieve tight constraints. Most recently, Li et al. (2019) introduced the Block Convolution Orthogonal Parameterization (BCOP), which cannot express the full space of orthogonal convolutions. In contrast to previous work, we provide a direct, expressive, and scalable parameterization of orthogonal convolutions.
Our method relies on the Cayley transform, which is well-known for parameterizing orthogonal matrices in terms of skew-symmetric matrices, and can be easily extended to non-square weight matrices. The transform requires efficiently computing the inverse of a particular convolution in the Fourier domain, which we show works well in practice. We demonstrate that our Cayley layer is indeed orthogonal in practice when implemented in 32-bit precision, irrespective of the number of channels. Further, we compare it to alternative convolutional and Lipschitz-constrained layers: we include them in several architectures and evaluate their deterministic certifiable robustness against an ℓ2-norm-bounded adversary. Our layer provides state-of-the-art results on this task. We also demonstrate that the layers empirically endow a considerable degree of robustness without adversarial training. Our layer generally outperforms the alternatives, particularly for larger architectures.

2 Related Work

Orthogonality in neural networks. The benefits of orthogonal weight initializations for dynamical isometry, i.e., ensuring signals propagate through deep networks, are explained by Saxe et al. (2013) and Pennington et al. (2017), with limited theoretical guarantees investigated by Hu et al. (2020). Xiao et al. (2018) provided a method to initialize orthogonal convolutions, and demonstrated that it allows the training of extremely deep CNNs without batch normalization or residual connections. Further, Qi et al. (2020) developed a novel regularization term to encourage orthogonality throughout training and showed its effectiveness for training very deep vanilla networks. The signal-preserving properties of orthogonality can also help with remembering long-term dependencies in RNNs, on which there has been much work (Helfrich et al., 2018; Arjovsky et al., 2016).

One way to orthogonalize weight matrices is with the Cayley transform, which is often used in Riemannian optimization (Absil et al., 2009). Helfrich et al. (2018) and Maduranga et al. (2019) avoid vanishing/exploding gradients in RNNs using the scaled Cayley transform. Similarly, Lezcano-Casado & Martínez-Rubio (2019) use the exponential map, which the Cayley transform approximates. Li et al. (2020) derive an iterative approximation of the Cayley transform for orthogonally-constrained optimizers and show it speeds the convergence of CNNs and RNNs. However, they merely orthogonalize a matrix obtained by reshaping the kernel, which is not the same as an orthogonal convolution (Sedghi et al., 2018). Our contribution is unique here in that we parameterize orthogonal convolutions directly, as opposed to reshaping kernels.

Bounding neural network Lipschitzness. Orthogonality imposes a strict constraint on the Lipschitz constant, which itself comes with many benefits: Lower Lipschitz constants are associated with improved robustness (Yang et al., 2020) and better generalization bounds (Bartlett et al., 2017). Tsuzuku et al. (2018) showed that neural network classifications can be certified as robust to ℓ2-norm-bounded perturbations given a Lipschitz bound and sufficiently confident classifications. Along with Szegedy et al. (2013), they noted that the Lipschitz constant of neural networks can be bounded if the constants of the layers are known. Thus, there is substantial work on Lipschitz-constrained and regularized layers, which we review in Sec. 5. However, Anil et al. (2019) realized that mere Lipschitz constraints can attenuate gradients, unlike orthogonal layers.
There have been other ideas for calculating and controlling the minimal Lipschitzness of neural networks, e.g., through regularization (Hein & Andriushchenko, 2017), extreme value theory (Weng et al., 2018), or using semi-definite programming (Latorre et al., 2020; Chen et al., 2020; Fazlyab et al., 2019), but constructing bounds from Lipschitz-constrained layers is more scalable and efficient. Besides Tsuzuku et al. (2018)’s strategy for deterministic certifiable robustness, there are many approaches to deterministically verifying neural network defenses using SMT solvers (Huang et al., 2017; Ehlers, 2017; Carlini & Wagner, 2017), integer programming approaches (Lomuscio & Maganti, 2017; Tjeng & Tedrake, 2017; Cheng et al., 2017), or semi-definite programming (Raghunathan et al., 2018). Wong et al. (2018)’s approach to minimize an LP-based bound on the robust loss is more scalable, but networks made from Lipschitz-constrained components can be more efficient still, as shown by Li et al. (2019), who outperform their approach. However, none of these methods yet perform as well as probabilistic methods (Cohen et al., 2019). Consequently, orthogonal layers appear to be an important component to enhance the convergence of deep networks while encouraging robustness and generalization.

3 Background

Orthogonality. Since we are concerned with orthogonal convolutions, we review orthogonal matrices: A matrix Q ∈ Rn×n is orthogonal if QTQ = QQT = I. However, in building neural networks, layers do not always have equal input and output dimensions: more generally, a matrix U ∈ Rm×n is semi-orthogonal if UTU = I or UUT = I. Importantly, if m ≥ n, then U is also norm-preserving: ‖Ux‖2 = ‖x‖2 for all x ∈ Rn. If m < n, then the mapping is merely non-expansive (a contraction), i.e., ‖Ux‖2 ≤ ‖x‖2. A matrix having all singular values equal to 1 is orthogonal, and vice versa.

Orthogonal convolutions. The same concept of orthogonality applies to convolutional layers, which are also linear transformations. A convolutional layer conv : Rc×n×n → Rc×n×n with c = cin = cout input and output channels is orthogonal if and only if ‖conv(X)‖F = ‖X‖F for all input tensors X ∈ Rc×n×n; the notion of semi-orthogonality extends similarly for cin ≠ cout. Note that orthogonalizing each convolutional kernel as in Lezcano-Casado & Martínez-Rubio (2019); Lezcano-Casado (2019) does not yield an orthogonal (norm-preserving) convolution.

Lipschitzness under the ℓ2 norm. A consequence of orthogonality is 1-Lipschitzness. A function f : Rn → Rm is L-Lipschitz with respect to the ℓ2 norm iff ‖f(x) − f(y)‖2 ≤ L‖x − y‖2 for all x, y ∈ Rn. If L is the smallest such constant for f, then it is called the Lipschitz constant of f, denoted by Lip(f). A useful property for certifiable robustness is that the Lipschitz constant of the composition of f and g is upper-bounded by the product of their constants: Lip(f ◦ g) ≤ Lip(f)Lip(g). Since simple neural networks are fundamentally just composed functions, this allows us to bound their Lipschitz constants, albeit loosely. We can extend this idea to residual networks using the fact that Lip(f + g) ≤ Lip(f) + Lip(g), which motivates using a convex combination in residual connections. More details can be found in Li et al. (2019); Szegedy et al. (2013).

Lipschitz bounds for provable robustness. If we know the Lipschitz constant of the neural network, we can certify that a classification with a sufficiently large margin is robust to ℓ2 perturbations below a certain magnitude.
Specifically, denote the margin of a classification with label t as Mf (x) = max(0, yt −max i 6=t yi), (1) which can be interpreted as the distance between the correct logit and the next largest logit. Then if the logit function f has Lipschitz constant L, andMf (x) > √ 2L , then f(x) is certifiably robust to perturbations {δ : ‖δ‖2 ≤ }. Tsuzuku et al. (2018) and Li et al. (2019) provide proofs. 4 The Cayley transform of a Convolution Before describing our method, we first review discrete convolutions and the Cayley transform; then, we show the need for inverse convolutions and how to compute them efficiently in the Fourier domain, which lets us parameterize orthogonal convolutions via the Cayley transform. The key idea in our method is that multi-channel convolution in the Fourier domain reduces to a batch of matrix-vector products, and making each of those matrices orthogonal makes the convolution orthogonal. We describe our method in more detail in Appendix A and provide a minimal implementation in PyTorch in Appendix E. An unstrided convolutional layer with cin input channels and cout output channels has a weight tensor W of shape Rcout×cin×n×n and takes an inputX of shape Rcin×n×n to produce an output Y of shape Rcout×n×n, i.e., convW : Rcin×n×n → Rcout×n×n. It is easiest to analyze convolutions when they are circular: if the kernel goes out of bounds ofX , it wraps around to the other side—this operation can be carried out efficiently in the Fourier domain. Consequently, we focus on circular convolutions. We define convW (X) as the circular convolutional layer with weight tensor W ∈ Rcout×cin×n×n applied to an input tensor X ∈ Rcin×n×n yielding an output tensor Y = convW (X) ∈ Rcout×n×n. Equivalently, we can view convW (X) as the doubly block-circulant matrix C ∈ Rcoutn 2×cinn2 corresponding to the circular convolution with weight tensorW applied to the unrolled input tensor vecX ∈ Rcinn2×1. Similarly, we denote by convTW (X) the transpose CT of the same convolution, which can be obtained by transposing the first two channel dimensions of W and flipping each of the last two (kernel) dimensions vertically and horizontally, calling the result W ′, and computing convW ′(X). We denote conv−1W (X) as the inverse of the convolution, i.e., with corresponding matrix C−1, which is more difficult to efficiently compute. Now we review how to perform a convolution in the spatial domain. We refer to a pixel as a cin or cout-dimensional slice of a tensor, like X[:, i, j]. Each of the n2 (i, j) output pixels Y [:, i, j] are computed as follows: for each c ∈ [cout], compute Y [c, i, j] by centering the tensor W [c] on the (i, j)th pixel of the input and taking a dot product, wrapping around pixels of W that go out-ofbounds. Typically,W is zero except for a k×k region of the last two (spatial) dimensions, which we call the kernel or the receptive field. Typically, convolutional layers have small kernels, e.g., k = 3. Considering now matrices instead of tensors, the Cayley transform is a bijection between skewsymmetric matrices A and orthogonal matrices Q without −1 eigenvalues: Q = (I −A)(I +A)−1. (2) A matrix is skew-symmetric if A = −AT , and we can skew-symmetrize any square matrix B by computing the skew-symmetric partA = B−BT . The Cayley transform of such a skew-symmetric matrix is always orthogonal, which can be seen by multiplying Q by its transpose and rearranging. 
We can also apply the Cayley transform to convolutions, noting they are also linear transformations that can be represented as doubly block circulant matrices. While it is possible to construct the matrix C corresponding to a convolution convW and apply the Cayley transform to it, this is highly inefficient in practice: Convolutions can be easily skew-symmetrized by computing convW (X)− convTW (X), but finding their inverse is challenging; instead, we manipulate convolutions in the Fourier domain, taking advantage of the convolution theorem and the efficiency of the fast Fourier transform. According to the 2D convolution theorem (Jain, 1989), the circular convolution of two matrices in the Fourier domain is simply their elementwise product. We will show that the convolution theorem extends to multi-channel convolutions of tensors, in which case convolution reduces to a batch of complex matrix-vector products rather than elementwise products: inverting these smaller matrices is equivalent to inverting the convolution, and finding their skew-Hermitian part is equivalent to skew-symmetrizing the convolution, which allows us to compute the Cayley transform. We define the 2D Discrete (Fast) Fourier Transform for tensors of order ≥ 2 as a mapping FFT : Rm1×...×mr×n×n → Cm1×...×mr×n×n defined by FFT(X)[i1, ..., ir] = FnX[i1, ..., ir]Fn for il ∈ 1, ...,ml and l ∈ 1, ..., r and r ≥ 0, where Fn[i, j] = 1√n exp( −2πı n ) (i−1)(j−1). That is, we treat all but the last two dimensions as batch dimensions. We denote X̃ = FFT(X) for a tensor X . Using the convolution theorem, in the Fourier domain the cth output channel is the sum of the elementwise products of the cin input and weight channels: that is, Ỹ [c] = ∑cin k=1 W̃ [c, k] X̃[k]. Equivalently, working in the Fourier domain, the (i, j)th pixel of the cth output channel is the dot product of the (i, j)th pixel of the cth weight with the (i, j)th input pixel: Ỹ [c, i, j] = W̃ [c, :, i, j] · X̃[:, i, j]. From this, we can see that the whole (i, j)th Fourier-domain output pixel is the matrix-vector product FFT(convW (X))[:, i, j] = W̃ [:, :, i, j]X̃[:, i, j]. (3) This interpretation gives a way to compute the inverse convolution as required for the Cayley transform, assuming cin = cout: FFT(conv−1W (X))[:, i, j] = W̃ [:, :, i, j] −1X̃[:, i, j]. (4) Given this method to compute inverse convolutions, we can now parameterize an orthogonal convolution with a skew-symmetric convolution through the Cayley transform, highlighted in Algorithm 1: In line 1, we use the Fast Fourier Transform on the weight and input tensors. In line 4, we compute the Fourier domain weights for the skew-symmetric convolution (the Fourier representation is skew-Hermitian, thus the use of the conjugate transpose). Next, in lines 4–5 we compute the inverses required for FFT(conv−1I+A(x)) and use them to compute the Cayley transform written as (I+A)−1−A(I+A)−1 in line 6. Finally, we get our spatial domain result with the inverse FFT,which is always exactly real despite workingwith complexmatrices in the Fourier domain (seeAppendixA). 4.1 Properties of our approach It is important to note that the inverse in the Cayley transform always exists: Because A is skewsymmetric, it has all imaginary eigenvalues, so I + A has all nonzero eigenvalues and is thus nonsingular. Since only square matrices can be skew-symmetrized and inverted, Algorithm 1 only Algorithm 1: Orthogonal convolution via the Cayley transform. 
Input: A tensor X ∈ Rcin×n×n and convolution weightsW ∈ Rcout×cin×n×n, with cin = cout. Output: A tensor Y ∈ Rcout×n×n, the orthogonal convolution parameterized byW applied toX . 1 W̃ := FFT(W ) ∈ Ccout×cin×n×n, X̃ := FFT(X) ∈ Ccin×n×n 2 for all i, j ∈ 1, . . . , n // In parallel 3 do 4 Ã[:, :, i, j] := W̃ [:, :, i, j]− W̃ [:, :, i, j]∗ 5 Ỹ [:, i, j] := (I + Ã[:, :, i, j])−1X̃[:, i, j] 6 Z̃[:, i, j] := Ỹ [:, i, j]− Ã[:, :, i, j]Ỹ [:, i, j] 7 end 8 return FFT−1(Z̃).real works for cin = cout, but can be extended to the rectangular case where cout ≥ cin by padding the matrix with zeros and then projecting out the first cin columns after the transform, resulting in a norm-preserving semi-orthogonal matrix; the case cin ≥ cout follows similarly, but the resulting matrix is merely non-expansive. With efficient implementation in terms of the Schur complement (Appendix A.1, Eq. A22), this only requires inverting a square matrix of order min(cin, cout). We saw that learning was easier if we parameterized W in Algorithm 1 by W = gV/‖V ‖F for a learnable scalar g and tensor V , as in weight normalization (Salimans & Kingma, 2016). Comparison to BCOP.While the Block Convolution Orthogonal Parameterization (BCOP) can only express orthogonal convolutions with fixed k × k-sized kernels, a Cayley convolutional layer can represent orthogonal convolutions with a learnable kernel size up to the input size, and it does this without costly projections unlike Sedghi et al. (2018). However, our parameterization as presented is limited to orthogonal convolutions without -1 eigenvalues. Hence, our parameterization is incomplete; besides kernel size restrictions, BCOP was also demonstrated to incompletely represent the space of orthogonal convolutions, though the details of the problem were unresolved (Li et al., 2019). Our method can represent such orthogonal convolutions by multiplying the Cayley transform by a fixed diagonal matrix with ±1 entries (Gallier, 2006; Helfrich et al., 2018); however, we cannot optimize over the discrete set of such scaling matrices, so our method cannot optimize over all orthogonal convolutions, nor all special orthogonal convolutions. In our experiments, we did not find improvements from adding randomly initialized scaling matrices as in Helfrich et al. (2018). Limitations of ourmethod. As ourmethod requires computing an inverse convolution, it is generally incompatible with strided convolutions; e.g., a convolution with stride 2 cannot be inverted since it involves noninvertible downsampling. It is possible to apply our method to stride-2 convolutions by simultaneously increasing the number of output channels by 4× to compensate for the 2× downsampling of the two spatial dimensions, though we found this to be computationally inefficient. Instead, we use the invertible downsampling layer from (Jacobsen et al., 2018) to emulate striding. The convolution resulting from ourmethod is circular, which is the same as using the circular padding mode instead of zero padding in, e.g., PyTorch, and will not have a large impact on performance if subjects tend to be centered in images in the data set. BCOP (Li et al., 2019) and Sedghi et al. (2018) also restricted their attention to circular convolutions. 
Our method is substantially more expensive than plain convolutional layers, though in most practical settings it is more efficient than existing work: we plot the runtimes of our Cayley layer, BCOP, and plain convolutions in a variety of settings in Figure 6 for comparison, and we also report runtimes in Tables 4 and 5 (see Appendix C).

Runtime comparison. Our Cayley layer does cin·cout FFTs on n × n matrices (i.e., the kernels padded to the input size), and cin FFTs for each n × n input. These have complexity O(cin·cout·n² log n) and O(cout·n² log n) respectively. The most expensive step is computing the inverse of n² square matrices of order c = min(cin, cout), with complexity O(n²c³), similarly to the method of Sedghi et al. (2018). We note like the authors that parallelization could effectively make this O(n² log n + c³), and it is quite feasible in practice. As in Li et al. (2020), the inverse could be replaced with an iterative approximation, but we did not find it necessary for our relatively small architectures. For comparison, the related layers BCOP and RKO (Sec. 5) take only O(c³) to orthogonalize the convolution, and OSSN takes O(n²c³) (Li et al., 2019). In practice, we found our Cayley layer takes anywhere from 1/2× to 4× as long as BCOP, depending on the architecture (see Appendix C).

5 Experiments

Our experiments have two goals: First, we show that our layer remains orthogonal in practice. Second, we compare the performance of our layer versus alternatives (particularly BCOP) on two adversarial robustness tasks on CIFAR-10: We investigate the certifiable robustness against an ℓ2-norm-bounded adversary using the idea of Lipschitz Margin Training (Tsuzuku et al., 2018), and then we look at robustness in practice against a powerful adversary. We find that our layer is always orthogonal and performs relatively well in the robustness tasks. Separately, we show our layer improves on the Wasserstein distance estimation task from Li et al. (2019) in Appendix D.2. For alternative layers, we adopt the naming scheme for previous work on Lipschitz-constrained convolutions from Li et al. (2019), and we compare directly against their implementations. We outline the methods below.

RKO. A convolution can be represented as a matrix-vector product, e.g., using a doubly block-circulant matrix and the unrolled input. Alternatively, one could stack each k × k receptive field, and multiply by the cout × k²cin reshaped kernel matrix (Cisse et al., 2017). The spectral norm of this reshaped matrix is bounded by the convolution’s true spectral norm (Tsuzuku et al., 2018). Consequently, reshaped kernel methods orthogonalize this reshaped matrix, upper-bounding the singular values of the convolution by 1. Cisse et al. (2017) created a penalty term based on this matrix; instead, like Li et al. (2019), we orthogonalize the reshaped matrix directly, called reshaped kernel orthogonalization (RKO). They used an iterative algorithm for orthogonalization (Björck & Bowie, 1971); for comparison, we implement RKO using the Cayley transform instead of Björck orthogonalization, called CRKO.

OSSN. A prevalent idea to constrain the Lipschitz constants of convolutions is to approximate the maximum singular value and normalize it out: Miyato et al. (2018) used the power method on the matrix W associated with the convolution, i.e., s_{i+1} := W^T W s_i, and σ_max ≈ ‖W s_n‖/‖s_n‖. Gouk et al. (2018) improved upon this idea by applying the power method directly to convolutions, using the transposed convolution for W^T.
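As a concrete illustration of the spectral-normalization idea just described, the following sketch (ours, in the style of Gouk et al. (2018), not code from any of the compared implementations) estimates the largest singular value of a convolutional layer by power iteration, using the fact that conv_transpose2d with the same weights and padding applies the transpose of conv2d. The function name and sizes are placeholders.

import torch
import torch.nn.functional as F

def conv_spectral_norm(W, n=32, iters=100):
    # Power iteration on A^T A, where A is conv2d with zero padding and weights W.
    pad = W.shape[-1] // 2
    x = torch.randn(1, W.shape[1], n, n)
    for _ in range(iters):
        x = x / x.norm()
        y = F.conv2d(x, W, padding=pad)                # A x
        x = F.conv_transpose2d(y, W, padding=pad)      # A^T (A x)
    return F.conv2d(x / x.norm(), W, padding=pad).norm().item()

W = torch.randn(16, 16, 3, 3) / 10
sigma_max = conv_spectral_norm(W)
# Dividing W by sigma_max gives a convolution with spectral norm approximately 1.
print(sigma_max, conv_spectral_norm(W / sigma_max))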
However, this one-sided spectral normalization is quite restrictive; dividing out σ_max can make other singular values vanishingly small.

SVCM. Sedghi et al. (2018) showed how to exactly compute the singular values of convolutional layers using the Fourier transform before the SVD, and proposed a singular value clipping method. However, the clipped convolution can have an arbitrarily large kernel size, so they resorted to alternating projections between orthogonal convolutions and k × k-kernel convolutions, which can be expensive. Like Li et al. (2019), we found that ≈ 50 projections are needed for orthogonalization.

BCOP. The Block Convolution Orthogonal Parameterization extends the orthogonal initialization method of Xiao et al. (2018). It differentiably parameterizes k × k orthogonal convolutions with an orthogonal matrix and 2(k − 1) symmetric projection matrices. The method only parameterizes the subspace of orthogonal convolutions with k × k-sized kernels, but is quite expressive empirically. Internally, orthogonalization is done with the method by Björck & Bowie (1971).

Note that BCOP and SVCM are the only other orthogonal convolutional layers, and SVCM only for a large number of projections. RKO, CRKO, and OSSN merely upper-bound the Lipschitz constant of the layer by 1.

5.1 Training and Architectural Details

Training details. For all experiments, we used CIFAR-10 with standard augmentation, i.e., random cropping and flipping. Inputs to the model are always in the range [0, 1]; we implement normalization as a layer for compatibility with AutoAttack. For each architecture/convolution pair, we tried learning rates in {10^{−5}, 10^{−4}, 10^{−3}, 10^{−2}, 10^{−1}}, choosing the one with the best test accuracy. Most often, 0.001 is appropriate. We found that a piecewise triangular learning rate, as used in top performers in the DAWNBench competition (Coleman et al., 2017), performed best. Adam (Kingma & Ba, 2014) showed a significant improvement over plain SGD, and we used it for all experiments.

Loss function. Inspired by Tsuzuku et al. (2018), Anil et al. (2019) and Li et al. (2019) used a multi-class hinge loss where the margin is the robustness certificate √2·L·ε₀. We corroborate their finding that this works better than cross-entropy, and similarly use ε₀ = 0.5. Varying ε₀ controls a tradeoff between accuracy and robustness (see Fig. 5).

Initialization. We found that the standard uniform initialization in PyTorch performed well for our layer. We adjusted the variance, but significant differences required order-of-magnitude changes. For residual networks, we tried Fixup initialization (Zhang et al., 2019), but saw no significant improvement. We hypothesize this is due to (1) the learnable scaling parameter inside the Cayley transform, which changes significantly during training, and (2) the dynamical isometry inherent with orthogonal layers. For alternative layers, we used the initializations from Li et al. (2019).

Architecture considerations. For fair comparison with previous work, we use the “large” network from Li et al. (2019), which was first implemented in Kolter & Wong (2017)’s work on certifiable robustness. We also compare the performance of the different layers in a 1-Lipschitz-constrained version of ResNet9 (He et al., 2016) and WideResNet10-10 (Zagoruyko & Komodakis, 2016). The architectures we could investigate were limited by compute and memory, as all the layers compared are relatively expensive.
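For concreteness, one common form of the multi-class hinge loss with margin √2·L·ε₀ described in the Loss function paragraph above can be sketched as follows. This is our own minimal version; the exact variant used by Anil et al. (2019) and Li et al. (2019) may differ in details such as summing versus taking a maximum over classes.

import torch

def lipschitz_margin_hinge(logits, labels, L=1.0, eps0=0.5):
    # Penalize any wrong logit that comes within sqrt(2)*L*eps0 of the correct logit.
    margin = (2 ** 0.5) * L * eps0
    correct = logits.gather(1, labels[:, None])               # (batch, 1)
    per_class = torch.clamp(margin + logits - correct, min=0)
    mask = torch.nn.functional.one_hot(labels, logits.shape[1]).bool()
    per_class = per_class.masked_fill(mask, 0.0)              # ignore the true class
    return per_class.sum(dim=1).mean()

logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(lipschitz_margin_hinge(logits, labels))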
For RKO, OSSN, SVCM, and BCOP, we use Björck orthogonalization (Björck & Bowie, 1971) for fully-connected layers, as reported in Li et al. (2019); Anil et al. (2019). For our Cayley convolutional layer and CRKO, we orthogonalize the fully-connected layers with the Cayley transform to be consistent with our method. We found the gradient-norm-preserving GroupSort activation function from Anil et al. (2019) to be more effective than ReLU, and we used a group size of 2, i.e., MaxMin.

Strided convolutions. For the KWLarge network, we used “invertible downsampling”, which emulates striding by rearranging the inputs to have 4× more channels while halving the two spatial dimensions and reducing the kernel size to ⌊k/2⌋ (Jacobsen et al., 2018). For the residual networks, we simply used a version of pooling, noting that average pooling is still non-expansive when multiplied by its kernel size, which allows us to use more of the network’s capacity. We also halved the kernel size of the last pooling layer, instead adding another fully-connected layer; empirically, this resulted in higher local Lipschitz constants.

Ensuring Lipschitz constraints. Batch normalization layers scale their output, so they can’t be included in our 1-Lipschitz-constrained architecture; the gradient-norm-preserving properties of our layers compensate for this. We ensure residual connections are non-expansive by making them a convex combination with a new learnable parameter α, i.e., g(x) = αf(x) + (1 − α)x, for α ∈ [0, 1]. To ensure the latter constraint, we use sigmoid(α). We can tune the overall Lipschitz bound to a given L using the Lipschitz composition property, multiplying each of the m layers by L^{1/m}.

5.2 Adversarial Robustness

For certifiable robustness, we report the fraction of certifiable test points: i.e., those with classification margin M_f(x) greater than √2·L·ε, where ε = 36/255. For empirical defense, we use both vanilla projected gradient descent and AutoAttack by Croce & Hein (2020). For PGD, we use α = ε/4.0 with 10 iterations. Within AutoAttack, we use both APGD-CE and APGD-DLR, finding the decision-based attacks provided no improvements. We report on ε = 36/255 for consistency with Li et al. (2019) and previous work on deterministic certifiable robustness (Wong et al., 2018). Additionally, we found it useful to report on empirical local Lipschitz constants throughout training using the PGD-like method from Yang et al. (2020).

5.3 Results

Practical orthogonality. We show that our layer remains very close to orthogonality in practice, both before and after learning, when implemented in 32-bit precision. We investigated Cayley layers from one of our ResNet9 architectures, running them on random tensors to see if their norm is preserved, which is equivalent to orthogonality. We found that ‖Conv(x)‖/‖x‖, the extent to which our layer is gradient norm preserving, is always extremely close to 1. We illustrate the small discrepancies, easily bounded between 0.99999 and 1.00001, in Figure 1. Cayley layers which do not change or increase the number of channels are guaranteed to be orthogonal, which we see in practice for graphs (b, c, d, e). Those which decrease the number of channels can only be non-expansive, and in fact the layer seems to become slightly more norm-preserving after training (a). In short, our Cayley layer can capture the full benefits of orthogonality.
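The norm-preservation test described under Practical orthogonality can be reproduced with a few lines; the sketch below (ours, not the exact evaluation script) reports the range of ‖layer(x)‖/‖x‖ over random inputs, which should be extremely close to 1 for an orthogonal layer. The CayleyConv constructor it mentions refers to the implementation given later in Appendix E.

import torch

def norm_ratios(layer, cin, n, trials=100):
    # ||layer(x)|| / ||x|| over random inputs; exactly 1 for an orthogonal layer.
    ratios = []
    with torch.no_grad():
        for _ in range(trials):
            x = torch.randn(1, cin, n, n)
            ratios.append((layer(x).norm() / x.norm()).item())
    return min(ratios), max(ratios)

# e.g., with the CayleyConv layer from Appendix E (bias disabled so the map is linear):
# layer = CayleyConv(64, 64, 3, bias=False)
# print(norm_ratios(layer, cin=64, n=32))   # both values should be ~1.0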
Certifiable robustness. We use our layer and alternatives within the KWLarge architecture for a more direct comparison to previous work on deterministic certifiable robustness (Li et al., 2019; Wong et al., 2018). As in Li et al. (2019), we got the best performance without normalizing inputs, and can thus say that all networks compared here are at most 1-Lipschitz. Our layer outperforms BCOP on this task (see Table 1), and is thus state-of-the-art, getting on average 75.33% clean test accuracy and 59.16% certifiable robust accuracy against adversarial perturbations with norm less than ε = 36/255. In contrast, BCOP gets 75.11% test accuracy and 58.29% certifiable robust accuracy. The reshaped kernel methods perform only a percent or two worse on this task, while the spectral normalization and clipping methods lag behind. We assumed that a layer is only meaningfully better than another if both the test and robust accuracy are improved; otherwise, the methods may simply occupy different parts of the tradeoff curve.

Since reshaped kernel methods can encourage smaller Lipschitz constants than orthogonal layers (Sedghi et al., 2018), we investigated the clean vs. certifiable robust accuracy tradeoff enabled by scaling the Lipschitz upper bound of the network, visualized in Figure 2. To that end, in light of the competitiveness of RKO, we chose a Lipschitz upper bound of 0.85, which gave our Cayley layer similar test accuracy; this allowed for even higher certifiable robustness of 59.99%, but lower test accuracy of 74.35%. Overall, we were surprised by the similarity between the four top-performing methods after scaling Lipschitz constants.

We were not able to improve certifiable accuracy with ResNets. However, it was useful to increase the kernel size: we found 5 was an improvement in accuracy, while 7 and 9 were slightly worse. (Since our method operates in the Fourier domain, increases in kernel size incur no extra cost.) We also saw an improvement from scaling up the width of each layer of KWLarge, and our Cayley layer was substantially faster than BCOP as the width of KWLarge increased (see Appendix C). Multiplying the width by 3 and increasing the kernel size to 5, we were able to get 61.13% certified robust accuracy with our layer, and 60.55% with BCOP.

Empirical robustness. Previous work has shown that adversarial robustness correlates with lower Lipschitz constants. Thus, we investigated the robustness endowed by our layer against ℓ2 gradient-based adversaries. Here, we got better accuracy with the standard practice of normalizing inputs. Our layer outperformed the others in ResNet9 and WideResNet10-10 architectures; results were less decisive for KWLarge (see Appendix B). For the WideResNet, we got 82.99% clean accuracy and 73.16% robust accuracy for ε = 36/255. For comparison, the state-of-the-art achieves 91.08% clean accuracy and 72.91% robust accuracy for ε = 0.5 using a ResNet50 with adversarial training and additional unlabeled data (Augustin et al., 2020). We visualize the tradeoffs for our residual networks in Figure 3, noting that they empirically have smaller local Lipschitz constants than KWLarge. While our layer outperforms others for the default Lipschitz bound of 1, and is consistently slightly better than BCOP, RKO can perform similarly well for larger bounds. This provides some support for studies showing that hard constraints like ours may not match the performance of softer constraints, such as RKO and penalty terms (Bansal et al., 2018; Vorontsov et al., 2017).
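The certificate used throughout Section 5.2 and in the results above is simple to compute from the logits once a Lipschitz bound L for the network is known; the following sketch (ours) returns the fraction of points that are both correctly classified and certified at radius ε.

import torch

def certified_accuracy(logits, labels, L, eps=36 / 255):
    # Certified iff correct and the margin to the runner-up logit exceeds sqrt(2)*L*eps.
    preds = logits.argmax(dim=1)
    correct_logit = logits.gather(1, labels[:, None]).squeeze(1)
    others = logits.clone()
    others.scatter_(1, labels[:, None], float("-inf"))
    margin = correct_logit - others.max(dim=1).values
    certified = (preds == labels) & (margin > (2 ** 0.5) * L * eps)
    return certified.float().mean().item()

logits = 3 * torch.randn(16, 10)
labels = torch.randint(0, 10, (16,))
print(certified_accuracy(logits, labels, L=1.0))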
6 Conclusion

In this paper, we presented a new, expressive parameterization of orthogonal convolutions using the Cayley transform. Unlike previous approaches to Lipschitz-constrained convolutions, ours gives deep networks the full benefits of orthogonality, such as gradient norm preservation. We showed empirically that our method indeed maintains a high degree of orthogonality both before and after learning, and also scales better to some architectures than previous approaches. Using our layer, we were able to improve upon the state-of-the-art in deterministic certifiable robustness against an ℓ2-norm-bounded adversary, and also showed that it endows networks with considerable inherent robustness empirically. While our layer offers benefits theoretically, we observed that heuristics involving orthogonalizing reshaped kernels were also quite effective for empirical robustness. Orthogonal convolutions may only show their true advantage in gradient norm preservation for deeper networks than we investigated. In light of our experiments in scaling the Lipschitz bound, we hypothesize that not orthogonality, but instead the ability of layers such as ours to exert control over the Lipschitz constant, may be best for the robustness/accuracy tradeoff. Future work may avoid expensive inverses using approximations or the exponential map, or compare various orthogonal and Lipschitz-constrained layers in the context of very deep networks.

Acknowledgments

We thank Shaojie Bai, Chun Kai Ling, Eric Wong, and the anonymous reviewers for helpful feedback and discussions. This work was partially supported under DARPA grant number HR00112020006.

A Orthogonalizing Convolutions in the Fourier Domain

Our method relies on the fact that a multi-channel circular convolution can be block-diagonalized by a suitable Discrete Fourier Transform matrix. We show how this follows from the 2D convolution theorem (Jain, 1989, p. 145) below.

Definition A.1. Fn is the DFT matrix for sequences of length n; we drop the subscript when it can be inferred from context.

Definition A.2. We define convW(X) as in Section 4; if cin = cout = 1, we drop the channel axes, i.e., for X, W ∈ R^{n×n}, the 2D circular convolution of X with W is convW(X) ∈ R^{n×n}.

Theorem A.1. If C ∈ R^{n²×n²} represents a 2D circular convolution with weights W ∈ R^{n×n} operating on a vectorized input vec(X) ∈ R^{n²×1}, with X ∈ R^{n×n}, then it can be diagonalized as (F ⊗ F)C(F∗ ⊗ F∗) = D.

Proof. According to the 2D convolution theorem, we can implement a single-channel 2D circular convolution by computing the elementwise product of the DFT of the filter and input signals:

FWF ⊙ FXF = F convW(X) F. (A1)

This elementwise product is easier to work with mathematically if we represent it as a diagonal-matrix-vector product after vectorizing the equation:

diag(vec(FWF)) vec(FXF) = vec(F convW(X) F). (A2)

We can then rearrange this using vec(ABC) = (C^T ⊗ A) vec(B) and the symmetry of F:

diag(vec(FWF))(F ⊗ F) vec(X) = (F ⊗ F) vec(convW(X)). (A3)

Left-multiplying by the inverse of F ⊗ F and noting C vec(X) = vec(convW(X)), we get the result

(F∗ ⊗ F∗) diag(vec(FWF))(F ⊗ F) = C  ⇒  diag(vec(FWF)) = (F ⊗ F)C(F∗ ⊗ F∗), (A4)

which shows that the (doubly-block-circulant) matrix C is diagonalized by F ⊗ F. An alternate proof can be found in Jain (1989, p. 150).

Now we can consider the case where we have a 2D circular convolution C ∈ R^{cout·n²×cin·n²} with cin input channels and cout output channels. Here, C has cout × cin blocks, each of which is a circular convolution Cij ∈ R^{n²×n²}.
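Before turning to the multi-channel case, Theorem A.1 can be checked numerically in a few lines. The sketch below (ours) builds the doubly block-circulant matrix C column by column for a circular convolution defined directly through the FFT (which may differ from the cross-correlation convention of conv2d), and verifies that F ⊗ F diagonalizes it; with the unitary DFT matrix used for the similarity transform, the resulting diagonal is vec(fft2(W)) under PyTorch's default (unnormalized) FFT convention.

import torch

n = 4
W = torch.randn(n, n, dtype=torch.cfloat)
Fn = torch.fft.fft(torch.eye(n, dtype=torch.cfloat), norm="ortho")   # unitary DFT matrix

def circ_conv(W, X):
    # 2D circular convolution defined via the convolution theorem.
    return torch.fft.ifft2(torch.fft.fft2(W) * torch.fft.fft2(X))

# Build C column by column: column k is vec(conv_W(E_k)) for the k-th basis image.
cols = []
for k in range(n * n):
    E = torch.zeros(n * n, dtype=torch.cfloat)
    E[k] = 1
    cols.append(circ_conv(W, E.reshape(n, n)).reshape(-1))
C = torch.stack(cols, dim=1)

FkF = torch.kron(Fn, Fn)
D = FkF @ C @ FkF.conj().T
off_diag = (D - torch.diag(torch.diagonal(D))).abs().max()
print(off_diag)                                                       # ~0: diagonal in this basis
print(torch.allclose(torch.diagonal(D), torch.fft.fft2(W).reshape(-1), atol=1e-4))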
The input image is vec X = [vec^T X1, . . . , vec^T Xcin]^T ∈ R^{cin·n²×1}, where Xi is the ith channel of X.

Corollary A.1.1. If C ∈ R^{cout·n²×cin·n²} represents a 2D circular convolution with cin input channels and cout output channels, then it can be block diagonalized as Fcout C F∗cin = D, where Fc = S_{c,n²}(Ic ⊗ (F ⊗ F)), S_{c,n²} is a permutation matrix, Ik is the identity matrix of order k, and D is block diagonal with n² blocks of size cout × cin.

Proof. We first look at each of the blocks of C individually, referring to D̂ as the block matrix before applying the S permutations, i.e., D̂ = S^T_{cout,n²} D S_{cin,n²}, so that:

D̂ij = [(Icout ⊗ (F ⊗ F)) C (Icin ⊗ (F∗ ⊗ F∗))]ij = (F ⊗ F) Cij (F∗ ⊗ F∗) = diag(vec(F Wij F)), (A5)

where Wij are the weights of the (ij)th single-channel convolution, using Theorem A.1. That is, D̂ is a block matrix of diagonal matrices. Then, let S_{a,b} be the perfect shuffle matrix that permutes the block matrix of diagonal matrices to a block diagonal matrix. S_{a,b} can be constructed by subselecting rows of the identity matrix. Using slice notation:

S_{a,b} = [ I_{ab}(1 : b : ab, :) ; I_{ab}(2 : b : ab, :) ; . . . ; I_{ab}(b : b : ab, :) ]. (A6)

As an example, with cout = 2, cin = 3, and n² = 4, D̂ is a 2 × 3 block matrix of 4 × 4 diagonal blocks, and the shuffle permutations rearrange it into a block diagonal matrix of four 2 × 3 blocks:

D̂ = [ diag(a, b, c, d)  diag(e, f, g, h)  diag(i, j, k, l) ;
      diag(m, n, o, p)  diag(q, r, s, t)  diag(u, v, w, x) ],

S_{2,4} D̂ S^T_{3,4} = diag( [ a e i ; m q u ], [ b f j ; n r v ], [ c g k ; o s w ], [ d h l ; p t x ] ) = D. (A7)

Then, with the perfect shuffle matrix, we can compute the block diagonal matrix D as:

S_{cout,n²} D̂ S^T_{cin,n²} = S_{cout,n²} (Icout ⊗ (F ⊗ F)) C (Icin ⊗ (F∗ ⊗ F∗)) S^T_{cin,n²} = Fcout C F∗cin = D. (A8)

The effect of left- and right-multiplying with the perfect shuffle matrix is to create a new matrix D from D̂ such that [Dk]ij = [D̂ij]kk, where the subscript inside the brackets refers to the kth diagonal block and the (ij)th block respectively.

Remark. It is much simpler to compute D (here wfft) in tensor form given the convolution weights w as a cout × cin × n × n tensor: wfft = fft2(w).reshape(cout, cin, n**2).permute(2, 0, 1).

Definition A.3. The Cayley transform is a bijection between skew-Hermitian matrices and unitary matrices; for real matrices, it is a bijection between skew-symmetric and orthogonal matrices. We apply the Cayley transform to an arbitrary matrix by first computing its skew-Hermitian part: we define the function cayley : C^{m×m} → C^{m×m} by cayley(B) = (Im − B + B∗)(Im + B − B∗)^{−1}, where we compute the skew-Hermitian part of B inline as B − B∗. Note that the Cayley transform of a real matrix is always real, i.e., Im(B) = 0 ⇒ Im(cayley(B)) = 0, in which case B − B∗ = B − B^T is a skew-symmetric matrix.

We now note a simple but important fact that we will use to show that our convolutions are always exactly real despite manipulating their complex representations in the Fourier domain.

Lemma A.2. Say J ∈ C^{m×m} is unitary so that J∗J = I, and B = J B̃ J∗ for B ∈ R^{m×m} and B̃ ∈ C^{m×m}. Then cayley(B) = J cayley(B̃) J∗.

Proof. First note that B = J B̃ J∗ implies B^T = B∗ = (J B̃ J∗)∗ = J B̃∗ J∗. Then

cayley(B) = (I − B + B^T)(I + B − B^T)^{−1}
= (I − J B̃ J∗ + J B̃∗ J∗)(I + J B̃ J∗ − J B̃∗ J∗)^{−1}
= J(I − B̃ + B̃∗) J∗ [ J(I + B̃ − B̃∗) J∗ ]^{−1}
= J(I − B̃ + B̃∗) J∗ [ J(I + B̃ − B̃∗)^{−1} J∗ ]
= J(I − B̃ + B̃∗)(I + B̃ − B̃∗)^{−1} J∗ = J cayley(B̃) J∗. (A9)

For the rest of this section, we drop the subscripts of F and S when they can be inferred from context.

Theorem A.3.
When cin = cout = c, applying the Cayley transform to the block diagonal matrix D results in a real, orthogonal multi-channel 2D circular convolution: cayley(C) = F∗cayley(D)F . Proof. Note that F is unitary: FF∗ = S(Ic ⊗ (F ⊗ F ))(Ic ⊗ (F ∗ ⊗ F ∗))ST = SIcn2ST = SST = Icn2 , (A10) since S is a permutation matrix and is thus orthogonal. Then apply Lemma A.2, where we have J = F∗, B = C, and B̃ = D, to see the result. Note that cayley(C) is real because C is real; that is, even though we apply the Cayley transform to skew-Hermitian matrices in the Fourier domain, the resulting convolution is real. Remark. While we deal with skew-Hermitian matrices in the Fourier domain, we are still effectively parameterizing the Cayley transform in terms of skew-symmetric matrices: as in the note in Lemma A.2, we can see that C = F∗DF ⇒ C − CT = C − C∗ = F∗DF − F∗D∗F = F∗(D −D∗)F , (A11) where C is real, D is complex, and C − CT is skew-symmetric (in the spatial domain) despite computing it with a skew-Hermitian matrix D −D∗ in the Fourier domain. Remark. Since D is block diagonal, we only need to apply the Cayley transform (and thus invert) its n2 blocks of size c× c, which are much smaller than the whole matrix: cayley(D) = diag(cayley(D1), . . . , cayley(Dn2)). (A12) A.1 Semi-Orthogonal Convolutions In many cases, convolutional layers do not have cin = cout, in which case they cannot be orthogonal. Rather, we must resort to enforcing semi-orthogonality. We can semi-orthogonalize convolutions using the same techniques as above. Lemma A.4. Right-padding the multi-channel 2D circular convolution matrix C (from cin to cout channels) with dn2 columns of zeros is equivalent to padding each diagonal block of the corresponding block-diagonal matrix D on the right with d columns of zeros: [C 0dn2 ] = F∗ diag([D1 0d] , . . . , [Dn2 0d])F , (A13) where 0k refers to k columns of zeros and a compatible number of rows. Proof. For a fixed column j, note that [Dk]ij = 0 for all i, k ⇐⇒ [D̂ij ]kk = 0 for all i, k ⇐⇒ Cij = 0 for all i, (A14) since D̂ij = (F⊗F )Cij(F ∗⊗F ∗) = 0 onlywhenCij = 0. Apply this for j = cin+1, . . . , cin+d. Lemma A.5. Projecting out d blocks of columns of C is equivalent to projecting out d columns of each of the diagonal blocks of D: C [ Idn2 0 ] = F∗ diag ( D1 [ Id 0 ] , . . . ,Dn2 [ Id 0 ]) F (A15) Proof. This proceeds similarly to the previous lemma: removing columns of each of the n2 matrices D1, . . . ,Dn2 implies removing the corresponding blocks of columns of D̂, and thus of C. Theorem A.6. If C is a 2D multi-channel convolution with cin ≤ cout, then letting d = cout − cin, cayley ([C 0dn2 ]) [ Idn2 0 ] = F∗ diag ( cayley ([D1 0d]) [ Id 0d ] , . . . , cayley ([Dn2 0d]) [ Id 0d ]) F , (A16) which is a real 2D multi-channel semi-orthogonal circular convolution. Proof. For the first step, we use Lemma A.4 for right padding, getting [C 0dn2 ] = F∗ diag([D1 0d] , . . . , [Dn2 0d])F . (A17) Then, noting that [C 0dn2 ] is a convolution matrix with cin = cout, we can apply Theorem A.3 (and the following remark) to get: cayley ([C 0dn2 ]) = F∗ diag (cayley ([D1 0d]) , . . . , cayley ([Dn2 0d]))F . (A18) Since cayley ([C 0dn2 ]) is still a real convolution matrix, we can apply Lemma A.5 to get the result. This demonstrates that we can semi-orthogonalize convolutions with cin 6= cout by first padding them so that cin = cout; despite performing padding, the Cayley transform, and projections on complex matrices in the Fourier domain, we have shown that the resulting convolution is still real. 
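At the level of a single Fourier-domain block Dk, the padding-and-projection construction of Theorem A.6 is easy to verify numerically. The sketch below (ours) pads a rectangular complex matrix with zero columns, applies the Cayley transform to its skew-Hermitian part as in Definition A.3, projects back to the first cin columns, and checks semi-orthogonality; sizes and names are placeholders.

import torch

def cayley_full(B):
    # Cayley transform of the skew-Hermitian part of a square matrix (Definition A.3).
    A = B - B.conj().T
    I = torch.eye(B.shape[0], dtype=B.dtype)
    return (I - A) @ torch.linalg.inv(I + A)

cin, cout = 3, 5
Dk = torch.randn(cout, cin, dtype=torch.cfloat)
padded = torch.cat([Dk, torch.zeros(cout, cout - cin, dtype=torch.cfloat)], dim=1)
Q = cayley_full(padded)[:, :cin]          # pad with zero columns, transform, project
print(torch.allclose(Q.conj().T @ Q, torch.eye(cin, dtype=torch.cfloat), atol=1e-5))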
In practice, we do not literally perform padding nor projections; we explain how to do an equivalent but more efficient comptutation on each diagonal block Dk ∈ Ccout×cin below. Proposition A.7. We can efficiently compute the Cayley transform for semi-orthogonalization, i.e., cayley ([W 0d]) [ Id 0d ] , when cin ≤ cout by writing the inverse in terms of the Schur complement. Proof. We can partition W ∈ Ccout×cin into its top part U ∈ Ccin×cin and bottom part V ∈ C(cout−cin)×cin , and then write the padded matrix [W 0cout−cin ] ∈ Ccout×cout as [W 0cout−cin ] = [ U 0 V 0 ]. (A19) Taking the skew-Hermitian part and applying the Cayley transform, then projecting, we get: cayley ([ U 0V 0 ]) [ Icin 0 ] = ( Icout − [ U 0V 0 ] + [ U 0V 0 ] ∗) ( Icout + [ U 0 V 0 ]− [ U 0V 0 ] ∗)−1 [ Icin 0 ] = [ Icin−U+U ∗ V ∗ −V Icout−cin ][ Icin+U−U ∗ −V ∗ V Icout−cin ]−1[ Icin 0 ] . (A20) We focus on computing the inverse while keeping only the first cin columns. We use the inversion formula noted in Zhang (2006, p. 13) for a block partitioned matrixM , M−1 [ Icin 0 ] = [ P Q R S ]−1[ Icin 0 ] = [ (M/S)−1 −(M/S)−1QS−1 −S−1R(M/S)−1 S−1+S−1R(M/S)−1QS−1 ][ Icin 0 ] = [ (M/S)−1 −S−1R(M/S)−1 ] , (A21) where we assumeM takes the form of the inverse in Eq. A20, andM/S = P −QS−1R is the Schur complement. Using this formula for the first cin columns of the inverse in Eq. A20, and computing the Schur complement Icin + U − U∗ + V ∗I−1cout−cinV , we find cayley ([ U 0V 0 ]) = [ Icin−U+U ∗ V ∗ −V Icout−cin ][ (Icin+U−U ∗+V ∗V )−1 −V (Icin+U−U ∗+V ∗V )−1 ] = [ (Icin−U+U ∗−V ∗V )(Icin+U−U ∗+V ∗V )−1 −2V (Icin+U−U ∗+V ∗V )−1 ] ∈ Ccout×cin , (A22) which is semi-orthogonal and requires computing only one inverse of size cin ≤ cout. Note that this inverse always exists because U − U∗ is skew-Hermitian, so it has purely imaginary eigenvalues, and V ∗V is positive semidefinite and has all real non-negative eigenvalues. That is, the sum Icin + U − U∗ + V ∗V has all nonzero eigenvalues and is thus nonsingular. Proposition A.8. We can also compute semi-orthogonal convolutions when cin ≥ cout using the method described above because cayley ([ CT 0 ])T = cayley ([ C0 ]). Proof. We use that (A−1)T = (AT )−1 and (I −A)(I +A)−1 = (I +A)−1(I −A) to see cayley ([ C0 ]) T = [( I − [ C0 ] + [ C0 ] T )( I + [ C0 ]− [ C0 ] T )−1]T = ( I + [ C0 ] T − [ C0 ] )−1 ( I − [ C0 ] T + [ C0 ] ) = cayley ( [ C0 ] T ) = cayley ([ CT 0 ]) . (A23) We have thus shown how to (semi-)orthogonalize real multi-channel 2D circular convolutions efficiently in the Fourier domain. Aminimal implementation of our method can be found in Appendix E. The techniques described above could also be used with other orthogonalization methods, or for calculating the determinants or singular values of convolutions. B Additional Results For KWLarge, our results on empirical robustness were mixed: while our Cayley layer outperforms BCOP in robust accuracy, the RKO methods are overall more robust by around 2%, for only a marginal decrease in clean accuracy. We note the lower empirical local Lipschitzness of RKO methods, which may explain their higher robustness: Figure 4 shows that the best choice of Lipschitz upper-bound for Cayley and BCOP layers may be less than 1 for this architecture. C Empirical runtimes Each runtime was recorded using the autograd profiler in PyTorch (Paszke et al., 2019) by summing the CUDA execution times. The batch size was fixed at 128 for all graphs, and each data point was averaged over 32 iterations. We used a Nvidia Quadro RTX 8000. 
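The kind of timing described above can be reproduced with the PyTorch profiler; the sketch below (ours, with a plain convolution and placeholder sizes rather than the actual benchmarked layers) sums the time spent in a layer over repeated forward passes at batch size 128.

import torch
import torch.nn as nn

layer = nn.Conv2d(32, 32, 3, padding=1)
x = torch.randn(128, 32, 32, 32)
with torch.autograd.profiler.profile(use_cuda=torch.cuda.is_available()) as prof:
    for _ in range(32):
        y = layer(x)
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=5))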
D Additional Baseline Experiments

D.1 Robustness Experiments

The main competing orthogonal convolutional layer, BCOP (Li et al., 2019), uses Björck (Björck & Bowie, 1971) orthogonalization for internal parameter matrices; they also used it in their experiments for orthogonal fully-connected layers. Similarly to how we replaced the method in RKO with the Cayley transform for our CRKO (Cayley RKO) experiments, we replaced Björck with the Cayley transform in BCOP and used a Cayley linear layer for CayleyBCOP experiments, reported in Tables 6 and 7. We see slightly decreased performance over all metrics, similarly to the relationship between RKO and CRKO. For additional comparison, we also report on a plain convolutional baseline in Table 7. For this model, we used a plain circular convolutional layer and a Cayley linear layer, which still imparts a considerable degree of robustness. With the plain convolutional layer, the model gains a considerable degree of accuracy but loses some robustness. We did not report a plain convolutional baseline for the provable robustness experiments on KWLarge, as it would require a more sophisticated technique to bound the Lipschitz constants of each layer, which is outside the scope of our investigation.

D.2 Wasserstein Distance Estimation

We repeated the Wasserstein distance estimation experiment from Li et al. (2019), simply replacing the BCOP layer with our Cayley convolutional layer, and the Björck linear layer with our Cayley fully-connected layer. We took the best Wasserstein distance bound from one trial of each of the four learning rates considered in BCOP (0.1, 0.01, 0.001, 0.0001); see Table 8.

E Example Implementations

In PyTorch 1.8, our layer can be implemented as follows.

import numpy as np
import torch
import torch.nn as nn
import einops


def cayley(W):
    if len(W.shape) == 2:
        return cayley(W[None])[0]
    _, cout, cin = W.shape
    if cin > cout:
        return cayley(W.transpose(1, 2)).transpose(1, 2)
    U, V = W[:, :cin], W[:, cin:]
    I = torch.eye(cin, dtype=W.dtype, device=W.device)[None, :, :]
    A = U - U.conj().transpose(1, 2) + V.conj().transpose(1, 2) @ V
    inv = torch.inverse(I + A)
    return torch.cat((inv @ (I - A), -2 * V @ inv), axis=1)


class CayleyConv(nn.Conv2d):
    def fft_shift_matrix(self, n, s):
        shift = torch.arange(0, n).repeat((n, 1))
        shift = shift + shift.T
        return torch.exp(2j * np.pi * s * shift / n)

    def forward(self, x):
        cout, cin, _, _ = self.weight.shape
        batches, _, n, _ = x.shape
        if not hasattr(self, "shift_matrix"):
            s = (self.weight.shape[2] - 1) // 2
            self.shift_matrix = self.fft_shift_matrix(n, -s)[:, :(n//2 + 1)] \
                .reshape(n * (n // 2 + 1), 1, 1).to(x.device)
        xfft = torch.fft.rfft2(x).permute(2, 3, 1, 0) \
            .reshape(n * (n // 2 + 1), cin, batches)
        wfft = self.shift_matrix * torch.fft.rfft2(self.weight, (n, n)) \
            .reshape(cout, cin, n * (n // 2 + 1)).permute(2, 0, 1).conj()
        yfft = (cayley(wfft) @ xfft).reshape(n, n // 2 + 1, cout, batches)
        y = torch.fft.irfft2(yfft.permute(3, 2, 0, 1))
        if self.bias is not None:
            y += self.bias[:, None, None]
        return y

To make the layer support stride-2 convolutions, have CayleyConv inherit from the following class instead, which depends on the einops package:

class StridedConv(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        if "stride" in kwargs and kwargs["stride"] == 2:
            args = list(args)
            args[0] = 4 * args[0]       # 4x in_channels
            args[2] = args[2] // 2      # //2 kernel_size; optional
            args = tuple(args)
        super().__init__(*args, **kwargs)
        downsample = "b c (w k1) (h k2) -> b (c k1 k2) w h"
        self.register_forward_pre_hook(lambda _, x: \
            einops.rearrange(x[0], downsample, k1=2, k2=2) \
            if self.stride == (2, 2) else x[0])

More details on our implementation and experiments can be found at: https://github.com/locuslab/orthogonal-convolutions.
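A hypothetical usage of the implementation above: CayleyConv can be dropped in like nn.Conv2d, and when in_channels equals out_channels the resulting (circular) convolution should preserve norms up to floating-point error. The sizes below are arbitrary.

import torch

# assumes the imports and class definitions from Appendix E are in scope
conv = CayleyConv(32, 32, 3, bias=False)
x = torch.randn(4, 32, 16, 16)
with torch.no_grad():
    y = conv(x)
ratios = y.flatten(1).norm(dim=1) / x.flatten(1).norm(dim=1)
print(y.shape, ratios)   # all ratios should be ~1.0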
1. What is the main contribution of the paper regarding orthogonal convolutions?
2. What are the strengths of the proposed approach, particularly in addressing previous limitations?
3. What are the weaknesses of the method, and how do they affect the resulting convolution?
4. How does the method handle circular convolutions, and what are the implications for the resulting orthogonal convolution?
5. Are there any limitations on the convolution's stride and shape, and how do these impact the expressiveness of the method?
Review
Review

Summary
Objective: attain orthogonal convolutions. Approach: use conv_W(x) − conv_W^T(x), which is skew-symmetric, together with the Cayley map.

Strengths
[+] Using transposed convolutions to attain a "skew-symmetric" convolution is interesting.
[+] The paper constructs an orthogonal convolution instead of enforcing orthogonality on a reshaped kernel. This ensures properties like gradient norm preservation. I always thought the lack of this property was a major limitation of previous work. The paper addresses this limitation for circular convolutions.

Weaknesses
[-] The resulting convolution is circular.
[-] They need invertibility, which adds constraints on the convolution's stride and shape.
[-] The authors could more explicitly outline the above limitations wrt expressiveness.

Recommendation: Acceptance 7
[+] The paper addresses an important problem, and presents the first "real" orthogonal convolution.
[-] The orthogonal convolution is circular, and I think such limitations could be outlined more explicitly.
Before discussion: I was torn between 6 and 7. If the authors add a paragraph that explains limitations wrt circular convolution, stride and shape, I'll be happy to change to 7. That said, I'll condition this statement on first re-evaluating my opinion based on the comments from the other reviews. It is possible I am being a bit too harsh here, but it is also possible the other reviewers caught something I did not. The authors addressed my concerns and it seems my co-reviewers didn't bring new concerns to light, so I increased my score to 7.

Questions and Concerns
All my questions were addressed by the authors before I submitted my review. These questions, and their answers, should be visible to all other reviewers, the area chairs and the program committee. For completeness, I copied the questions below; please note they were already answered by the authors.

Question 1. Equations (3) and (4) use both x and X. Is this a typo, and should both be X?

Question 2. Equations (3) and (4) use FFT : R^{n×n} → C^{n×n} on X ∈ R^{c_in×n×n} and W ∈ R^{c_out×c_in×n×n}. Strictly speaking this is not well-defined; however, I suspect you just mean FFT(X)[i, :, :] := FFT(X[i, :, :]). Is my suspicion correct?

Question 3. You write "..., we see that the (i, j)'th Fourier domain output pixel is given by the matrix-vector product ...". This reminded me of the periodic convolution in section 3.2 from [0]. Is it the same, and if not, what is the difference? I'm asking to make sure I understand your method. If it is the same, I believe your method can be summarized as done below. Please let me know if this is correct/wrong. The periodic convolution c(x) from [0] is invertible, which allows us to compute the inverse of the skew-symmetric operation S(x) = c(x) − c^T(x). Since we can compute both S(x) and its inverse, we can compute the Cayley transform of S, which is orthogonal. I want to emphasize that, if this is what you're doing, I do think it is novel.

Question 4. To compute the inverse used in the Cayley transform you rewrite the convolution operation as matrix-vector multiplication in Fourier space. To do this you utilize the convolution theorem, which only works for circular convolutions. How does this affect the resulting orthogonal convolution? (a) Does it have some circular behavior? (b) Must the kernel size be equal to the input size, e.g., W.shape=(c_in, c_out, n, n) with X.shape=(c_in, n, n)? (c) Must the convolution be unstrided?
Maybe I missed it in the paper, but such restrictions were not entirely clear to me.

Additional Feedback
Fix the typo and maybe emphasize broadcasting notation in equations (3) and (4) as previously discussed.
We can also apply the Cayley transform to convolutions, noting they are also linear transformations that can be represented as doubly block circulant matrices. While it is possible to construct the matrix C corresponding to a convolution convW and apply the Cayley transform to it, this is highly inefficient in practice: Convolutions can be easily skew-symmetrized by computing convW (X)− convTW (X), but finding their inverse is challenging; instead, we manipulate convolutions in the Fourier domain, taking advantage of the convolution theorem and the efficiency of the fast Fourier transform. According to the 2D convolution theorem (Jain, 1989), the circular convolution of two matrices in the Fourier domain is simply their elementwise product. We will show that the convolution theorem extends to multi-channel convolutions of tensors, in which case convolution reduces to a batch of complex matrix-vector products rather than elementwise products: inverting these smaller matrices is equivalent to inverting the convolution, and finding their skew-Hermitian part is equivalent to skew-symmetrizing the convolution, which allows us to compute the Cayley transform. We define the 2D Discrete (Fast) Fourier Transform for tensors of order ≥ 2 as a mapping FFT : Rm1×...×mr×n×n → Cm1×...×mr×n×n defined by FFT(X)[i1, ..., ir] = FnX[i1, ..., ir]Fn for il ∈ 1, ...,ml and l ∈ 1, ..., r and r ≥ 0, where Fn[i, j] = 1√n exp( −2πı n ) (i−1)(j−1). That is, we treat all but the last two dimensions as batch dimensions. We denote X̃ = FFT(X) for a tensor X . Using the convolution theorem, in the Fourier domain the cth output channel is the sum of the elementwise products of the cin input and weight channels: that is, Ỹ [c] = ∑cin k=1 W̃ [c, k] X̃[k]. Equivalently, working in the Fourier domain, the (i, j)th pixel of the cth output channel is the dot product of the (i, j)th pixel of the cth weight with the (i, j)th input pixel: Ỹ [c, i, j] = W̃ [c, :, i, j] · X̃[:, i, j]. From this, we can see that the whole (i, j)th Fourier-domain output pixel is the matrix-vector product FFT(convW (X))[:, i, j] = W̃ [:, :, i, j]X̃[:, i, j]. (3) This interpretation gives a way to compute the inverse convolution as required for the Cayley transform, assuming cin = cout: FFT(conv−1W (X))[:, i, j] = W̃ [:, :, i, j] −1X̃[:, i, j]. (4) Given this method to compute inverse convolutions, we can now parameterize an orthogonal convolution with a skew-symmetric convolution through the Cayley transform, highlighted in Algorithm 1: In line 1, we use the Fast Fourier Transform on the weight and input tensors. In line 4, we compute the Fourier domain weights for the skew-symmetric convolution (the Fourier representation is skew-Hermitian, thus the use of the conjugate transpose). Next, in lines 4–5 we compute the inverses required for FFT(conv−1I+A(x)) and use them to compute the Cayley transform written as (I+A)−1−A(I+A)−1 in line 6. Finally, we get our spatial domain result with the inverse FFT,which is always exactly real despite workingwith complexmatrices in the Fourier domain (seeAppendixA). 4.1 Properties of our approach It is important to note that the inverse in the Cayley transform always exists: Because A is skewsymmetric, it has all imaginary eigenvalues, so I + A has all nonzero eigenvalues and is thus nonsingular. Since only square matrices can be skew-symmetrized and inverted, Algorithm 1 only Algorithm 1: Orthogonal convolution via the Cayley transform. 
Input: A tensor X ∈ Rcin×n×n and convolution weightsW ∈ Rcout×cin×n×n, with cin = cout. Output: A tensor Y ∈ Rcout×n×n, the orthogonal convolution parameterized byW applied toX . 1 W̃ := FFT(W ) ∈ Ccout×cin×n×n, X̃ := FFT(X) ∈ Ccin×n×n 2 for all i, j ∈ 1, . . . , n // In parallel 3 do 4 Ã[:, :, i, j] := W̃ [:, :, i, j]− W̃ [:, :, i, j]∗ 5 Ỹ [:, i, j] := (I + Ã[:, :, i, j])−1X̃[:, i, j] 6 Z̃[:, i, j] := Ỹ [:, i, j]− Ã[:, :, i, j]Ỹ [:, i, j] 7 end 8 return FFT−1(Z̃).real works for cin = cout, but can be extended to the rectangular case where cout ≥ cin by padding the matrix with zeros and then projecting out the first cin columns after the transform, resulting in a norm-preserving semi-orthogonal matrix; the case cin ≥ cout follows similarly, but the resulting matrix is merely non-expansive. With efficient implementation in terms of the Schur complement (Appendix A.1, Eq. A22), this only requires inverting a square matrix of order min(cin, cout). We saw that learning was easier if we parameterized W in Algorithm 1 by W = gV/‖V ‖F for a learnable scalar g and tensor V , as in weight normalization (Salimans & Kingma, 2016). Comparison to BCOP.While the Block Convolution Orthogonal Parameterization (BCOP) can only express orthogonal convolutions with fixed k × k-sized kernels, a Cayley convolutional layer can represent orthogonal convolutions with a learnable kernel size up to the input size, and it does this without costly projections unlike Sedghi et al. (2018). However, our parameterization as presented is limited to orthogonal convolutions without -1 eigenvalues. Hence, our parameterization is incomplete; besides kernel size restrictions, BCOP was also demonstrated to incompletely represent the space of orthogonal convolutions, though the details of the problem were unresolved (Li et al., 2019). Our method can represent such orthogonal convolutions by multiplying the Cayley transform by a fixed diagonal matrix with ±1 entries (Gallier, 2006; Helfrich et al., 2018); however, we cannot optimize over the discrete set of such scaling matrices, so our method cannot optimize over all orthogonal convolutions, nor all special orthogonal convolutions. In our experiments, we did not find improvements from adding randomly initialized scaling matrices as in Helfrich et al. (2018). Limitations of ourmethod. As ourmethod requires computing an inverse convolution, it is generally incompatible with strided convolutions; e.g., a convolution with stride 2 cannot be inverted since it involves noninvertible downsampling. It is possible to apply our method to stride-2 convolutions by simultaneously increasing the number of output channels by 4× to compensate for the 2× downsampling of the two spatial dimensions, though we found this to be computationally inefficient. Instead, we use the invertible downsampling layer from (Jacobsen et al., 2018) to emulate striding. The convolution resulting from ourmethod is circular, which is the same as using the circular padding mode instead of zero padding in, e.g., PyTorch, and will not have a large impact on performance if subjects tend to be centered in images in the data set. BCOP (Li et al., 2019) and Sedghi et al. (2018) also restricted their attention to circular convolutions. 
Our method is substantially more expensive than plain convolutional layers, though in most practical settings it is more efficient than existing work: We plot the runtimes of our Cayley layer, BCOP, and plain convolutions in a variety of settings in Figure 6 for comparison, and we also report runtimes in Tables 4 and 5 (see Appendix C). Runtime comparisonOur Cayley layer does cincout FFTs on n×nmatrices (i.e., the kernels padded to the input size), and cin FFTs for each n× n input. These have complexity O(cincoutn2 log n) and O(coutn2 log n) respectively. Themost expensive step is computing the inverse of n2 square matrices of order c = min(cin, cout), with complexityO(n2c3), similarly to the method of Sedghi et al. (2018). We note like the authors that parallelization could effectively make this O(n2 log n + c3), and it is quite feasible in practice. As in Li et al. (2020), the inverse could be replaced with an iterative approximation, but we did not find it necessary for our relatively small architectures. For comparison, the related layers BCOP and RKO (Sec. 5) take only O(c3) to orthogonalize the convolution, and OSSN takesO(n2c3) (Li et al., 2019). In practice, we found our Cayley layer takes anywhere from 1/2× to 4× as long as BCOP, depending on the architecture (see Appendix C). 5 Experiments Our experiments have two goals: First, we show that our layer remains orthogonal in practice. Second, we compare the performance of our layer versus alternatives (particularly BCOP) on two adversarial robustness tasks on CIFAR-10: We investigate the certifiable robustness against an `2-norm-bounded adversary using the idea of Lipschitz Margin Training (Tsuzuku et al., 2018), and then we look at robustness in practice against a powerful adversary. We find that our layer is always orthogonal and performs relatively well in the robustness tasks. Separately, we show our layer improves on the Wasserstein distance estimation task from Li et al. (2019) in Appendix D.2. For alternative layers, we adopt the naming scheme for previous work on Lipschitz-constrained convolutions from Li et al. (2019), and we compare directly against their implementations. We outline the methods below. RKO. A convolution can be represented as a matrix-vector product, e.g., using a doubly blockcirculant matrix and the unrolled input. Alternatively, one could stack each k×k receptive field, and multiply by the cout × k2cin reshaped kernel matrix (Cisse et al., 2017). The spectral norm of this reshaped matrix is bounded by the convolution’s true spectral norm (Tsuzuku et al., 2018). Consequently, reshaped kernel methods orthogonalize this reshaped matrix, upper-bounding the singular values of the convolution by 1. Cisse et al. (2017) created a penalty term based on this matrix; instead, like Li et al. (2019), we orthogonalize the reshaped matrix directly, called reshaped kernel orthogonalization (RKO). They used an iterative algorithm for orthogonalization (Björck & Bowie, 1971); for comparison, we implementRKO using the Cayley transform instead of Björck orthogonalization, called CRKO. OSSN. A prevalent idea to constrain the Lipschitz constants of convolutions is to approximate the maximum singular value and normalize it out: Miyato et al. (2018) used the power method on the matrix W associated with the convolution, i.e., si+1 := WTWsi, and σmax ≈ ‖Wsn‖/‖sn‖. Gouk et al. (2018) improved upon this idea by applying the power method directly to convolutions, using the transposed convolution forWT . 
However, this one-sided spectral normalization is quite restrictive; dividing out σ_max can make other singular values vanishingly small.
SVCM. Sedghi et al. (2018) showed how to exactly compute the singular values of convolutional layers using the Fourier transform before the SVD, and proposed a singular value clipping method. However, the clipped convolution can have an arbitrarily large kernel size, so they resorted to alternating projections between orthogonal convolutions and k × k-kernel convolutions, which can be expensive. Like Li et al. (2019), we found that ≈ 50 projections are needed for orthogonalization.
BCOP. The Block Convolution Orthogonal Parameterization extends the orthogonal initialization method of Xiao et al. (2018). It differentiably parameterizes k × k orthogonal convolutions with an orthogonal matrix and 2(k − 1) symmetric projection matrices. The method only parameterizes the subspace of orthogonal convolutions with k × k-sized kernels, but is quite expressive empirically. Internally, orthogonalization is done with the method by Björck & Bowie (1971). Note that BCOP and SVCM are the only other orthogonal convolutional layers, and SVCM only for a large number of projections. RKO, CRKO, and OSSN merely upper-bound the Lipschitz constant of the layer by 1.
5.1 Training and Architectural Details
Training details. For all experiments, we used CIFAR-10 with standard augmentation, i.e., random cropping and flipping. Inputs to the model are always in the range [0, 1]; we implement normalization as a layer for compatibility with AutoAttack. For each architecture/convolution pair, we tried learning rates in {10^−5, 10^−4, 10^−3, 10^−2, 10^−1}, choosing the one with the best test accuracy. Most often, 0.001 is appropriate. We found that a piecewise triangular learning rate, as used by top performers in the DAWNBench competition (Coleman et al., 2017), performed best. Adam (Kingma & Ba, 2014) showed a significant improvement over plain SGD, and we used it for all experiments.
Loss function. Inspired by Tsuzuku et al. (2018), Anil et al. (2019) and Li et al. (2019) used a multi-class hinge loss where the margin is the robustness certificate √2·L·ε0. We corroborate their finding that this works better than cross-entropy, and similarly use ε0 = 0.5. Varying ε0 controls a tradeoff between accuracy and robustness (see Fig. 5); a sketch of this loss is given below.
Initialization. We found that the standard uniform initialization in PyTorch performed well for our layer. We adjusted the variance, but significant differences required order-of-magnitude changes. For residual networks, we tried Fixup initialization (Zhang et al., 2019), but saw no significant improvement. We hypothesize this is due to (1) the learnable scaling parameter inside the Cayley transform, which changes significantly during training, and (2) the dynamical isometry inherent with orthogonal layers. For alternative layers, we used the initializations from Li et al. (2019).
Architecture considerations. For fair comparison with previous work, we use the "large" network from Li et al. (2019), which was first implemented in Kolter & Wong (2017)'s work on certifiable robustness. We also compare the performance of the different layers in a 1-Lipschitz-constrained version of ResNet9 (He et al., 2016) and WideResNet10-10 (Zagoruyko & Komodakis, 2016). The architectures we could investigate were limited by compute and memory, as all the layers compared are relatively expensive.
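The following is a minimal sketch of the margin hinge loss described in the "Loss function" paragraph above; the exact form used by Anil et al. (2019) and Li et al. (2019) may differ in details, so treat this as an illustration rather than their implementation.

import torch

def margin_hinge_loss(logits, targets, L=1.0, eps0=0.5):
    # Multi-class hinge loss whose margin is the robustness certificate
    # sqrt(2) * L * eps0 for an L-Lipschitz network (illustrative sketch).
    margin = (2 ** 0.5) * L * eps0
    correct = logits.gather(1, targets.unsqueeze(1))             # (B, 1)
    hinge = torch.clamp(margin - (correct - logits), min=0.0)    # (B, C)
    mask = torch.ones_like(hinge).scatter_(1, targets.unsqueeze(1), 0.0)
    return (hinge * mask).sum(dim=1).mean()

logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
margin_hinge_loss(logits, targets).backward()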
For RKO, OSSN, SVCM, and BCOP, we use Björck orthogonalization (Björck & Bowie, 1971) for fully-connected layers, as reported in Li et al. (2019); Anil et al. (2019). For our Cayley convolutional layer and CRKO, we orthogonalize the fully-connected layers with the Cayley transform to be consistent with our method. We found the gradient-norm-preserving GroupSort activation function from Anil et al. (2019) to be more effective than ReLU, and we used a group size of 2, i.e., MaxMin.
Strided convolutions. For the KWLarge network, we used "invertible downsampling", which emulates striding by rearranging the inputs to have 4× more channels while halving the two spatial dimensions and reducing the kernel size to ⌊k/2⌋ (Jacobsen et al., 2018). For the residual networks, we simply used a version of pooling, noting that average pooling is still non-expansive when multiplied by its kernel size, which allows us to use more of the network's capacity. We also halved the kernel size of the last pooling layer, instead adding another fully-connected layer; empirically, this resulted in higher local Lipschitz constants.
Ensuring Lipschitz constraints. Batch normalization layers scale their output, so they can't be included in our 1-Lipschitz-constrained architecture; the gradient-norm-preserving properties of our layers compensate for this. We ensure residual connections are non-expansive by making them a convex combination with a new learnable parameter α, i.e., g(x) = αf(x) + (1 − α)x, for α ∈ [0, 1]. To ensure the latter constraint, we use sigmoid(α). We can tune the overall Lipschitz bound to a given L using the Lipschitz composition property, multiplying each of the m layers by L^{1/m}.
5.2 Adversarial Robustness
For certifiable robustness, we report the fraction of certifiable test points, i.e., those with classification margin M_f(x) greater than √2·L·ε, where ε = 36/255. For empirical defense, we use both vanilla projected gradient descent and AutoAttack by Croce & Hein (2020). For PGD, we use α = ε/4.0 with 10 iterations. Within AutoAttack, we use both APGD-CE and APGD-DLR, finding the decision-based attacks provided no improvements. We report on ε = 36/255 for consistency with Li et al. (2019) and previous work on deterministic certifiable robustness (Wong et al., 2018). Additionally, we found it useful to report on empirical local Lipschitz constants throughout training using the PGD-like method from Yang et al. (2020).
5.3 Results
Practical orthogonality. We show that our layer remains very close to orthogonality in practice, both before and after learning, when implemented in 32-bit precision. We investigated Cayley layers from one of our ResNet9 architectures, running them on random tensors to see if their norm is preserved, which is equivalent to orthogonality. We found that ‖Conv(x)‖/‖x‖, the extent to which our layer is gradient norm preserving, is always extremely close to 1. We illustrate the small discrepancies, easily bounded between 0.99999 and 1.00001, in Figure 1. Cayley layers which do not change or increase the number of channels are guaranteed to be orthogonal, which we see in practice for graphs (b, c, d, e). Those which decrease the number of channels can only be non-expansive, and in fact the layer seems to become slightly more norm-preserving after training (a). In short, our Cayley layer can capture the full benefits of orthogonality.
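As a concrete sketch of the non-expansive residual connection described in the "Ensuring Lipschitz constraints" paragraph above (our illustration, not the training code used in the paper; the module and parameter names are placeholders):

import torch
import torch.nn as nn

class ConvexResidual(nn.Module):
    # g(x) = a*f(x) + (1 - a)*x with a = sigmoid(alpha) in [0, 1], so the
    # block stays 1-Lipschitz whenever f is 1-Lipschitz.
    def __init__(self, f: nn.Module):
        super().__init__()
        self.f = f
        self.alpha = nn.Parameter(torch.zeros(1))   # sigmoid(0) = 0.5 at init

    def forward(self, x):
        a = torch.sigmoid(self.alpha)
        return a * self.f(x) + (1 - a) * x

# Tuning the overall Lipschitz bound to L for an m-layer network amounts to
# multiplying the output of each of the m layers by L ** (1 / m).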
Certifiable robustness. We use our layer and alternatives within the KWLarge architecture for a more direct comparison to previous work on deterministic certifiable robustness (Li et al., 2019; Wong et al., 2018). As in Li et al. (2019), we got the best performance without normalizing inputs, and can thus say that all networks compared here are at most 1-Lipschitz. Our layer outperforms BCOP on this task (see Table 1), and is thus state-of-the-art, getting on average 75.33% clean test accuracy and 59.16% certifiable robust accuracy against adversarial perturbations with norm less than ε = 36/255. In contrast, BCOP gets 75.11% test accuracy and 58.29% certifiable robust accuracy. The reshaped kernel methods perform only a percent or two worse on this task, while the spectral normalization and clipping methods lag behind. We assumed that a layer is only meaningfully better than another if both the test and robust accuracy are improved; otherwise, the methods may simply occupy different parts of the tradeoff curve. Since reshaped kernel methods can encourage smaller Lipschitz constants than orthogonal layers (Sedghi et al., 2018), we investigated the clean vs. certifiable robust accuracy tradeoff enabled by scaling the Lipschitz upper bound of the network, visualized in Figure 2. To that end, in light of the competitiveness of RKO, we chose a Lipschitz upper bound of 0.85, which gave our Cayley layer similar test accuracy; this allowed for even higher certifiable robustness of 59.99%, but lower test accuracy of 74.35%. Overall, we were surprised by the similarity between the four top-performing methods after scaling Lipschitz constants. We were not able to improve certifiable accuracy with ResNets. However, it was useful to increase the kernel size: we found 5 was an improvement in accuracy, while 7 and 9 were slightly worse. (Since our method operates in the Fourier domain, increases in kernel size incur no extra cost.) We also saw an improvement from scaling up the width of each layer of KWLarge, and our Cayley layer was substantially faster than BCOP as the width of KWLarge increased (see Appendix C). Multiplying the width by 3 and increasing the kernel size to 5, we were able to get 61.13% certified robust accuracy with our layer, and 60.55% with BCOP.
Empirical robustness. Previous work has shown that adversarial robustness correlates with lower Lipschitz constants. Thus, we investigated the robustness endowed by our layer against ℓ2 gradient-based adversaries. Here, we got better accuracy with the standard practice of normalizing inputs. Our layer outperformed the others in ResNet9 and WideResNet10-10 architectures; results were less decisive for KWLarge (see Appendix B). For the WideResNet, we got 82.99% clean accuracy and 73.16% robust accuracy for ε = 36/255. For comparison, the state-of-the-art achieves 91.08% clean accuracy and 72.91% robust accuracy for ε = 0.5 using a ResNet50 with adversarial training and additional unlabeled data (Augustin et al., 2020). We visualize the tradeoffs for our residual networks in Figure 3, noting that they empirically have smaller local Lipschitz constants than KWLarge. While our layer outperforms others for the default Lipschitz bound of 1, and is consistently slightly better than BCOP, RKO can perform similarly well for larger bounds. This provides some support for studies showing that hard constraints like ours may not match the performance of softer constraints, such as RKO and penalty terms (Bansal et al., 2018; Vorontsov et al., 2017).
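For concreteness, this is one way to compute the certified robust accuracy reported above from a batch of logits; the margin threshold √2·L·ε follows Section 5.2, while the function itself is our sketch rather than the evaluation code used for the paper's numbers.

import torch

def certified_accuracy(logits, targets, L=1.0, eps=36 / 255):
    # Fraction of points whose classification margin M_f(x) exceeds the
    # certificate sqrt(2)*L*eps for an L-Lipschitz network.
    correct = logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    others = logits.scatter(1, targets.unsqueeze(1), float("-inf"))
    margin = correct - others.max(dim=1).values
    return (margin > (2 ** 0.5) * L * eps).float().mean().item()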
6 Conclusion
In this paper, we presented a new, expressive parameterization of orthogonal convolutions using the Cayley transform. Unlike previous approaches to Lipschitz-constrained convolutions, ours gives deep networks the full benefits of orthogonality, such as gradient norm preservation. We showed empirically that our method indeed maintains a high degree of orthogonality both before and after learning, and also scales better to some architectures than previous approaches. Using our layer, we were able to improve upon the state-of-the-art in deterministic certifiable robustness against an ℓ2-norm-bounded adversary, and also showed that it endows networks with considerable inherent robustness empirically. While our layer offers benefits theoretically, we observed that heuristics involving orthogonalizing reshaped kernels were also quite effective for empirical robustness. Orthogonal convolutions may only show their true advantage in gradient norm preservation for deeper networks than we investigated. In light of our experiments in scaling the Lipschitz bound, we hypothesize that not orthogonality, but instead the ability of layers such as ours to exert control over the Lipschitz constant, may be best for the robustness/accuracy tradeoff. Future work may avoid expensive inverses using approximations or the exponential map, or compare various orthogonal and Lipschitz-constrained layers in the context of very deep networks.
Acknowledgments
We thank Shaojie Bai, Chun Kai Ling, Eric Wong, and the anonymous reviewers for helpful feedback and discussions. This work was partially supported under DARPA grant number HR00112020006.
A Orthogonalizing Convolutions in the Fourier Domain
Our method relies on the fact that a multi-channel circular convolution can be block-diagonalized by a suitable Discrete Fourier Transform matrix. We show how this follows from the 2D convolution theorem (Jain, 1989, p. 145) below.
Definition A.1. F_n is the DFT matrix for sequences of length n; we drop the subscript when it can be inferred from context.
Definition A.2. We define conv_W(X) as in Section 4; if cin = cout = 1, we drop the channel axes, i.e., for X, W ∈ R^{n×n}, the 2D circular convolution of X with W is conv_W(X) ∈ R^{n×n}.
Theorem A.1. If C ∈ R^{n²×n²} represents a 2D circular convolution with weights W ∈ R^{n×n} operating on a vectorized input vec(X) ∈ R^{n²×1}, with X ∈ R^{n×n}, then it can be diagonalized as (F ⊗ F)C(F* ⊗ F*) = D.
Proof. According to the 2D convolution theorem, we can implement a single-channel 2D circular convolution by computing the elementwise product of the DFT of the filter and input signals:
(FWF) ⊙ (FXF) = F conv_W(X) F. (A1)
This elementwise product is easier to work with mathematically if we represent it as a diagonal-matrix-vector product after vectorizing the equation:
diag(vec(FWF)) vec(FXF) = vec(F conv_W(X) F). (A2)
We can then rearrange this using vec(ABC) = (C^T ⊗ A) vec(B) and the symmetry of F:
diag(vec(FWF)) (F ⊗ F) vec(X) = (F ⊗ F) vec(conv_W(X)). (A3)
Left-multiplying by the inverse of F ⊗ F and noting C vec(X) = vec(conv_W(X)), we get the result
(F* ⊗ F*) diag(vec(FWF)) (F ⊗ F) = C ⇒ diag(vec(FWF)) = (F ⊗ F) C (F* ⊗ F*), (A4)
which shows that the (doubly-block-circulant) matrix C is diagonalized by F ⊗ F. An alternate proof can be found in Jain (1989, p. 150).
Now we can consider the case where we have a 2D circular convolution C ∈ R^{cout·n²×cin·n²} with cin input channels and cout output channels. Here, C has cout × cin blocks, each of which is a circular convolution C_ij ∈ R^{n²×n²}.
The input image is vecX = [ vecT X1, . . . , vec T Xcin ]T ∈ Rcinn2×1, where Xi is the ith channel of X . Corollary A.1.1. If C ∈ Rcoutn2×cinn2 represents a 2D circular convolution with cin input channels and cout output channels, then it can be block diagonalized as FcoutCF∗cin = D, where Fc = Sc,n2 (Ic ⊗ (F ⊗ F )), Sc,n2 is a permutation matrix, Ik is the identity matrix of order k, and D is block diagonal with n2 blocks of size cout × cin. Proof. We first look at each of the blocks of C individually, referring to D̂ as the block matrix before applying the S permutations, i.e., D̂ = STcout,n2DScin,n2 , so that: D̂ij = [(Icout ⊗ (F ⊗ F )) C (Icin ⊗ (F ∗ ⊗ F ∗))]ij = (F ⊗ F )Cij(F ∗ ⊗ F ∗) = diag(vec(FWijF )), (A5) whereWij are the weights of the (ij)th single-channel convolution, using Theorem A.1. That is, D̂ is a block matrix of diagonal matrices. Then, let Sa,b be the perfect shuffle matrix that permutes the blockmatrix of diagonal matrices to a block diagonal matrix. Sa,b can be constructed by subselecting rows of the identity matrix. Using slice notation: Sa,b = Iab(1 : b : ab, :) Iab(2 : b : ab, :) ... Iab(b : b : ab, :) . (A6) As an example: S2,4 a 0 0 0 e 0 0 0 i 0 0 0 0 b 0 0 0 f 0 0 0 j 0 0 0 0 c 0 0 0 g 0 0 0 k 0 0 0 0 d 0 0 0 h 0 0 0 l m 0 0 0 q 0 0 0 u 0 0 0 0 n 0 0 0 r 0 0 0 v 0 0 0 0 o 0 0 0 s 0 0 0 w 0 0 0 0 p 0 0 0 t 0 0 0 x ︸ ︷︷ ︸ D̂ ST3,4 = a e i 0 0 0 0 0 0 0 0 0 m q u 0 0 0 0 0 0 0 0 0 0 0 0 b f j 0 0 0 0 0 0 0 0 0 n r v 0 0 0 0 0 0 0 0 0 0 0 0 c g k 0 0 0 0 0 0 0 0 0 o s w 0 0 0 0 0 0 0 0 0 0 0 0 d h l 0 0 0 0 0 0 0 0 0 p t x ︸ ︷︷ ︸ D . (A7) Then, with the perfect shuffle matrix, we can compute the block diagonal matrix D as: Scout,n2D̂S T cin,n2 = Scout,n2 (Icout ⊗ (F ⊗ F )) C (Icin ⊗ (F ∗ ⊗ F ∗))STcin,n2 = FcoutCF∗cin = D. (A8) The effect of left and right-multiplying with the perfect shuffle matrix is to create a new matrix D from D̂ such that [Dk]ij = [D̂ij ]kk, where the subscript inside the brackets refers to the kth diagonal block and the (ij)th block respectively. Remark. It is much more simple to compute D (here wfft) in tensor form given the convolution weights w as a cout × cin × n× n tensor: wfft = fft2(w).reshape(cout, cin, n**2).permute(2, 0, 1). Definition A.3. The Cayley transform is a bijection between skew-Hermitian matrices and unitary matrices; for real matrices, it is a bijection between skew-symmetric and orthogonal matrices. We apply the Cayley transform to an arbitrary matrix by first computing its skew-Hermitian part: we define the function cayley : Cm×m → Cm×m by cayley(B) = (Im − B + B∗)(Im + B − B∗)−1, where we compute the skew-Hermitian part ofB inline asB−B∗. Note that the Cayley transform of a real matrix is always real, i.e., Im(B) = 0⇒ Im(cayley(B)) = 0, in which caseB−B∗ = B−BT is a skew-symmetric matrix. We now note a simple but important fact that we will use to show that our convolutions are always exactly real despite manipulating their complex representations in the Fourier domain. Lemma A.2. Say J ∈ Cm×m is unitary so that J∗J = I , and B = JB̃J∗ for B ∈ Rm×m and B̃ ∈ Cm×m. Then cayley(B) = Jcayley(B̃)J∗. Proof. First note that B = JB̃J∗ implies BT = B∗ = (JB̃J∗)∗ = JB̃∗J∗. Then cayley(B) = (I −B +BT )(I +B −BT ) = (I − JB̃J∗ + JB̃∗J∗)(I + JB̃J∗ − JB̃∗J∗)−1 = J(I − B̃ + B̃∗)J∗ [ J(I + B̃ − B̃∗)J∗ ]−1 = J(I − B̃ + B̃∗)J∗ [ J(I + B̃ − B̃∗)−1J∗ ] = J(I − B̃ + B̃∗)(I + B̃ − B̃∗)−1J∗ = Jcayley(B̃)J∗. (A9) For the rest of this section, we drop the subscripts ofF and S when they can be inferred from context. Theorem A.3. 
When cin = cout = c, applying the Cayley transform to the block diagonal matrix D results in a real, orthogonal multi-channel 2D circular convolution: cayley(C) = F∗cayley(D)F . Proof. Note that F is unitary: FF∗ = S(Ic ⊗ (F ⊗ F ))(Ic ⊗ (F ∗ ⊗ F ∗))ST = SIcn2ST = SST = Icn2 , (A10) since S is a permutation matrix and is thus orthogonal. Then apply Lemma A.2, where we have J = F∗, B = C, and B̃ = D, to see the result. Note that cayley(C) is real because C is real; that is, even though we apply the Cayley transform to skew-Hermitian matrices in the Fourier domain, the resulting convolution is real. Remark. While we deal with skew-Hermitian matrices in the Fourier domain, we are still effectively parameterizing the Cayley transform in terms of skew-symmetric matrices: as in the note in Lemma A.2, we can see that C = F∗DF ⇒ C − CT = C − C∗ = F∗DF − F∗D∗F = F∗(D −D∗)F , (A11) where C is real, D is complex, and C − CT is skew-symmetric (in the spatial domain) despite computing it with a skew-Hermitian matrix D −D∗ in the Fourier domain. Remark. Since D is block diagonal, we only need to apply the Cayley transform (and thus invert) its n2 blocks of size c× c, which are much smaller than the whole matrix: cayley(D) = diag(cayley(D1), . . . , cayley(Dn2)). (A12) A.1 Semi-Orthogonal Convolutions In many cases, convolutional layers do not have cin = cout, in which case they cannot be orthogonal. Rather, we must resort to enforcing semi-orthogonality. We can semi-orthogonalize convolutions using the same techniques as above. Lemma A.4. Right-padding the multi-channel 2D circular convolution matrix C (from cin to cout channels) with dn2 columns of zeros is equivalent to padding each diagonal block of the corresponding block-diagonal matrix D on the right with d columns of zeros: [C 0dn2 ] = F∗ diag([D1 0d] , . . . , [Dn2 0d])F , (A13) where 0k refers to k columns of zeros and a compatible number of rows. Proof. For a fixed column j, note that [Dk]ij = 0 for all i, k ⇐⇒ [D̂ij ]kk = 0 for all i, k ⇐⇒ Cij = 0 for all i, (A14) since D̂ij = (F⊗F )Cij(F ∗⊗F ∗) = 0 onlywhenCij = 0. Apply this for j = cin+1, . . . , cin+d. Lemma A.5. Projecting out d blocks of columns of C is equivalent to projecting out d columns of each of the diagonal blocks of D: C [ Idn2 0 ] = F∗ diag ( D1 [ Id 0 ] , . . . ,Dn2 [ Id 0 ]) F (A15) Proof. This proceeds similarly to the previous lemma: removing columns of each of the n2 matrices D1, . . . ,Dn2 implies removing the corresponding blocks of columns of D̂, and thus of C. Theorem A.6. If C is a 2D multi-channel convolution with cin ≤ cout, then letting d = cout − cin, cayley ([C 0dn2 ]) [ Idn2 0 ] = F∗ diag ( cayley ([D1 0d]) [ Id 0d ] , . . . , cayley ([Dn2 0d]) [ Id 0d ]) F , (A16) which is a real 2D multi-channel semi-orthogonal circular convolution. Proof. For the first step, we use Lemma A.4 for right padding, getting [C 0dn2 ] = F∗ diag([D1 0d] , . . . , [Dn2 0d])F . (A17) Then, noting that [C 0dn2 ] is a convolution matrix with cin = cout, we can apply Theorem A.3 (and the following remark) to get: cayley ([C 0dn2 ]) = F∗ diag (cayley ([D1 0d]) , . . . , cayley ([Dn2 0d]))F . (A18) Since cayley ([C 0dn2 ]) is still a real convolution matrix, we can apply Lemma A.5 to get the result. This demonstrates that we can semi-orthogonalize convolutions with cin 6= cout by first padding them so that cin = cout; despite performing padding, the Cayley transform, and projections on complex matrices in the Fourier domain, we have shown that the resulting convolution is still real. 
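As a sanity check of Theorem A.3, one can apply the Cayley transform per frequency and confirm that the resulting map is real and norm-preserving. The sketch below is our own verification, not part of the paper's implementation; it uses the full complex FFT rather than the rfft2-based code of Appendix E.

import torch

torch.manual_seed(0)
c, n, k = 3, 8, 3
W = torch.randn(c, c, k, k, dtype=torch.double)    # real kernel, zero-padded to n x n
X = torch.randn(c, n, n, dtype=torch.double)

W_hat = torch.fft.fft2(W, s=(n, n))                # (c, c, n, n)
X_hat = torch.fft.fft2(X)                          # (c, n, n)
I = torch.eye(c, dtype=torch.complex128)

Y_hat = torch.empty_like(X_hat)
for i in range(n):
    for j in range(n):
        A = W_hat[:, :, i, j] - W_hat[:, :, i, j].conj().T   # skew-Hermitian part
        Q = torch.linalg.solve(I + A, I - A)                 # Cayley of the block
        Y_hat[:, i, j] = Q @ X_hat[:, i, j]

Y = torch.fft.ifft2(Y_hat)
print(Y.imag.abs().max().item())                              # ~0: the output is real
print((torch.linalg.norm(Y.real) / torch.linalg.norm(X)).item())  # ~1: norm preserved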
In practice, we do not literally perform padding nor projections; we explain how to do an equivalent but more efficient comptutation on each diagonal block Dk ∈ Ccout×cin below. Proposition A.7. We can efficiently compute the Cayley transform for semi-orthogonalization, i.e., cayley ([W 0d]) [ Id 0d ] , when cin ≤ cout by writing the inverse in terms of the Schur complement. Proof. We can partition W ∈ Ccout×cin into its top part U ∈ Ccin×cin and bottom part V ∈ C(cout−cin)×cin , and then write the padded matrix [W 0cout−cin ] ∈ Ccout×cout as [W 0cout−cin ] = [ U 0 V 0 ]. (A19) Taking the skew-Hermitian part and applying the Cayley transform, then projecting, we get: cayley ([ U 0V 0 ]) [ Icin 0 ] = ( Icout − [ U 0V 0 ] + [ U 0V 0 ] ∗) ( Icout + [ U 0 V 0 ]− [ U 0V 0 ] ∗)−1 [ Icin 0 ] = [ Icin−U+U ∗ V ∗ −V Icout−cin ][ Icin+U−U ∗ −V ∗ V Icout−cin ]−1[ Icin 0 ] . (A20) We focus on computing the inverse while keeping only the first cin columns. We use the inversion formula noted in Zhang (2006, p. 13) for a block partitioned matrixM , M−1 [ Icin 0 ] = [ P Q R S ]−1[ Icin 0 ] = [ (M/S)−1 −(M/S)−1QS−1 −S−1R(M/S)−1 S−1+S−1R(M/S)−1QS−1 ][ Icin 0 ] = [ (M/S)−1 −S−1R(M/S)−1 ] , (A21) where we assumeM takes the form of the inverse in Eq. A20, andM/S = P −QS−1R is the Schur complement. Using this formula for the first cin columns of the inverse in Eq. A20, and computing the Schur complement Icin + U − U∗ + V ∗I−1cout−cinV , we find cayley ([ U 0V 0 ]) = [ Icin−U+U ∗ V ∗ −V Icout−cin ][ (Icin+U−U ∗+V ∗V )−1 −V (Icin+U−U ∗+V ∗V )−1 ] = [ (Icin−U+U ∗−V ∗V )(Icin+U−U ∗+V ∗V )−1 −2V (Icin+U−U ∗+V ∗V )−1 ] ∈ Ccout×cin , (A22) which is semi-orthogonal and requires computing only one inverse of size cin ≤ cout. Note that this inverse always exists because U − U∗ is skew-Hermitian, so it has purely imaginary eigenvalues, and V ∗V is positive semidefinite and has all real non-negative eigenvalues. That is, the sum Icin + U − U∗ + V ∗V has all nonzero eigenvalues and is thus nonsingular. Proposition A.8. We can also compute semi-orthogonal convolutions when cin ≥ cout using the method described above because cayley ([ CT 0 ])T = cayley ([ C0 ]). Proof. We use that (A−1)T = (AT )−1 and (I −A)(I +A)−1 = (I +A)−1(I −A) to see cayley ([ C0 ]) T = [( I − [ C0 ] + [ C0 ] T )( I + [ C0 ]− [ C0 ] T )−1]T = ( I + [ C0 ] T − [ C0 ] )−1 ( I − [ C0 ] T + [ C0 ] ) = cayley ( [ C0 ] T ) = cayley ([ CT 0 ]) . (A23) We have thus shown how to (semi-)orthogonalize real multi-channel 2D circular convolutions efficiently in the Fourier domain. Aminimal implementation of our method can be found in Appendix E. The techniques described above could also be used with other orthogonalization methods, or for calculating the determinants or singular values of convolutions. B Additional Results For KWLarge, our results on empirical robustness were mixed: while our Cayley layer outperforms BCOP in robust accuracy, the RKO methods are overall more robust by around 2%, for only a marginal decrease in clean accuracy. We note the lower empirical local Lipschitzness of RKO methods, which may explain their higher robustness: Figure 4 shows that the best choice of Lipschitz upper-bound for Cayley and BCOP layers may be less than 1 for this architecture. C Empirical runtimes Each runtime was recorded using the autograd profiler in PyTorch (Paszke et al., 2019) by summing the CUDA execution times. The batch size was fixed at 128 for all graphs, and each data point was averaged over 32 iterations. We used a Nvidia Quadro RTX 8000. 
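The text above states that runtimes were recorded by summing CUDA execution times from the PyTorch autograd profiler; since the profiling code itself is not given, the following is one plausible way to do it (our sketch, assuming a CUDA device is available; the layer and input sizes are only examples).

import torch

def avg_cuda_time_ms(layer, x, iters=32):
    # Sum CUDA kernel times over forward + backward passes with the autograd
    # profiler and average over iterations, roughly as described in Appendix C.
    layer, x = layer.cuda(), x.cuda()
    with torch.autograd.profiler.profile(use_cuda=True) as prof:
        for _ in range(iters):
            layer(x).sum().backward()
    total_us = sum(evt.cuda_time_total for evt in prof.key_averages())
    return total_us / 1000.0 / iters

conv = torch.nn.Conv2d(32, 32, 3, padding=1, padding_mode="circular")
print(avg_cuda_time_ms(conv, torch.randn(128, 32, 32, 32)))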
D Additional Baseline Experiments
D.1 Robustness Experiments
The main competing orthogonal convolutional layer, BCOP (Li et al., 2019), uses Björck (Björck & Bowie, 1971) orthogonalization for internal parameter matrices; they also used it in their experiments for orthogonal fully-connected layers. Similarly to how we replaced the method in RKO with the Cayley transform for our CRKO (Cayley RKO) experiments, we replaced Björck with the Cayley transform in BCOP and used a Cayley linear layer for CayleyBCOP experiments, reported in Tables 6 and 7. We see slightly decreased performance over all metrics, similarly to the relationship between RKO and CRKO. For additional comparison, we also report on a plain convolutional baseline in Table 7. For this model, we used a plain circular convolutional layer and a Cayley linear layer, which still imparts a considerable degree of robustness. With the plain convolutional layer, the model gains a considerable degree of accuracy but loses some robustness. We did not report a plain convolutional baseline for the provable robustness experiments on KWLarge, as it would require a more sophisticated technique to bound the Lipschitz constants of each layer, which is outside the scope of our investigation.
D.2 Wasserstein Distance Estimation
We repeated the Wasserstein distance estimation experiment from Li et al. (2019), simply replacing the BCOP layer with our Cayley convolutional layer, and the Björck linear layer with our Cayley fully-connected layer. We took the best Wasserstein distance bound from one trial of each of the four learning rates considered in BCOP (0.1, 0.01, 0.001, 0.0001); see Table 8.
E Example Implementations
In PyTorch 1.8, our layer can be implemented as follows.

import numpy as np
import torch
import torch.nn as nn

def cayley(W):
    if len(W.shape) == 2:
        return cayley(W[None])[0]
    _, cout, cin = W.shape
    if cin > cout:
        return cayley(W.transpose(1, 2)).transpose(1, 2)
    U, V = W[:, :cin], W[:, cin:]
    I = torch.eye(cin, dtype=W.dtype, device=W.device)[None, :, :]
    A = U - U.conj().transpose(1, 2) + V.conj().transpose(1, 2) @ V
    inv = torch.inverse(I + A)
    return torch.cat((inv @ (I - A), -2 * V @ inv), axis=1)

class CayleyConv(nn.Conv2d):
    def fft_shift_matrix(self, n, s):
        shift = torch.arange(0, n).repeat((n, 1))
        shift = shift + shift.T
        return torch.exp(2j * np.pi * s * shift / n)

    def forward(self, x):
        cout, cin, _, _ = self.weight.shape
        batches, _, n, _ = x.shape
        if not hasattr(self, "shift_matrix"):
            s = (self.weight.shape[2] - 1) // 2
            self.shift_matrix = self.fft_shift_matrix(n, -s)[:, :(n//2 + 1)] \
                .reshape(n * (n // 2 + 1), 1, 1).to(x.device)
        xfft = torch.fft.rfft2(x).permute(2, 3, 1, 0) \
            .reshape(n * (n // 2 + 1), cin, batches)
        wfft = self.shift_matrix * torch.fft.rfft2(self.weight, (n, n)) \
            .reshape(cout, cin, n * (n // 2 + 1)).permute(2, 0, 1).conj()
        yfft = (cayley(wfft) @ xfft).reshape(n, n // 2 + 1, cout, batches)
        y = torch.fft.irfft2(yfft.permute(3, 2, 0, 1))
        if self.bias is not None:
            y += self.bias[:, None, None]
        return y

To make the layer support stride-2 convolutions, have CayleyConv inherit from the following class instead, which depends on the einops package:

import einops

class StridedConv(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        if "stride" in kwargs and kwargs["stride"] == 2:
            args = list(args)
            args[0] = 4 * args[0]      # 4x in_channels
            args[2] = args[2] // 2     # //2 kernel_size; optional
            args = tuple(args)
        super().__init__(*args, **kwargs)
        downsample = "b c (w k1) (h k2) -> b (c k1 k2) w h"
        self.register_forward_pre_hook(lambda _, x:
            einops.rearrange(x[0], downsample, k1=2, k2=2)
            if self.stride == (2, 2) else x[0])

More details on our implementation and experiments can be found at: https://github.com/locuslab/orthogonal-convolutions.
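A quick usage check (our example, assuming the classes above are in scope and a PyTorch build with torch.fft, i.e., version 1.8 or later): an orthogonal Cayley layer should preserve the norm of its input, matching the observation in Section 5.3.

import torch

conv = CayleyConv(16, 16, kernel_size=3, padding=1, bias=False)
x = torch.randn(4, 16, 32, 32)
with torch.no_grad():
    y = conv(x)
print((y.norm() / x.norm()).item())   # expected to be very close to 1.0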
1. What is the main contribution of the paper regarding convolutional neural networks?
2. What are the strengths and weaknesses of the proposed approach in terms of computational efficiency and robustness?
3. How does the reviewer assess the clarity and organization of the paper's content, particularly in Section 4?
4. What are the differences between the proposed method and previous approaches in terms of parametrizing non-square matrices via the Cayley transform?
5. How does the reviewer evaluate the significance of the paper's application in certifiable robustness, and what are the potential implications for future research?
6. Are there any concerns or suggestions regarding the experimental results and comparisons with other methods?
Review
Review Summary The paper uses the idea that a convolution on the Fourier space accounts just to a matrix-vector product. It then uses the Cayley map to parametrise the special orthogonal group, compute the action and send the tensor back using the inverse FFT. They test this idea in the context of certifiable robustness. Comments Page 3. "Computing a dot product". Usually the term "dot product" is used to refer to scalar products. It would be perhaps more reasonable to say that it performs a "matrix-vector product". Page 3. "Typically, W is zero except for a k × k region, which we call the kernel or the receptive field.". Here W is the matrix version of the convolution. For this reason, W is non zero in k 2 elements in every row of each matrix W [ : , : , i , j ] , not in a k × k region. Although I might have misunderstood this... The operator conv is not defined anywhere. In general, it would be very helpful to say where does each object live in page 3, as W seems to be used as both an element of R c o u t × c i n × n × n and later as a square matrix. This is quite confusing. See also 7) Page 3. Cite some standard reference where a formal statement of the convolution theorem is given. Page 3. Mention that F n ∗ denotes the conjugate transpose. Eq. 3. Explain how are x and X related Similar to 3), FFT : R n × n → C n × n , but in Algorithm 1) this is applied to tensors. In this particular case one can hint what is going on, but again, it is unnecessarily difficult to track what lives what conventions are being used when an operator is applied to a tensor. This should be made explicit somewhere. Page 4. "However, it can be extended to all orthogonal convolutions by multiplying by a diagonal matrix with ± 1 entries (Gallier, 2006; Helfrich et al., 2018).". This is not true. The correct theorem says that "for every B ∈ O ( n ) there exists a diagonal matrix E with entries in − 1 , 1 such that E cay ( X ) = B for some X ∈ Skew ( n ) . Note that the matrix E depends on the matrix B . As such, to cover the whole O ( n ) , (or SO ( n ) ) one would have to optimise over this discrete set, which is not easy. Could the authors comment on how their method to parametrise non-square matrices via the Cayley transform relates to that used in the classical paper: Zaiwen Wen and Wotao Yin. A feasible method for optimization with orthogonality constraints.Mathematical Programming, 142(1-2):397–434, 2013. Note that the standard method would account for lifting the problem from St ( n , k ) to SO n , compute the Cayley map and then just take the first k columns. Their method is slightly different. Is your method one of these two or is it different to both of them? I believe that this discussion should be in the main paper in the related work section. Just out of curiosity, how expensive is this method when compared with an unconstrained convolutional layer in wall-clock time? Experiments What is the reason for using Björck & Bowie (1971) to orthogonalise the layers in the other examples? This is a very bad way to do so which leads to some terrible dynamics. I believe that Cayley should be used throughout all the experiments. Could the authors report the results of the other methods while using Cayley also for them? Conclusion This is a very strong paper, with a very nice application of orthogonalisation in the context of CNNs. I think that the presentation of the content in the paper is very good. The only thing I did find quite confusing is the lack of definitions in section 4. 
I would be happy to increase my score provided that the problems raised here are addressed, in particular that of the experiments.
ICLR
Title Picking Daisies in Private: Federated Learning from Small Datasets Abstract Federated learning allows multiple parties to collaboratively train a joint model without sharing local data. This enables applications of machine learning in settings of inherently distributed, undisclosable data such as in the medical domain. In practice, joint training is usually achieved by aggregating local models, for which local training objectives have to be in expectation similar to the joint (global) objective. Often, however, local datasets are so small that local objectives differ greatly from the global objective, resulting in federated learning to fail. We propose a novel approach that intertwines model aggregations with permutations of local models. The permutations expose each local model to a daisy chain of local datasets resulting in more efficient training in data-sparse domains. This enables training on extremely small local datasets, such as patient data across hospitals, while retaining the training efficiency and privacy benefits of federated learning. 1 INTRODUCTION How can we learn high quality models when data is inherently distributed into small parts that cannot be shared or pooled, as we for example often encounter in the medical domain (Rieke et al., 2020)? Federated learning solves many but not all of these problems. While it can achieve good global models without disclosing any of the local data, it does require sufficient data to be available at each site in order for the locally trained models to achieve a minimum quality. In many relevant applications, this is not the case: in healthcare settings we often have as little as a few dozens of samples (Granlund et al., 2020; Su et al., 2021; Painter et al., 2020), but also domains where DL is generally regarded as highly successful, such as natural language processing and object detection often suffer from a lack of data (Liu et al., 2020; Kang et al., 2019). In this paper, we present an elegant idea in which models are moved around iteratively and passed from client to client, thus forming a daisy-chain that the model traverses. This daisy-chaining allows us to learn from such small, distributed datasets simply by consecutively training the model with the data availalbe at each site. We should not do this naively, however, since it would not only lead to overfitting – a common problem in federated learning which can cause learning to diverge (Haddadpour and Mahdavi, 2019) – but also violate privacy, since a client can infer from a model upon the data of the client it received it from (Shokri et al., 2017). To alleviate these issues, we propose an approach to combine daisy-chaining of local datasets with aggregation of models orchestrated by a coordinator, which we term federated daisy-chaining (FEDDC). In a nutshell, in a daisy-chain round, local models are send to a coordinator and randomly redistributed to clients, without aggregation. Thereby, each individual model follows its own random daisy-chain of clients. In an aggregation round, models are aggregated and redistributed, as in standard federated learning. Our approach maintains privacy of local datasets, while it provably guarantees improvement of model quality of convex models with a suitable aggregation method which standard federated learning cannot. For non-convex models such as convolutional neural networks, it improves the performance upon the state-of-the-art on standard benchmark and medical datasets. 
Formally, we show that FEDDC allows convergences on datasets so small that standard federated learning diverges by analyzing aggregation via the Radon point from a PAC-learning perspective. We substantiate this theoretical analysis by showing that FEDDC in practice matches the accuracy of a model trained on the full data of the SUSY binary classification dataset, beating standard federated learning by a wide margin. In fact, FEDDC allows us to achieve optimal model quality with only 2 samples per client. In an extensive empirical evaluation, we then show that FEDDC outperforms vanilla federated learning (McMahan et al., 2017), naive daisy-chaining, and FedProx (Li et al., 2020a) on the benchmark dataset CIFAR10 (Krizhevsky, 2009), and more importantly on two realworld medical datasets. In summary, our contributions are as follows. • FEDDC, an elegant novel approach to federated learning from small datasets via a combination of daisy-chaining and aggregation, • a theoretical guarantee that FEDDC improves models in terms of , δ-guarantees, which standard federated averaging can not, • a thorough discussion of the privacy aspects and mitigations suitable for FEDDC, including an empirical evaluation of differentially private FEDDC, and • an extensive set of experiments showing that FEDDC substantially improves model quality for small datasets, being able to train ResNet18 on a pneumonia dataset on as little as 8 samples per client. 2 RELATED WORK Learning from small datasets is a well studied problem in machine learning. In the literature, we find among others general solutions, such as using simpler models, and transfer learning (Torrey and Shavlik, 2010), to more specialized ones, such as data augmentation (Ibrahim et al., 2021) and fewshot learning (Vinyals et al., 2016; Prabhu et al., 2019). In our scenario, however, data is abundant, but the problem is that the local datasets at each site are small and cannot be pooled. Federated learning and its variants have been shown to learn from incomplete local data sources, e.g., non-iid label distributions (Li et al., 2020a; Wang et al., 2019) and differing feature distributions (Li et al., 2020b; Reisizadeh et al., 2020a), but were proven to fail in case of large gradient diversity (Haddadpour and Mahdavi, 2019) and too dissimilar label distribution (Marfoq et al., 2021). For very small datasets, local empirical distributions may vary greatly from the global data distribution—while the difference of empirical to true distribution decreases exponentially with the sample size (e.g., according to the Dvoretzky–Kiefer–Wolfowitz inequality), for small sample sizes the difference can be substantial, in particular if the data distribution differs from a Normal distribution (Kwak and Kim, 2017). FedProx (Li et al., 2020a) is a variant of federated learning that is particularly suitable for tackling non-iid data distributions. It increases training stability by adding a momentum-like proximal term to the objective functions. This increase in stability, however, comes at the cost of not being privacypreserving anymore (Rahman et al., 2021). We compare FEDDC to FedProx in Section 7. We can reduce sample complexity by training networks only partially, e.g., by collaboratively training only a shared part of the model. This approach allows training client-specific models in the medical domain (Yang et al., 2021), but by design cannot train a global model. 
Kiss and Horvath (2021) propose a decentralized and communication-efficient variant of federated learning that migrates models over a decentralized network and stores incoming models locally at each client until sufficiently many models are collected on each client for an averaging step, similar to Gossip federated learing (Jelasity et al., 2005). The variant without averaging is similar to simple daisy-chaining which we compare to in Section 7. FEDDC is compatible with any aggregation operator, including the Radon point (Kamp et al., 2017) and the geometric median (Pillutla et al., 2019). It can also be straightforwardly combined with approaches to improve communication-efficiency, such as dynamic averaging (Kamp et al., 2018), and model quantization (Reisizadeh et al., 2020b). 3 PRELIMINARIES We assume iterative learning algorithms (cf. Chp. 2.1.4 Kamp, 2019) A : X × Y × H → H that update a model h ∈ H using a dataset D ⊂ X × Y from an input space X and output space Y , i.e., ht+1 = A(D,ht). Given a set of m ∈ N clients with local datasets D1, . . . , Dm ⊂ X × Y drawn iid from a data distribution D and a loss function ` : Y × Y → R, the goal is to find a single model h∗ ∈ H that minimizes the risk ε(h) = E(x,y)∼D [ `(h(x), y) ] . (1) In centralized learning, the datasets are pooled as D = ⋃ i∈[m]D i and A is applied to D until convergence. Note that applying A on D can be the application to any random subset, e.g., as in mini-batch training, and convergence is measured in terms of low training loss, small gradient, or small deviation from previous iterate. In standard federated learning (McMahan et al., 2017), A is applied in parallel for b ∈ N rounds on each client locally to produce local models h1, . . . , hm. These models are then centralized and aggregated using an aggregation operator agg : Hm → H, i.e., h = agg(h1, . . . , hm). The aggregated model h is then redistributed to local clients which perform another b rounds of training using h as a starting point. This is iterated until convergence of h. In the following section, we describe FEDDC. 4 METHOD We propose federated daisy chaining as an extension to federated learning and hence assume a setup where we have m clients and one designated coordinator node.1 We provide pseudocode of our approach as Algorithm 1. The client Each client trains its local model in each round on local data (line 4), and sends its model to the coordinator every b rounds for aggregation, where b is the aggregation period, and every d rounds for daisy chaining, where d is the daisy-chaining period (line 6). This re-distribution of models results in each individual model following a daisy-chain of clients, training on each local dataset. Such a daisy-chain is interrupted by each aggregation round. The coordinator Upon receiving models (line 10), in a daisy-chaining round (line 11) the coordinator draws a random permutation π of clients (line 12) and re-distributes the model of client i to client π(i) (line 13), while in an aggregation round (line 15), the coordinator instead aggregates all local models (line 16) and re-distributes the aggregate to all clients (line 17). Communication complexity Communication between clients and coordinator happens in O( tmaxd + tmax b ) rounds, where tmax is the overall number of rounds. Although inherently higher than in plain federated learning, the overall amount of communication in daisy chained federated learning is still low. 
In particular, in each communication round, each client sends and receives only a single model from the coordinator. The amount of communication per communication round is thus linear in the number of clients and model size, similar to federated averaging. In the following section we show that the additional daisy-chaining rounds ensure convergence for small datasets in terms of PAC-like (ε, δ)-guarantees.
5 THEORY
Next, we theoretically analyze the key properties of FEDDC in terms of PAC-like (ε, δ)-guarantees. For that, we make the following assumption on the learning algorithm A.
Assumption 1 ((ε, δ)-guarantees). The learning algorithm A applied on any dataset drawn iid from D of size n ≥ n0 ∈ N produces a model h ∈ H such that with probability δ ∈ (0, 1] it holds for ε > 0 that P(ε(h) > ε) < δ. The sample size n0 is a monotone function in δ and ε, i.e., for fixed ε it is monotonically increasing with δ and for fixed δ it is monotonically decreasing with ε (note that typically n0 is a polynomial in ε^{-1} and log(δ^{-1})).
1 This star-topology can be extended to hierarchical networks in a straightforward manner. Federated learning can also be performed in a decentralized network via gossip algorithms (Jelasity et al., 2005).
Algorithm 1 Federated Daisy-Chaining FEDDC
Require: daisy-chaining period d, aggregation period b, learning algorithm A, aggregation operator agg, m clients with local datasets D^1, . . . , D^m
1: initialize local models h_0^1, . . . , h_0^m
2: at local client i in round t
3:   draw random set of samples S from local dataset D^i
4:   h_t^i ← A(S, h_{t-1}^i)
5:   if t % d = d − 1 or t % b = b − 1 then
6:     send h_t^i to coordinator
7:   end if
8:
9: at coordinator in round t
10: receive models h_t^1, . . . , h_t^m
11: if t % d = d − 1 then
12:   draw permutation π of [1, m] at random
13:   for all i ∈ [m] send model h_t^i to client π(i)
14: end if
15: if t % b = b − 1 then
16:   h_t ← agg(h_t^1, . . . , h_t^m)
17:   send h_t to all clients
18: end if
Here ε(h) is the risk defined in Equation 1. We will show that aggregation for small local datasets can diverge and that daisy-chaining can prevent this. For this, we analyze the development of (ε, δ)-guarantees on model quality when aggregating local models with and without daisy-chaining. It is an open question how such an (ε, δ)-guarantee develops when averaging local models. Existing work analyzes convergence (Haddadpour and Mahdavi, 2019; Kamp et al., 2018) or regret (Kamp et al., 2014) and thus gives no generalization bound. Recent work on generalization bounds for federated averaging via the NTK-framework (Huang et al., 2021) is promising, but not directly compatible with daisy-chaining: the analysis of Huang et al. (2021) requires local datasets to be disjoint, which would be violated by a daisy-chaining round. Using the Radon point (Radon, 1921) as aggregation operator, however, does permit analyzing the development of (ε, δ)-guarantees. In particular, it was shown that for fixed ε the probability of bad models is reduced doubly exponentially (Kamp et al., 2017) when we aggregate models using the (iterated) Radon point (Clarkson et al., 1996). Here, a Radon point of a set of points S from a space X is, similar to the geometric median, a point in the convex hull of S with a high centrality (more precisely, a Tukey depth (Tukey, 1975; Gilad-Bachrach et al., 2004) of at least 2). For a Radon point to exist, the size of S has to be sufficiently large; the minimum size of S ⊂ X is denoted the Radon number of the space X, and for X ⊆ R^d the Radon number is d + 2.
Let r ∈ N be the Radon number of H, A be a learning algorithm as in Assumption 1, and ε be convex. Assume m ≥ r^h many clients with h ∈ N. For ε > 0, δ ∈ (0, 1], assume local datasets D^1, . . . , D^m of size larger than n0(ε, δ) drawn iid from D, and let h_1, . . . , h_m be local models trained on them using A. Let r_h be the iterated Radon point with h iterations computed on the local models. Then it follows from Theorem 3 in Kamp et al. (2017) that for all i ∈ [m] it holds that
P(ε(r_h) > ε) ≤ (r P(ε(h_i) > ε))^{2^h} (2)
where the probability is over the random draws of local datasets. This implies that the iterated Radon point only improves over the local models if δ < r^{-1}. Consequently, local models need to achieve a minimum quality for the federated learning system to converge.
Corollary 2. Given a model space H with Radon number r ∈ N, convex risk ε, and a learning algorithm A with sample size n(ε, δ). Given ε > 0 and any h ∈ N, if local datasets D^1, . . . , D^m with m ≥ r^h are smaller than n0(ε, r^{-1}), then federated learning using the Radon point does not improve model quality in terms of (ε, δ)-guarantees.
In other words, when using aggregation by Radon points alone, an improvement in terms of (ε, δ)-guarantees is strongly dependent on large enough local datasets. Furthermore, given δ > r^{-1}, the guarantee can become arbitrarily bad by increasing the number of aggregation rounds. Federated Daisy-Chaining as given in Algo. 1 permutes local models at random, which is in theory equivalent to permuting local datasets. This way, the amount of data visible to each model is increased. Since the permutation is drawn at random, the minimum amount of distinct local samples observed by each model can be guaranteed with high probability.
Lemma 3. Given δ ∈ (0, 1], m ∈ N clients, and k ∈ [m], if Algorithm 1 with daisy-chaining period d ∈ N is run for T ∈ N rounds with
T ≥ d ln δ / ( ln((m−1)/m) (m − k + 1) m ),
then each local model has seen at least k distinct datasets with probability 1 − δ.
Proof. For m clients with m local datasets, the chance of a client i to not see dataset j after τ many permutations is ((m−1)/m)^τ. The probability that each of the m clients is not seeing m − k + 1 other datasets is hence
∏_{j=1}^{m−k+1} ((m−1)/m)^τ = ((m−1)/m)^{τ(m−k+1)},
and corresponds to the probability of each client seeing less than k distinct other datasets. The probability of all clients seeing at least k distinct datasets is hence at least
1 − ((m−1)/m)^{τ(m−k+1)m} ≥ 1 − δ ⇔ ((m−1)/m)^{τ(m−k+1)m} ≤ δ.
Taking the logarithm on both sides with base (m−1)/m < 1 yields
τ(m − k + 1)m ≥ ln δ / ln((m−1)/m).
Dividing by (m − k + 1)m and observing that τ many daisy-chaining rounds with period d require T = τd total rounds yields the result.
From Lm. 3 it follows that when we perform daisy-chaining with m clients, and local datasets of size n, for at least d ln δ ((ln(m−1) − ln(m))(m − k + 1)m)^{-1} rounds, each local model will with probability at least 1 − δ be trained on at least kn samples.
Proposition 4. Given a model space H with Radon number r ∈ N, convex risk ε, and a learning algorithm A with sample size n(ε, δ). Given ε > 0, δ ∈ (0, r^{-1}), and any h ∈ N, if local datasets D^1, . . . , D^m are of size n ∈ N with m ≥ r^h, then Alg. 1 using the Radon point with
b ≥ d ln δ / ( ln((m−1)/m) (m − n0(ε, δ)/n + 1) m )
improves model quality in terms of (ε, δ)-guarantees.
Proof.
The number of daisy-chaining rounds before computing a Radon point ensure that with probability 1−δ all local models are trained on at least kn samples with k = n0( , δ)/n, i.e., each model is trained on at least n0( , δ) samples and thus an ( , δ)-guarantee holds for each model. Since δ < r−1, this guarantee is improved as detailed in Eq. (2). To support this theoretical result, we compare FEDDC using the iterated Radon point with standard federated learning on the SUSY binary classification dataset (Baldi et al., 2014), training a linear model on 441 clients with only 2 samples per client. The results in Figure 1 show that after 500 rounds FEDDC reached the test accuracy of a model that has been trained on the centralized dataset (ACC=0.77) beating federated learning by a large margin (ACC=0.65). Before further investigating FEDDC empirically in Section 7, we discuss the privacy-aspects of FEDDC in the following section. 6 PRIVACY A major benefit of federated learning is that data remains undisclosed on the local clients and only model parameters are exchanged. It is, however, possible to infer upon local data given model parameters (Ma et al., 2020). In classical federated learning there are two types of attacks that would allow such inference: (i) an attacker intercepting the communication of a client with the coordinator obtaining model updates to infer upon the clients data, and (ii) a malicious coordinator obtaining models to infer upon the data of each client. A malicious client cannot learn about other clients data, since it only obtains the average of all local models. In federated daisychaining there is a third possible attack: (iii) a malicious client obtaining model updates from another client to infer upon its data. In the following, we discuss potential defenses against these three types of attacks in more detail. Note that we limit the discussion on attacks that aim at inferring upon local data, thus breaching data privacy. For a discussion of attacks that aim to poison the learning process (Bhagoji et al., 2019) or create backdoors (Sun et al., 2019) for adversarial examples, we refer to Lyu et al. (2020). A general and wide-spread approach to tackle all three possible attack types is to add noise to the model parameters before sending. Using appropriate clipping and noise, this guarantees , δdifferential privacy for local data (Wei et al., 2020) at the cost of a slight-to-moderate loss in model quality. Another approach to tackle an attack on communication (i) is to use encrypted communication. One can also protect against a malicious coordinator (ii) by using homomorphic encryption that allows the coordinator to average models without decrypting them (Zhang et al., 2020). This, however, only works for particular aggregation operators and does not allow to perform daisy-chaining. Secure daisy-chaining in the presence of a malicious coordinator (ii) can, however, be performed using asymmetric encryption. Assume each client creates a public-private key pair and shares the public key with the coordinator. To avoid the malicious coordinator to send clients its own public key and act as a man in the middle, public keys have to be announced (e.g., by broadcast). While this allows sending clients to identify the recipient of their model, no receiving client can identify the sender. Thus, inference on the origin of a model remains impossible. 
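To make the method concrete, here is a small sketch (ours, not the authors' implementation; model handling, FedAvg-style averaging as the aggregation operator, and the example numbers are illustrative) of the coordinator side of Algorithm 1, together with the minimum number of rounds suggested by Lemma 3.

import math
import copy
import torch

def min_rounds(m, k, delta, d):
    # Lemma 3: rounds T needed so every model sees >= k distinct datasets
    # with probability at least 1 - delta (both logarithms are negative).
    return d * math.log(delta) / (math.log((m - 1) / m) * (m - k + 1) * m)

def coordinator_round(models, t, d, b):
    # One coordinator step of Algorithm 1: permute models on daisy-chaining
    # rounds, average and broadcast on aggregation rounds.
    m = len(models)
    if t % d == d - 1:
        perm = torch.randperm(m).tolist()
        redistributed = [None] * m
        for i in range(m):
            redistributed[perm[i]] = models[i]   # model of client i goes to pi(i)
        models = redistributed
    if t % b == b - 1:
        avg = copy.deepcopy(models[0])
        with torch.no_grad():
            for name, p in avg.named_parameters():
                p.copy_(torch.stack(
                    [dict(mdl.named_parameters())[name].detach() for mdl in models]
                ).mean(dim=0))
        models = [copy.deepcopy(avg) for _ in range(m)]
    return models

print(min_rounds(m=25, k=25, delta=0.05, d=1))   # ~2.9, so 3 daisy-chaining rounds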
For a daisy-chaining round the coordinator sends the public key of the receiving client to the sending client, the sending client checks the validity of the key and sends an encrypted model to the coordinator which forwards it to the receiving client. Since only the receiving client can decrypt the model, the communication is secure. In standard federated learning, a malicious client cannot infer upon the data of other clients from model updates, since it only receives the average model. In federated daisy-chaining, it receives the model from a random, unknown client in each daisy-chaining round. Now, the malicious client can infer upon the membership of a particular data point in the local dataset of the client the model originated from, i.e., a membership inference attack (Shokri et al., 2017). Similarly, the malicious client can infer upon the presence of data points with certain attributes in the dataset (Ateniese et al., 2015). The malicious client, however, does not know the client the model was trained on, i.e., it does not know the origin of the dataset. Using a random scheduling of daisy-chaining and averaging rounds at the coordinator, the malicious client cannot even distinguish between a model from another client or the average of all models. Nonetheless, daisy-chaining opens up new potential attack vectors (e.g., by clustering received models to potentially determine their origins). These potential attack vectors can be tackled by adding noise to model parameters as discussed above, since “[d]ifferentially private models are, by construction, secure against membership inference attacks” (Shokri et al., 2017). To investigate the impact of this privacy technique on FEDDC, we apply it in practice: We train a small ResNet on 250 clients using FEDDC with d = 2 and b = 10. Details on the experimental setup can be found in Supp. ??,??. Differential privacy is achieved by clipping local model updates and adding Gaussian noise as proposed by Geyer et al. (2017). The results shown in Figure 2 indicate that the standard trade-off between model quality and privacy holds for FEDDC as well. Moreover, for mild privacy settings the model quality does not decrease. That is, FEDDC is able to robustly predict even under differential privacy. 7 EMPIRICAL EVALUATION We evaluate FEDDC against the state-of-the-art in federated learning on synthetic and real world data. In particular, we compare to standard Federated averaging (FedAvg) (McMahan et al., 2017), FedAvg with equal communication as FEDDC, FedProx (Li et al., 2020a), and simple daisy-chaining without aggregation. As real world applications we consider the image classification problem CIFAR10 (Krizhevsky, 2009), publicly available MRI scans for brain tumors2, and chest X-rays for 2https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection pneumonia (e.g., from COVID-19)3. For reproducibility, we provide details on architectures, and experimental setup in Supp. ??,??. The implementation of the experiments is publicly available at https://anonymous.4open.science/r/FedDC-1BC9. 7.1 SYNTHETIC DATA We first investigate the potential of FEDDC on a synthetic binary classification dataset generated by the sklearn (Pedregosa et al., 2011) make_classification function with 100 features. On this dataset, we train a simple MLP with 3 hidden layers on m = 50 clients with n = 10 samples per client. We compare FEDDC with d = 1 and b = 200 to FedAvg with b = 200. 
The results presented in Figure 3 show that FEDDC achieves an optimal test performance of 0.89 (centralized training on all data achieves a test accuracy of 0.88), substantially outperforming FedAvg. The results indicate that the main reason is overfitting of local clients, since for FedAvg train accuracy reaches 1.0 quickly after each averaging step. In the following, we investigate how these promising results translate to real-world datasets. 7.2 CIFAR10 To compare FEDDC with the state of the art on real world data, we first consider the CIFAR10 image benchmark. To find a suitable aggregation period b for FEDDC and FedAvg, we first run a search grid across periods for 250 clients with small versions of ResNet (details in Supp. ??). We report the results in Figure 4 and set the period for FEDDC to 10, and consider federated averaging with periods of both 1 and 10. For our next experiment, we equip 150 clients each with a ResNet18. To simulate our setting that each client has a small amount of samples, each one of them only receives 64 samples. Note that the combined amount of examples is only one fifth of the original training data, hence we cannot expect the typical performance on this dataset. As NNs are non-convex, Radon points are no longer suitable as aggre- gation method, we instead resort to averaging. Results are reported in Table 1. We observe that FEDDC achieves substantially higher accuracy of more than 6 percentage points over federated averaging with the same amount of communication. Looking closer, we see that FedAvg drastically overfits, achieving training accuracies of 0.97, a similar trends as reported in Figure 3 for synthetic data. We further see that daisy-chaining alone, besides its privacy issues, performs worse than FEDDC. Similarly, FedProx run with b = 10 and µ = 0.1 only achieves an accuracy of 0.545. 7.3 MEDICAL IMAGE DATA We conduct experiments on real medical image data, which are naturally of small sample size and represent actual health related machine learning tasks. Here, we observe similar trends as for CIFAR10. For the brain MRI scans, we simulate 25 clients equipped with simple CNNs (see App. ??) and 8 samples each. The results for brain tumor prediction based on these scans are reported in 3https://www.kaggle.com/praveengovi/coronahack-chest-xraydataset Table 1. Again, FEDDC performs best, beating both FedAvg and FedProx on this challenging tasks. For pneumonia, we simulate 150 clients training ResNet18 (see again App. ??) with 8 samples per client. The results in Table 1 not only show that FEDDC again outperforms all baselines, but also highlight that FEDDC enables us to train a ResNet18 to high accuracy with as little as 8 samples per client. 8 DISCUSSION Empirical evaluation shows that FEDDC drastically improves upon state-of-the-art methods for federated learning for settings with only small amounts of available data. This confirms the theoretical potential, given by the , δ-guarantees, of improving model quality, which is unique among federated learning methods. Using the iterated Radon point as aggregation method, and given as few as 2 samples per client, FEDDC matches the test accuracy of a model trained on the whole SUSY dataset, outperforming standard federated learning by over 12% points of accuracy. This result shows that unlike federated learning, FEDDC does not heavily overfit and is able to learn a generalized model, and is consistent with a synthetic prediction task using multi-layer perceptrons. 
To study FEDDC in the context of real data, we consider both the standard image benchmark CIFAR10 and two challenging image classification tasks from the health domain where only little data is available. On each of these tasks, FEDDC consistently outperforms state-of-the-art federated learning methods. As before, we observe overfitting of the standard federated learning methods. To rule out any effects due to increased communication, we also consider FedAvg with the same amount of communication as our method; however, FedAvg shows no improvement. Through FEDDC, we present an effective solution to the problem of federated learning on small datasets. We further show that our method is able to robustly predict even under the effect of differential privacy, and suggest effective measures based on encryption as mitigations against attacks on communication or malicious coordinators.
9 CONCLUSION
We considered the problem of learning high-quality models in settings where data is inherently distributed across sites, data cannot be shared between sites, and each site only has very little data available. We propose an elegant, surprisingly simple approach that effectively solves this problem by combining model aggregation from federated learning with the concept of passing individual models around while still maintaining privacy. We showed that this approach theoretically improves models in terms of (ε, δ)-guarantees, which state-of-the-art federated averaging cannot provide. In extensive empirical evaluations, including challenging image classification tasks from the health domain, we further show that for settings with limited data available per site, our method improves upon existing work by a wide margin. It thus paves the way for learning high-quality models from small datasets. Although the amount of communication is not a critical issue for the settings in which we intend FEDDC to be used, improving its communication efficiency makes for engaging future work and would also enable settings with limited bandwidth, e.g., model training on mobile devices. From a practical as well as a security and privacy perspective, it would also be interesting to study how to formulate FEDDC in a decentralized setting, where no coordinator is available.
1. What is the main contribution of the paper in federated learning?
2. What are the strengths and weaknesses of the proposed approach in terms of privacy risk and accuracy improvement?
3. How does the reviewer assess the paper's analysis of privacy implications, particularly in comparison with existing works?
4. What are some minor to serious issues with the way the paper is currently written?
5. How does the reviewer suggest improving the paper, specifically regarding the assumption and the analysis of privacy loss?
Summary Of The Paper
This paper considers the setting of federated learning where each client holds a small subset Dk of the entire dataset D. The goal is to learn a hypothesis with small (generalization) error. The challenge is that each dataset might be too small to guarantee convergence and/or generalization. The "obvious" solution to this is to have multiple clients share data. But this is undesirable because of privacy considerations. Instead this paper proposes having clients swap "models" periodically. The paper argues that from the point of view of the model, it sees a greater diversity of client data, and thus ought to be able to train with better generalization. The paper reports that this improves over existing state-of-the-art work in federated learning in terms of accuracy. Of course this incurs increased privacy risk. The paper proposes to tackle this through differential privacy. The results reported here in this regard are experimental. The permutation is superficially similar to the recently proposed shuffle model, but no connection is drawn in the paper.
Review
Disclaimer: I am not an expert on federated learning, hence I am not too knowledgeable about what has been proposed in the literature. The idea of increasing the sample diversity by swapping models is simple and natural. The paper deserves credit for proposing this (if indeed this has not been considered before). On the other hand, the key downside to their approach is the privacy loss. This is what makes the federated model more challenging. In my view, this paper does not adequately analyze the privacy implications. A fair comparison to existing work in my view would look something like this: (1) fix a privacy budget; (2) prove a formal upper bound on the privacy loss incurred in the proposed model; (3) repeat this analysis for state-of-the-art federated learning, and compare the resulting accuracies. In this work, step 2 is not addressed beyond experiments and a quotation from Shokri et al. (2017). In addition, there seem to be multiple issues with the way the paper is currently written, ranging from minor to fairly serious. In its current form, I cannot recommend the paper be published. Assumption 1 feels a little ungainly, and circular. It feels like two things are being thrown together: one is a sample complexity bound for H, which is standard. The second is an assumption about the learning algorithm itself (which is what we would like to prove bounds about). Now some assumption on the algorithm is indeed called for; for instance, if the aggregation function simply outputs the constant 0 hypothesis, then the algorithm cannot work. But to sweep this under an assumption that the algorithm works properly given enough samples is somewhat tautological; it is definitely confusing. Perhaps the authors want to state an assumption on the aggregation function instead, which is indeed necessary and would not be circular. Equation (2) quotes a result from the literature, giving an upper bound on the error incurred by "Radon point aggregation". It is conceivable that this can be improved, unless someone proves a lower bound showing that this is the best achievable rate. Perhaps the authors have such a bound in the supplementary material (which I am unable to see)? If not, Corollary 2 is not implied by Equation (2). Even if such a lower bound is not known, it is fair to say that this paper can lower the error on each individual model to be aggregated, so that Equation (2) is now more useful.
Lemma 3 is a simple corollary of the so-called coupon collector process. For fair comparison, the improved generalization needs to be balanced by a more formal analysis of the privacy loss. (Perhaps ideas from the shuffle model might be useful here.)
1. How does the proposed approach improve model accuracy in federated learning settings?
2. What are the assumptions made about data distribution in the paper, and how might non-iid data affect the method's generalization?
3. Are there any differences in communication complexity between the two types of aggregations used in the proposed method? If so, how do they impact the overall performance?
4. Can the authors provide further explanation or references regarding the statement in the introduction about achieving good global models without disclosing local data?
5. How does the daisy-chaining technique utilized in the proposed method enhance the performance of federated learning algorithms, and what additional insights can be provided regarding Proposition 4?
Summary Of The Paper
This paper considers a federated learning setting in which the sample size at each client is so inadequate that the local objectives greatly differ from the global one. This paper proposes a novel approach that intertwines model aggregations with permutations of local models. By doing so, local models are exposed to several clients' data, which ultimately improves the model accuracy.
Review
It is mentioned in the introduction that "While it can achieve good global models without disclosing any of the local data, it does require sufficient data to be available at each site in order for the locally trained models to achieve a minimum quality". I wonder if the authors can elaborate more on this by providing relevant references and a more detailed technical discussion. A major concern in this work is the assumption of iid data. We know that in federated learning applications, the data is highly heterogeneous. I wonder how well the proposed method would generalize to the more realistic non-iid setting. There is a subtle difference between the communication complexity of the two types of aggregation in the proposed method. When the coordinator aggregates the models and broadcasts the average (during an aggregation period), there exists such a broadcasting opportunity, since all the clients receive the same model. However, during the daisy-chaining period, different models are pushed down to the clients, accounting for more communication complexity. Therefore, the characterization of communication complexity in Section 4, O(t_max/d + t_max/b), needs refinement. It is not clear, in theory, how the performance of a federated learning algorithm is improved when using the daisy-chaining technique. Proposition 4 is stated quite vaguely. Can the authors elaborate more on this?
Title Picking Daisies in Private: Federated Learning from Small Datasets Abstract Federated learning allows multiple parties to collaboratively train a joint model without sharing local data. This enables applications of machine learning in settings of inherently distributed, undisclosable data such as in the medical domain. In practice, joint training is usually achieved by aggregating local models, for which local training objectives have to be in expectation similar to the joint (global) objective. Often, however, local datasets are so small that local objectives differ greatly from the global objective, resulting in federated learning to fail. We propose a novel approach that intertwines model aggregations with permutations of local models. The permutations expose each local model to a daisy chain of local datasets resulting in more efficient training in data-sparse domains. This enables training on extremely small local datasets, such as patient data across hospitals, while retaining the training efficiency and privacy benefits of federated learning. 1 INTRODUCTION How can we learn high quality models when data is inherently distributed into small parts that cannot be shared or pooled, as we for example often encounter in the medical domain (Rieke et al., 2020)? Federated learning solves many but not all of these problems. While it can achieve good global models without disclosing any of the local data, it does require sufficient data to be available at each site in order for the locally trained models to achieve a minimum quality. In many relevant applications, this is not the case: in healthcare settings we often have as little as a few dozens of samples (Granlund et al., 2020; Su et al., 2021; Painter et al., 2020), but also domains where DL is generally regarded as highly successful, such as natural language processing and object detection often suffer from a lack of data (Liu et al., 2020; Kang et al., 2019). In this paper, we present an elegant idea in which models are moved around iteratively and passed from client to client, thus forming a daisy-chain that the model traverses. This daisy-chaining allows us to learn from such small, distributed datasets simply by consecutively training the model with the data availalbe at each site. We should not do this naively, however, since it would not only lead to overfitting – a common problem in federated learning which can cause learning to diverge (Haddadpour and Mahdavi, 2019) – but also violate privacy, since a client can infer from a model upon the data of the client it received it from (Shokri et al., 2017). To alleviate these issues, we propose an approach to combine daisy-chaining of local datasets with aggregation of models orchestrated by a coordinator, which we term federated daisy-chaining (FEDDC). In a nutshell, in a daisy-chain round, local models are send to a coordinator and randomly redistributed to clients, without aggregation. Thereby, each individual model follows its own random daisy-chain of clients. In an aggregation round, models are aggregated and redistributed, as in standard federated learning. Our approach maintains privacy of local datasets, while it provably guarantees improvement of model quality of convex models with a suitable aggregation method which standard federated learning cannot. For non-convex models such as convolutional neural networks, it improves the performance upon the state-of-the-art on standard benchmark and medical datasets. 
Formally, we show that FEDDC allows convergences on datasets so small that standard federated learning diverges by analyzing aggregation via the Radon point from a PAC-learning perspective. We substantiate this theoretical analysis by showing that FEDDC in practice matches the accuracy of a model trained on the full data of the SUSY binary classification dataset, beating standard federated learning by a wide margin. In fact, FEDDC allows us to achieve optimal model quality with only 2 samples per client. In an extensive empirical evaluation, we then show that FEDDC outperforms vanilla federated learning (McMahan et al., 2017), naive daisy-chaining, and FedProx (Li et al., 2020a) on the benchmark dataset CIFAR10 (Krizhevsky, 2009), and more importantly on two realworld medical datasets. In summary, our contributions are as follows. • FEDDC, an elegant novel approach to federated learning from small datasets via a combination of daisy-chaining and aggregation, • a theoretical guarantee that FEDDC improves models in terms of , δ-guarantees, which standard federated averaging can not, • a thorough discussion of the privacy aspects and mitigations suitable for FEDDC, including an empirical evaluation of differentially private FEDDC, and • an extensive set of experiments showing that FEDDC substantially improves model quality for small datasets, being able to train ResNet18 on a pneumonia dataset on as little as 8 samples per client. 2 RELATED WORK Learning from small datasets is a well studied problem in machine learning. In the literature, we find among others general solutions, such as using simpler models, and transfer learning (Torrey and Shavlik, 2010), to more specialized ones, such as data augmentation (Ibrahim et al., 2021) and fewshot learning (Vinyals et al., 2016; Prabhu et al., 2019). In our scenario, however, data is abundant, but the problem is that the local datasets at each site are small and cannot be pooled. Federated learning and its variants have been shown to learn from incomplete local data sources, e.g., non-iid label distributions (Li et al., 2020a; Wang et al., 2019) and differing feature distributions (Li et al., 2020b; Reisizadeh et al., 2020a), but were proven to fail in case of large gradient diversity (Haddadpour and Mahdavi, 2019) and too dissimilar label distribution (Marfoq et al., 2021). For very small datasets, local empirical distributions may vary greatly from the global data distribution—while the difference of empirical to true distribution decreases exponentially with the sample size (e.g., according to the Dvoretzky–Kiefer–Wolfowitz inequality), for small sample sizes the difference can be substantial, in particular if the data distribution differs from a Normal distribution (Kwak and Kim, 2017). FedProx (Li et al., 2020a) is a variant of federated learning that is particularly suitable for tackling non-iid data distributions. It increases training stability by adding a momentum-like proximal term to the objective functions. This increase in stability, however, comes at the cost of not being privacypreserving anymore (Rahman et al., 2021). We compare FEDDC to FedProx in Section 7. We can reduce sample complexity by training networks only partially, e.g., by collaboratively training only a shared part of the model. This approach allows training client-specific models in the medical domain (Yang et al., 2021), but by design cannot train a global model. 
Kiss and Horvath (2021) propose a decentralized and communication-efficient variant of federated learning that migrates models over a decentralized network and stores incoming models locally at each client until sufficiently many models are collected on each client for an averaging step, similar to Gossip federated learing (Jelasity et al., 2005). The variant without averaging is similar to simple daisy-chaining which we compare to in Section 7. FEDDC is compatible with any aggregation operator, including the Radon point (Kamp et al., 2017) and the geometric median (Pillutla et al., 2019). It can also be straightforwardly combined with approaches to improve communication-efficiency, such as dynamic averaging (Kamp et al., 2018), and model quantization (Reisizadeh et al., 2020b). 3 PRELIMINARIES We assume iterative learning algorithms (cf. Chp. 2.1.4 Kamp, 2019) A : X × Y × H → H that update a model h ∈ H using a dataset D ⊂ X × Y from an input space X and output space Y , i.e., ht+1 = A(D,ht). Given a set of m ∈ N clients with local datasets D1, . . . , Dm ⊂ X × Y drawn iid from a data distribution D and a loss function ` : Y × Y → R, the goal is to find a single model h∗ ∈ H that minimizes the risk ε(h) = E(x,y)∼D [ `(h(x), y) ] . (1) In centralized learning, the datasets are pooled as D = ⋃ i∈[m]D i and A is applied to D until convergence. Note that applying A on D can be the application to any random subset, e.g., as in mini-batch training, and convergence is measured in terms of low training loss, small gradient, or small deviation from previous iterate. In standard federated learning (McMahan et al., 2017), A is applied in parallel for b ∈ N rounds on each client locally to produce local models h1, . . . , hm. These models are then centralized and aggregated using an aggregation operator agg : Hm → H, i.e., h = agg(h1, . . . , hm). The aggregated model h is then redistributed to local clients which perform another b rounds of training using h as a starting point. This is iterated until convergence of h. In the following section, we describe FEDDC. 4 METHOD We propose federated daisy chaining as an extension to federated learning and hence assume a setup where we have m clients and one designated coordinator node.1 We provide pseudocode of our approach as Algorithm 1. The client Each client trains its local model in each round on local data (line 4), and sends its model to the coordinator every b rounds for aggregation, where b is the aggregation period, and every d rounds for daisy chaining, where d is the daisy-chaining period (line 6). This re-distribution of models results in each individual model following a daisy-chain of clients, training on each local dataset. Such a daisy-chain is interrupted by each aggregation round. The coordinator Upon receiving models (line 10), in a daisy-chaining round (line 11) the coordinator draws a random permutation π of clients (line 12) and re-distributes the model of client i to client π(i) (line 13), while in an aggregation round (line 15), the coordinator instead aggregates all local models (line 16) and re-distributes the aggregate to all clients (line 17). Communication complexity Communication between clients and coordinator happens in O( tmaxd + tmax b ) rounds, where tmax is the overall number of rounds. Although inherently higher than in plain federated learning, the overall amount of communication in daisy chained federated learning is still low. 
In particular, in each communication round, each client sends and receives only a single model from the coordinator. The amount of communication per communication round is thus linear in the number of clients and model size, similar to federated averaging. In the following section we show that the additional daisy-chaining rounds ensure convergence for small datasets in terms of PAC-like , δ-guarantees. 5 THEORY Next, we theoretically analyze the key properties of FEDDC in terms of PAC-like ( , δ)-guarantees. For that, we make the following assumption on the learning algorithm A. Assumption 1 (( , δ)-guarantees). The learning algorithmA applied on all datasets drawn iid from D of size n ≥ n0 ∈ N produces a model h ∈ H such that with probability δ ∈ (0, 1] it holds for > 0 that P (ε(h) > ) < δ . The sample size n0 is a monotone function in δ and , i.e., for fixed n0 is monotonically increasing with δ and for fixed δ it is monotonically decreasing with (note that typically n0 is a polynomial in −1 and log(δ−1)). 1This star-topology can be extended to hierarchical networks in a straight-forward manner. Federated learning can also be performed in a decentralized network via gossip algorithms (Jelasity et al., 2005) Algorithm 1 Federated Daisy-Chaining FEDDC Require: daisy-chaining period d, aggregation period b, learning algorithmA, aggregation operator agg, m clients with local datasets D1, . . . , Dm 1: initialize local models h10, . . . , h m 0 2: at local client i in round t 3: draw random set of samples S from local dataset Di 4: hit ←A(S, hit−1) 5: if t% d = d− 1 or t % b = b− 1 then 6: send hit to coordinator 7: end if 8: 9: at coordinator in round t 10: receive models h1t , . . . , h m t 11: if t% d = d− 1 then 12: draw permutation π of [1,m] at random 13: for all i ∈ [m] send model hit to client π(i) 14: end if 15: if t% b = b− 1 then 16: ht ← agg(h1t , . . . , hmt ) 17: send ht to all clients 18: end if 19: Here ε(h) is the risk defined in Equation 1. We will show that aggregation for small local datasets can diverge and that daisy-chaining can prevent this. For this, we analyze the development of ( , δ)guarantees on model quality when aggregating local models with and without daisy-chaining. It is an open question how such an ( , δ)-guarantee develops when averaging local models. Existing work analyzes convergence (Haddadpour and Mahdavi, 2019; Kamp et al., 2018) or regret (Kamp et al., 2014) and thus gives no generalization bound. Recent work on generalization bounds for federated averaging via the NTK-framework (Huang et al., 2021) is promising, but not directly compatible with daisy-chaining: the analysis of Huang et al. (2021) requires local datasets to be disjoint which would be violated by a daisy-chaining round. Using the Radon point (Radon, 1921) as aggregation operator, however, does permit analyzing the development of ( , δ)-guarantees. In particular, it was shown that for fixed the probability of bad models is reduced doubly exponentially (Kamp et al., 2017) when we aggregate models using the (iterated) Radon point (Clarkson et al., 1996). Here, a Radon point of a set of points S from a space X is—similar to the geometric median—a point in the convex hull of S with a high centrality (more precisely, a Tukey depth (Tukey, 1975; Gilad-Bachrach et al., 2004) of at least 2). For a Radon point to exist, the size of S has to be sufficiently large; the minimum size of S ⊂ X is denoted the Radon number of the space X and for X ⊆ Rd the radon number is d+ 2. 
Let r ∈ N be the Radon number of H, A be a learning algorithm as in assumption 1, and ε be convex. Assume m ≥ rh many clients with h ∈ N. For > 0, δ ∈ (0, 1] assume local datasets D1, . . . , Dm of size larger than n0( , δ) drawn iid from D, and h1, . . . , hm be local models trained on them usingA. Let rh be the iterated Radon point with h iterations computed on the local models. Then it follows from Theorem 3 in Kamp et al. (2017) that for all i ∈ [m] it holds that P (ε(rh) > ) ≤ (r P (ε(hi) > ))2 h (2) where the probability is over the random draws of local datasets. This implies that the iterated Radon point only improves over the local models if δ < r−1. Consequently, local models need to achieve a minimum quality for the federated learning system to converge. Corollary 2. Given a model space H with Radon number r ∈ N, convex risk ε, and a learning algorithm A with sample size n( , δ). Given > 0 and any h ∈ N, if local datasets D1, . . . , Dm with m ≥ rh are smaller than n0( , r−1), then federated learning using the Radon point does not improve model quality in terms of ( , δ)-guarantees. In other words, when using aggregation by Radon points alone, an improvement in terms of ( , δ)guarantees is strongly dependent on large enough local datasets. Furthermore, given δ > r−1, the guarantee can become arbitrarily bad by increasing the number of aggregation rounds. Federated Daisy-Chaining as given in Algo. 1 permutes local models at random, which is in theory equivalent to permuting local datasets. This way, the amount of data visible to each model is increased. Since the permutation is drawn at random, the minimum amount of distinct local samples observed by each model can be given with high probability. Lemma 3. Given δ ∈ (0, 1], m ∈ N clients, and k ∈ [m], if Algorithm 1 with daisy chaining period d ∈ N is run for T ∈ N rounds with T ≥ d ln δ ln ( m−1 m ) (m− k + 1)m then each local model has seen at least k distinct datasets with probability 1− δ. Proof. For m clients with m local datasets, the chance of a client i to not see dataset j after τ many permutations is ( m−1 m )τ . The probability that each of the m clients is not seeing m − k + 1 other datasets is hence m−k+1∏ j=1 ( m− 1 m )τ = ( m− 1 m )τ(m−k+1) , and corresponds to the probability of each client seeing less than k distinct other datasets. The probability of all clients seeing at least k distinct datasets is hence at least 1− ( m− 1 m )τ(m−k+1)m ! ≥ 1− δ ⇔ ( m− 1 m )τ(m−k+1)m ! ≤ δ . Taking the logarithm on both sides with base (m− 1)/m < 1 yields τ(m− k + 1)m ≥ ln δ ln m−1m . Multiplying with m− k+1 and observing that τ many daisy-chaining rounds with period d require T = τd total rounds yields the result. From Lm. 3 it follows that when we perform daisy-chaining with m clients, and local datasets of size n, for at least d ln δ((ln(m− 1)− ln(m))(m− k+1)m)−1 rounds, each local model will with probability at least 1− δ be trained on at least kn samples. Proposition 4. Given a model space H with Radon number r ∈ N, convex risk ε, and a learning algorithm A with sample size n( , δ). Given > 0, δ ∈ (0, r−1) and any h ∈ N, if local datasets D1, . . . , Dm of size n ∈ N with m ≥ rh, then Alg. 1 using the Radon point with b ≥ d ln δ ln ( m−1 m ) ( m− n0( ,δ)n + 1 ) m improves model quality in terms of ( , δ)-guarantees. Proof. 
The number of daisy-chaining rounds before computing a Radon point ensure that with probability 1−δ all local models are trained on at least kn samples with k = n0( , δ)/n, i.e., each model is trained on at least n0( , δ) samples and thus an ( , δ)-guarantee holds for each model. Since δ < r−1, this guarantee is improved as detailed in Eq. (2). To support this theoretical result, we compare FEDDC using the iterated Radon point with standard federated learning on the SUSY binary classification dataset (Baldi et al., 2014), training a linear model on 441 clients with only 2 samples per client. The results in Figure 1 show that after 500 rounds FEDDC reached the test accuracy of a model that has been trained on the centralized dataset (ACC=0.77) beating federated learning by a large margin (ACC=0.65). Before further investigating FEDDC empirically in Section 7, we discuss the privacy-aspects of FEDDC in the following section. 6 PRIVACY A major benefit of federated learning is that data remains undisclosed on the local clients and only model parameters are exchanged. It is, however, possible to infer upon local data given model parameters (Ma et al., 2020). In classical federated learning there are two types of attacks that would allow such inference: (i) an attacker intercepting the communication of a client with the coordinator obtaining model updates to infer upon the clients data, and (ii) a malicious coordinator obtaining models to infer upon the data of each client. A malicious client cannot learn about other clients data, since it only obtains the average of all local models. In federated daisychaining there is a third possible attack: (iii) a malicious client obtaining model updates from another client to infer upon its data. In the following, we discuss potential defenses against these three types of attacks in more detail. Note that we limit the discussion on attacks that aim at inferring upon local data, thus breaching data privacy. For a discussion of attacks that aim to poison the learning process (Bhagoji et al., 2019) or create backdoors (Sun et al., 2019) for adversarial examples, we refer to Lyu et al. (2020). A general and wide-spread approach to tackle all three possible attack types is to add noise to the model parameters before sending. Using appropriate clipping and noise, this guarantees , δdifferential privacy for local data (Wei et al., 2020) at the cost of a slight-to-moderate loss in model quality. Another approach to tackle an attack on communication (i) is to use encrypted communication. One can also protect against a malicious coordinator (ii) by using homomorphic encryption that allows the coordinator to average models without decrypting them (Zhang et al., 2020). This, however, only works for particular aggregation operators and does not allow to perform daisy-chaining. Secure daisy-chaining in the presence of a malicious coordinator (ii) can, however, be performed using asymmetric encryption. Assume each client creates a public-private key pair and shares the public key with the coordinator. To avoid the malicious coordinator to send clients its own public key and act as a man in the middle, public keys have to be announced (e.g., by broadcast). While this allows sending clients to identify the recipient of their model, no receiving client can identify the sender. Thus, inference on the origin of a model remains impossible. 
For a daisy-chaining round the coordinator sends the public key of the receiving client to the sending client, the sending client checks the validity of the key and sends an encrypted model to the coordinator which forwards it to the receiving client. Since only the receiving client can decrypt the model, the communication is secure. In standard federated learning, a malicious client cannot infer upon the data of other clients from model updates, since it only receives the average model. In federated daisy-chaining, it receives the model from a random, unknown client in each daisy-chaining round. Now, the malicious client can infer upon the membership of a particular data point in the local dataset of the client the model originated from, i.e., a membership inference attack (Shokri et al., 2017). Similarly, the malicious client can infer upon the presence of data points with certain attributes in the dataset (Ateniese et al., 2015). The malicious client, however, does not know the client the model was trained on, i.e., it does not know the origin of the dataset. Using a random scheduling of daisy-chaining and averaging rounds at the coordinator, the malicious client cannot even distinguish between a model from another client or the average of all models. Nonetheless, daisy-chaining opens up new potential attack vectors (e.g., by clustering received models to potentially determine their origins). These potential attack vectors can be tackled by adding noise to model parameters as discussed above, since “[d]ifferentially private models are, by construction, secure against membership inference attacks” (Shokri et al., 2017). To investigate the impact of this privacy technique on FEDDC, we apply it in practice: We train a small ResNet on 250 clients using FEDDC with d = 2 and b = 10. Details on the experimental setup can be found in Supp. ??,??. Differential privacy is achieved by clipping local model updates and adding Gaussian noise as proposed by Geyer et al. (2017). The results shown in Figure 2 indicate that the standard trade-off between model quality and privacy holds for FEDDC as well. Moreover, for mild privacy settings the model quality does not decrease. That is, FEDDC is able to robustly predict even under differential privacy. 7 EMPIRICAL EVALUATION We evaluate FEDDC against the state-of-the-art in federated learning on synthetic and real world data. In particular, we compare to standard Federated averaging (FedAvg) (McMahan et al., 2017), FedAvg with equal communication as FEDDC, FedProx (Li et al., 2020a), and simple daisy-chaining without aggregation. As real world applications we consider the image classification problem CIFAR10 (Krizhevsky, 2009), publicly available MRI scans for brain tumors2, and chest X-rays for 2https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection pneumonia (e.g., from COVID-19)3. For reproducibility, we provide details on architectures, and experimental setup in Supp. ??,??. The implementation of the experiments is publicly available at https://anonymous.4open.science/r/FedDC-1BC9. 7.1 SYNTHETIC DATA We first investigate the potential of FEDDC on a synthetic binary classification dataset generated by the sklearn (Pedregosa et al., 2011) make_classification function with 100 features. On this dataset, we train a simple MLP with 3 hidden layers on m = 50 clients with n = 10 samples per client. We compare FEDDC with d = 1 and b = 200 to FedAvg with b = 200. 
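To make the synthetic setup above concrete, a minimal sketch of generating such a federated split with sklearn is given below. Only n_features = 100, m = 50, and n = 10 follow the text; the remaining make_classification arguments, the size of the held-out test set, and the random seed are illustrative assumptions rather than the paper's configuration.

from sklearn.datasets import make_classification

m, n = 50, 10   # clients and samples per client, as described above
n_test = 2000   # assumed size of a held-out global test set

X, y = make_classification(n_samples=m * n + n_test, n_features=100, random_state=0)
X_test, y_test = X[m * n:], y[m * n:]
# split the remaining samples into m local datasets of n samples each
local_data = [(X[i * n:(i + 1) * n], y[i * n:(i + 1) * n]) for i in range(m)]

Each entry of local_data then plays the role of one client's private dataset D_i in the experiments described above.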
The results presented in Figure 3 show that FEDDC achieves an optimal test performance of 0.89 (centralized training on all data achieves a test accuracy of 0.88), substantially outperforming FedAvg. The results indicate that the main reason is overfitting of the local clients, since for FedAvg the train accuracy quickly reaches 1.0 after each averaging step. In the following, we investigate how these promising results translate to real-world datasets.

7.2 CIFAR10

To compare FEDDC with the state of the art on real-world data, we first consider the CIFAR10 image benchmark. To find a suitable aggregation period b for FEDDC and FedAvg, we first run a grid search over periods for 250 clients with small versions of ResNet (details in Supp. ??). We report the results in Figure 4, set the period for FEDDC to 10, and consider federated averaging with periods of both 1 and 10. For our next experiment, we equip 150 clients each with a ResNet18. To simulate our setting in which each client has only a small number of samples, each client receives only 64 samples. Note that the combined number of examples is only one fifth of the original training data; hence we cannot expect the typical performance on this dataset. As NNs are non-convex, Radon points are no longer suitable as an aggregation method; we instead resort to averaging. Results are reported in Table 1. We observe that FEDDC achieves a substantially higher accuracy, more than 6 percentage points over federated averaging with the same amount of communication. Looking closer, we see that FedAvg drastically overfits, achieving training accuracies of 0.97, a trend similar to that reported in Figure 3 for synthetic data. We further see that daisy-chaining alone, besides its privacy issues, performs worse than FEDDC. Similarly, FedProx run with b = 10 and µ = 0.1 only achieves an accuracy of 0.545.

7.3 MEDICAL IMAGE DATA

We conduct experiments on real medical image data, which are naturally of small sample size and represent actual health-related machine learning tasks. Here, we observe similar trends as for CIFAR10. For the brain MRI scans, we simulate 25 clients equipped with simple CNNs (see App. ??) and 8 samples each. The results for brain tumor prediction based on these scans are reported in Table 1. Again, FEDDC performs best, beating both FedAvg and FedProx on this challenging task. For pneumonia, we simulate 150 clients training ResNet18 (see again App. ??) with 8 samples per client. The results in Table 1 not only show that FEDDC again outperforms all baselines, but also highlight that FEDDC enables us to train a ResNet18 to high accuracy with as little as 8 samples per client.

3 https://www.kaggle.com/praveengovi/coronahack-chest-xraydataset

8 DISCUSSION

Empirical evaluation shows that FEDDC drastically improves upon state-of-the-art methods for federated learning in settings with only small amounts of available data. This confirms the theoretical potential, given by the (ϵ, δ)-guarantees, of improving model quality, which is unique among federated learning methods. Using the iterated Radon point as aggregation method, and given as few as 2 samples per client, FEDDC matches the test accuracy of a model trained on the whole SUSY dataset, outperforming standard federated learning by over 12 percentage points of accuracy. This result shows that, unlike federated learning, FEDDC does not heavily overfit and is able to learn a generalized model, and it is consistent with a synthetic prediction task using multi-layer perceptrons.
To study FEDDC in the context of real data, we consider both the standard image benchmark CIFAR10 and two challenging image classification tasks from the health domain where only little data is available. On each of these tasks, FEDDC consistently outperforms state-of-the-art federated learning methods. As before, we observe overfitting of the standard federated learning methods. To rule out any effects due to increased communication, we also considered FedAvg with the same amount of communication as our method; however, FedAvg shows no improvement. Through FEDDC, we present an effective solution to the problem of federated learning on small datasets. We further show that our method is able to robustly predict even under the effect of differential privacy, and suggest effective measures based on encryption as mitigations against attacks on communication or malicious coordinators.

9 CONCLUSION

We considered the problem of learning high-quality models in settings where data is inherently distributed across sites, data cannot be shared between sites, and each site only has very little data available. We propose an elegant, surprisingly simple approach that effectively solves this problem by combining the idea of model aggregation from federated learning with the concept of passing individual models around while still maintaining privacy. We showed that this approach theoretically improves models in terms of (ϵ, δ)-guarantees, which state-of-the-art federated averaging cannot provide. In extensive empirical evaluations, including challenging image classification tasks from the health domain, we further show that for settings with limited data available per site, our method improves upon existing work by a wide margin. It thus paves the way for learning high-quality models from small datasets. Although the amount of communication is not a critical issue in the settings in which we intend FEDDC to be used, improving its communication efficiency makes for engaging future work, as it would also enable settings with limited bandwidth, e.g., model training on mobile devices. Both from a practical as well as from a security and privacy perspective, it would also be interesting to study how to formulate FEDDC in a decentralized setting where no coordinator is available.
1. What is the focus and contribution of the paper on federated learning?
2. What are the strengths of the proposed approach, particularly in terms of its ability to address the problem of small data?
3. What are the weaknesses of the paper, especially regarding its privacy guarantees and communication costs?
4. Do you have any concerns or questions about the applicability of the approach in non-iid settings?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

The authors propose a mechanism to address the problem of small data in federated learning. Rather than directly uploading models to the server for aggregation only, a coordinator re-distributes each local model to other clients. They also discuss the conventional privacy guarantee of their approach. Finally, they provide experimental results on multiple datasets. The main contribution is to discuss how to solve the small-data research question in FL. This question itself is practical and interesting. Also, I would say the analysis of the privacy guarantee is convincing.

Review

Strengths: Raising this real-world question is a shining point of this paper. To the best of my knowledge, there is limited work discussing this research question in this field. They cover the key points in this paper, including a privacy discussion, a communication cost discussion, and a comparison with selected baselines. They cite appropriate related works and the presentation of their work is easy to follow.

Weaknesses: As each local model will be re-distributed to other clients in this mechanism, even though they discuss the privacy guarantee, it raises security concerns about model poisoning and attack propagation in the network. With the current random distribution mechanism, although the authors substitute sharing models for sharing data, the communication cost increases considerably. The authors did propose the sparse matrix, but considering the whole iterative process in FL, the communication cost has not been discussed fully. The non-iid setting is another point that most research papers would discuss in their work; I am wondering how this approach would perform in this setting. Also, FedAvg and FedProx are both good baselines, but I am a little bit curious about the comparison with other state-of-the-art baselines.
ICLR
Title Picking Daisies in Private: Federated Learning from Small Datasets Abstract Federated learning allows multiple parties to collaboratively train a joint model without sharing local data. This enables applications of machine learning in settings of inherently distributed, undisclosable data such as in the medical domain. In practice, joint training is usually achieved by aggregating local models, for which local training objectives have to be in expectation similar to the joint (global) objective. Often, however, local datasets are so small that local objectives differ greatly from the global objective, resulting in federated learning to fail. We propose a novel approach that intertwines model aggregations with permutations of local models. The permutations expose each local model to a daisy chain of local datasets resulting in more efficient training in data-sparse domains. This enables training on extremely small local datasets, such as patient data across hospitals, while retaining the training efficiency and privacy benefits of federated learning. 1 INTRODUCTION How can we learn high quality models when data is inherently distributed into small parts that cannot be shared or pooled, as we for example often encounter in the medical domain (Rieke et al., 2020)? Federated learning solves many but not all of these problems. While it can achieve good global models without disclosing any of the local data, it does require sufficient data to be available at each site in order for the locally trained models to achieve a minimum quality. In many relevant applications, this is not the case: in healthcare settings we often have as little as a few dozens of samples (Granlund et al., 2020; Su et al., 2021; Painter et al., 2020), but also domains where DL is generally regarded as highly successful, such as natural language processing and object detection often suffer from a lack of data (Liu et al., 2020; Kang et al., 2019). In this paper, we present an elegant idea in which models are moved around iteratively and passed from client to client, thus forming a daisy-chain that the model traverses. This daisy-chaining allows us to learn from such small, distributed datasets simply by consecutively training the model with the data availalbe at each site. We should not do this naively, however, since it would not only lead to overfitting – a common problem in federated learning which can cause learning to diverge (Haddadpour and Mahdavi, 2019) – but also violate privacy, since a client can infer from a model upon the data of the client it received it from (Shokri et al., 2017). To alleviate these issues, we propose an approach to combine daisy-chaining of local datasets with aggregation of models orchestrated by a coordinator, which we term federated daisy-chaining (FEDDC). In a nutshell, in a daisy-chain round, local models are send to a coordinator and randomly redistributed to clients, without aggregation. Thereby, each individual model follows its own random daisy-chain of clients. In an aggregation round, models are aggregated and redistributed, as in standard federated learning. Our approach maintains privacy of local datasets, while it provably guarantees improvement of model quality of convex models with a suitable aggregation method which standard federated learning cannot. For non-convex models such as convolutional neural networks, it improves the performance upon the state-of-the-art on standard benchmark and medical datasets. 
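A minimal sketch of the two kinds of coordinator rounds described above is given below, assuming each model is represented as a list of numpy parameter arrays; this is an illustration of the idea, not the reference implementation.

import numpy as np

def daisy_chain_round(models, rng):
    # draw a random permutation: client i receives the model of client perm[i],
    # so each model follows its own random chain of clients over time
    perm = rng.permutation(len(models))
    return [models[j] for j in perm]

def aggregation_round(models):
    # average all local models coordinate-wise and send the aggregate to every client,
    # as in standard federated learning
    avg = [np.mean(np.stack(params, axis=0), axis=0) for params in zip(*models)]
    return [avg for _ in models]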
Formally, we show that FEDDC allows convergences on datasets so small that standard federated learning diverges by analyzing aggregation via the Radon point from a PAC-learning perspective. We substantiate this theoretical analysis by showing that FEDDC in practice matches the accuracy of a model trained on the full data of the SUSY binary classification dataset, beating standard federated learning by a wide margin. In fact, FEDDC allows us to achieve optimal model quality with only 2 samples per client. In an extensive empirical evaluation, we then show that FEDDC outperforms vanilla federated learning (McMahan et al., 2017), naive daisy-chaining, and FedProx (Li et al., 2020a) on the benchmark dataset CIFAR10 (Krizhevsky, 2009), and more importantly on two realworld medical datasets. In summary, our contributions are as follows. • FEDDC, an elegant novel approach to federated learning from small datasets via a combination of daisy-chaining and aggregation, • a theoretical guarantee that FEDDC improves models in terms of , δ-guarantees, which standard federated averaging can not, • a thorough discussion of the privacy aspects and mitigations suitable for FEDDC, including an empirical evaluation of differentially private FEDDC, and • an extensive set of experiments showing that FEDDC substantially improves model quality for small datasets, being able to train ResNet18 on a pneumonia dataset on as little as 8 samples per client. 2 RELATED WORK Learning from small datasets is a well studied problem in machine learning. In the literature, we find among others general solutions, such as using simpler models, and transfer learning (Torrey and Shavlik, 2010), to more specialized ones, such as data augmentation (Ibrahim et al., 2021) and fewshot learning (Vinyals et al., 2016; Prabhu et al., 2019). In our scenario, however, data is abundant, but the problem is that the local datasets at each site are small and cannot be pooled. Federated learning and its variants have been shown to learn from incomplete local data sources, e.g., non-iid label distributions (Li et al., 2020a; Wang et al., 2019) and differing feature distributions (Li et al., 2020b; Reisizadeh et al., 2020a), but were proven to fail in case of large gradient diversity (Haddadpour and Mahdavi, 2019) and too dissimilar label distribution (Marfoq et al., 2021). For very small datasets, local empirical distributions may vary greatly from the global data distribution—while the difference of empirical to true distribution decreases exponentially with the sample size (e.g., according to the Dvoretzky–Kiefer–Wolfowitz inequality), for small sample sizes the difference can be substantial, in particular if the data distribution differs from a Normal distribution (Kwak and Kim, 2017). FedProx (Li et al., 2020a) is a variant of federated learning that is particularly suitable for tackling non-iid data distributions. It increases training stability by adding a momentum-like proximal term to the objective functions. This increase in stability, however, comes at the cost of not being privacypreserving anymore (Rahman et al., 2021). We compare FEDDC to FedProx in Section 7. We can reduce sample complexity by training networks only partially, e.g., by collaboratively training only a shared part of the model. This approach allows training client-specific models in the medical domain (Yang et al., 2021), but by design cannot train a global model. 
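As a quantitative illustration of the sample-size argument made in the related-work discussion above: the Dvoretzky–Kiefer–Wolfowitz inequality bounds P(sup_x |F_n(x) − F(x)| > t) ≤ 2 exp(−2 n t^2), so for very small local datasets the bound is vacuous, while it becomes tight once many samples are seen. The sample sizes and tolerance below are chosen only for illustration.

import math

def dkw_bound(n, t):
    # upper bound on the probability that the empirical CDF of n iid samples
    # deviates from the true CDF by more than t anywhere
    return 2.0 * math.exp(-2.0 * n * t ** 2)

print(dkw_bound(8, 0.2))    # ~1.06: vacuous for a local dataset of 8 samples
print(dkw_bound(512, 0.2))  # ~3e-18: negligible once 512 samples are seen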
Kiss and Horvath (2021) propose a decentralized and communication-efficient variant of federated learning that migrates models over a decentralized network and stores incoming models locally at each client until sufficiently many models are collected on each client for an averaging step, similar to Gossip federated learing (Jelasity et al., 2005). The variant without averaging is similar to simple daisy-chaining which we compare to in Section 7. FEDDC is compatible with any aggregation operator, including the Radon point (Kamp et al., 2017) and the geometric median (Pillutla et al., 2019). It can also be straightforwardly combined with approaches to improve communication-efficiency, such as dynamic averaging (Kamp et al., 2018), and model quantization (Reisizadeh et al., 2020b). 3 PRELIMINARIES We assume iterative learning algorithms (cf. Chp. 2.1.4 Kamp, 2019) A : X × Y × H → H that update a model h ∈ H using a dataset D ⊂ X × Y from an input space X and output space Y , i.e., ht+1 = A(D,ht). Given a set of m ∈ N clients with local datasets D1, . . . , Dm ⊂ X × Y drawn iid from a data distribution D and a loss function ` : Y × Y → R, the goal is to find a single model h∗ ∈ H that minimizes the risk ε(h) = E(x,y)∼D [ `(h(x), y) ] . (1) In centralized learning, the datasets are pooled as D = ⋃ i∈[m]D i and A is applied to D until convergence. Note that applying A on D can be the application to any random subset, e.g., as in mini-batch training, and convergence is measured in terms of low training loss, small gradient, or small deviation from previous iterate. In standard federated learning (McMahan et al., 2017), A is applied in parallel for b ∈ N rounds on each client locally to produce local models h1, . . . , hm. These models are then centralized and aggregated using an aggregation operator agg : Hm → H, i.e., h = agg(h1, . . . , hm). The aggregated model h is then redistributed to local clients which perform another b rounds of training using h as a starting point. This is iterated until convergence of h. In the following section, we describe FEDDC. 4 METHOD We propose federated daisy chaining as an extension to federated learning and hence assume a setup where we have m clients and one designated coordinator node.1 We provide pseudocode of our approach as Algorithm 1. The client Each client trains its local model in each round on local data (line 4), and sends its model to the coordinator every b rounds for aggregation, where b is the aggregation period, and every d rounds for daisy chaining, where d is the daisy-chaining period (line 6). This re-distribution of models results in each individual model following a daisy-chain of clients, training on each local dataset. Such a daisy-chain is interrupted by each aggregation round. The coordinator Upon receiving models (line 10), in a daisy-chaining round (line 11) the coordinator draws a random permutation π of clients (line 12) and re-distributes the model of client i to client π(i) (line 13), while in an aggregation round (line 15), the coordinator instead aggregates all local models (line 16) and re-distributes the aggregate to all clients (line 17). Communication complexity Communication between clients and coordinator happens in O( tmaxd + tmax b ) rounds, where tmax is the overall number of rounds. Although inherently higher than in plain federated learning, the overall amount of communication in daisy chained federated learning is still low. 
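The O(t_max/d + t_max/b) communication count above can be made concrete with a small helper that mirrors the triggering condition in Algorithm 1; the parameter values in the usage line are illustrative.

def communication_rounds(t_max, d, b):
    # rounds in which clients exchange models with the coordinator:
    # a round communicates if it is a daisy-chaining round or an aggregation round
    return sum(1 for t in range(t_max) if t % d == d - 1 or t % b == b - 1)

print(communication_rounds(t_max=1000, d=2, b=10))  # 500: every second round communicates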
In particular, in each communication round, each client sends and receives only a single model from the coordinator. The amount of communication per communication round is thus linear in the number of clients and model size, similar to federated averaging. In the following section we show that the additional daisy-chaining rounds ensure convergence for small datasets in terms of PAC-like (ϵ, δ)-guarantees.

5 THEORY

Next, we theoretically analyze the key properties of FEDDC in terms of PAC-like (ϵ, δ)-guarantees. For that, we make the following assumption on the learning algorithm A.

Assumption 1 ((ϵ, δ)-guarantees). The learning algorithm A applied to any dataset drawn iid from D of size n ≥ n_0 ∈ N produces a model h ∈ H such that for ϵ > 0 and δ ∈ (0, 1] it holds that P(ε(h) > ϵ) < δ. The sample size n_0 is a monotone function in δ and ϵ, i.e., for fixed ϵ, n_0 is monotonically increasing with δ, and for fixed δ it is monotonically decreasing with ϵ (note that typically n_0 is a polynomial in ϵ^(−1) and log(δ^(−1))).

1 This star topology can be extended to hierarchical networks in a straightforward manner. Federated learning can also be performed in a decentralized network via gossip algorithms (Jelasity et al., 2005).

Algorithm 1 Federated Daisy-Chaining (FEDDC)
Require: daisy-chaining period d, aggregation period b, learning algorithm A, aggregation operator agg, m clients with local datasets D_1, . . . , D_m
1: initialize local models h_0^1, . . . , h_0^m
2: at local client i in round t:
3:   draw random set of samples S from local dataset D_i
4:   h_t^i ← A(S, h_{t−1}^i)
5:   if t % d = d − 1 or t % b = b − 1 then
6:     send h_t^i to coordinator
7:   end if
8:
9: at coordinator in round t:
10:   receive models h_t^1, . . . , h_t^m
11:   if t % d = d − 1 then
12:     draw permutation π of [1, m] at random
13:     for all i ∈ [m] send model h_t^i to client π(i)
14:   end if
15:   if t % b = b − 1 then
16:     h_t ← agg(h_t^1, . . . , h_t^m)
17:     send h_t to all clients
18:   end if

Here ε(h) is the risk defined in Equation 1. We will show that aggregation for small local datasets can diverge and that daisy-chaining can prevent this. For this, we analyze the development of (ϵ, δ)-guarantees on model quality when aggregating local models with and without daisy-chaining. It is an open question how such an (ϵ, δ)-guarantee develops when averaging local models. Existing work analyzes convergence (Haddadpour and Mahdavi, 2019; Kamp et al., 2018) or regret (Kamp et al., 2014) and thus gives no generalization bound. Recent work on generalization bounds for federated averaging via the NTK framework (Huang et al., 2021) is promising, but not directly compatible with daisy-chaining: the analysis of Huang et al. (2021) requires local datasets to be disjoint, which would be violated by a daisy-chaining round. Using the Radon point (Radon, 1921) as aggregation operator, however, does permit analyzing the development of (ϵ, δ)-guarantees. In particular, it was shown that for fixed ϵ the probability of bad models is reduced doubly exponentially (Kamp et al., 2017) when we aggregate models using the (iterated) Radon point (Clarkson et al., 1996). Here, a Radon point of a set of points S from a space X is, similar to the geometric median, a point in the convex hull of S with a high centrality (more precisely, a Tukey depth (Tukey, 1975; Gilad-Bachrach et al., 2004) of at least 2). For a Radon point to exist, the size of S has to be sufficiently large; the minimum size of S ⊂ X is denoted the Radon number of the space X, and for X ⊆ R^d the Radon number is d + 2.
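Because the Radon point may be less familiar than averaging, the sketch below computes a Radon point of r = d + 2 points in R^d via a Radon partition and iterates the construction h times. It follows the classical construction (Radon, 1921) under the assumption that the models are given as flattened parameter vectors; it is not the authors' implementation.

import numpy as np

def radon_point(points):
    # points: array of shape (d + 2, d); find a nonzero a with
    # sum_i a_i * x_i = 0 and sum_i a_i = 0, then return the common point
    # of the convex hulls of the positively and negatively weighted subsets
    points = np.asarray(points, dtype=float)
    A = np.vstack([points.T, np.ones(len(points))])  # (d + 1) x (d + 2) system
    a = np.linalg.svd(A)[2][-1]                      # null-space vector of A
    pos = a > 0
    return points[pos].T @ a[pos] / a[pos].sum()

def iterated_radon_point(models, h):
    # models: array of shape (r**h, d) of flattened model parameter vectors
    d = models.shape[1]
    r = d + 2
    for _ in range(h):
        models = np.array([radon_point(models[i:i + r]) for i in range(0, len(models), r)])
    return models[0]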
1. What is the main contribution of the paper in terms of training procedure for federated learning systems?
2. What are the strengths and weaknesses of the proposed method compared to other methods in the literature?
3. How does the paper address privacy concerns in their algorithm, and how does it compare to other methods in this regard?
4. Can you provide more details about the iterated Radon point method used in the paper, and how it compares to other aggregating models?
5. How does the communication cost of FedDC compare to that of FedAvg, and how does it impact the performance of the method?
6. What is the significance of the assumption 1 in the paper, and how does it relate to the monotonicity of n0 with δ?
7. How does the paper handle the issue of overfitting in their model, and how does it compare to other methods in this regard?
8. Can you provide more details about the experimental evaluation of the method, including the number of communication rounds used and the validation-based early stopping approach?
9. How does the paper address the issue of limited-sized local datasets, and how does it compare to other methods that have addressed this issue?
10. What are some potential future directions for research related to this paper's contributions?
Summary Of The Paper Review
Summary Of The Paper The paper presents a new training procedure for federated learning (FL) systems based on a daisy-chain network. Training the system has two phases, a daisy-chain phase in which models are transmitted from one client to another via a coordinator node, and the standard aggregation phase in which models are averaged according to FedAvg rule. The main motivation is reduced overfitting and improved generalization compared to the standard FedAvg training, especially with limited-sized local datasets. The paper presents PAC-like ( ϵ , δ )-guarantees for their algorithm, and provides a discussion on privacy violation concerns of their algorithm. The method is demonstrated on several datasets and shows improved accuracy over the compared methods. Review Strengths: A novel extension to the FedAvg training procedure which includes model sharing between the clients and aggregation of models using iterated Radon point. Although the discussion on privacy concerns of the algorithm is general and may apply to other FL methods, I think that it is a nice addition to the paper. Most experimental details were given. Code was also provided (for the synthetic data experiment only though). Weaknesses: The comparisons in the empirical evaluation are leaking. Most comparisons are to FedAvg which is rather old and generally isn't very strong. The authors keep stating that their method is state-of-the-art, yet it wasn't really compared against recent methods in order to consider it as such (for example, [1-4]). One of the main claims in favor of the proposed method is increased performance when local datasets are limited in size. Yet the paper misses important related work that addressed this issue as well (e.g., [1, 5]). I think that these studies and similar ones should be addressed in the revised version of the paper. Several alternatives were proposed for aggregating models instead of the standard model averaging (e.g., [6]). I believe the authors should address this line of research as well. The authors state that "The amount of communication per communication round is thus linear in the number of clients and model size, similar to federated averaging"; whilst the first part is true, I do not think that it is indeed similar to the communication cost of FedAvg which is sub-linear in the number of clients. At each communication round, FedDC communicates with all the clients while FedAvg (and similar methods) communicate with a small subset of them. In assumption 1, can you please clarify why for fixed ϵ , n 0 is monotonically increasing with δ ? I think that background on the iterated Radon point method is missing. It cannot be expected from the average reader to be familiar with it. In section 7.2 it is stated the FedDC outperforms FedAvg even though they use the same amount of communication rounds. Can you please clarify how many communication rounds were used? I think that an analysis is missing here. Perhaps FedAvg was in overfit state which resulted in decreased generalization. I think that a better approach to compare between the models would be to declare a maximal number of communication rounds and use validation-based early stopping to select the best model. [1] Achituve, I., Shamsian, A., Navon, A., Chechik, G., & Fetaya, E. (2021). Personalized Federated Learning with Gaussian Processes. arXiv preprint arXiv:2106.15482. [2] Collins, L., Hassani, H., Mokhtari, A., & Shakkottai, S. (2021). Exploiting Shared Representations for Personalized Federated Learning. 
arXiv preprint arXiv:2102.07078. [3] Shamsian, A., Navon, A., Fetaya, E., & Chechik, G. (2021). Personalized Federated Learning using Hypernetworks. arXiv preprint arXiv:2103.04628. [4] Li, Q., He, B., & Song, D. (2021). Model-Contrastive Federated Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10713-10722). [5] Hao, W., El-Khamy, M., Lee, J., Zhang, J., Liang, K. J., Chen, C., & Duke, L. C. (2021). Towards Fair Federated Learning with Zero-Shot Data Augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3310-3319). [6] Yurochkin, M., Agarwal, M., Ghosh, S., Greenewald, K., Hoang, N., & Khazaeni, Y. (2019, May). Bayesian nonparametric federated learning of neural networks. In International Conference on Machine Learning (pp. 7252-7261). PMLR.
ICLR
Title Width transfer: on the (in)variance of width optimization Abstract Optimizing the channel counts for different layers of a convolutional neural network (CNN) to achieve better accuracy without increasing the number of floatingpoint operations (FLOPs) required during the forward pass at test time is known as CNN width optimization. Prior work on width optimization has cast it as a hyperparameter optimization problem, which introduces large computational overhead (e.g., an additional 2× FLOPs of standard training). Minimizing this overhead could therefore significantly speed up training. With that in mind, this paper sets out to empirically understand width optimization by sensitivity analysis. Specifically, we consider the following research question: “Do similar training configurations for a width optimization algorithm also share similar optimized widths?” If this in fact is the case, it suggests that one can find a proxy training configuration requiring fewer FLOPs to reduce the width optimization overhead. Scientifically, it also suggests that similar training configurations share common architectural structure, which may be harnessed to build better methods. To this end, we control the training configurations, i.e., network architectures and training data, for three existing width optimization algorithms and find that the optimized widths are largely transferable across settings. Per our analysis, we can achieve up to 320× reduction in width optimization overhead without compromising the top-1 accuracy on ImageNet. Our findings not only suggest an efficient way to conduct width optimization, but also highlight that the widths that lead to better accuracy are invariant to various aspects of network architectures and training data. 1 INTRODUCTION Better designs for the number of channels for each layer of a convolutional neural network (CNN) can lead to improved test performance for image classification without requiring additional floatingpoint operations (FLOPs) during the forward pass at test time (Guo et al., 2020; Gordon et al., 2018; Yu & Huang, 2019). However, designing the width for efficient CNNs is a non-trivial task that often requires intuition and domain expertise together with trial-and-error to do well. To alleviate the labor-intensive trial-and-error procedure, designing the width for each layer based on computational methods has received growing interests. Examples include using reinforcement learning (He et al., 2018b), evolutionary algorithms (Liu et al., 2019; Chin et al., 2020b), and differentiable parameterization (Guo et al., 2020; Dong & Yang, 2019; Ning et al., 2020) to optimize for layer widths. However, these methods often add a large computational overhead for the width optimization procedure. Concretely, even for efficient methods that use differentiable parameterization (Guo et al., 2020), width optimization takes an additional 2× the training time. To contextualize this overhead, using distributed training on 8 V100 GPUs, it takes approximately 100 GPU hours for training a ResNet50 on the ImageNet dataset (Radosavovic et al., 2020). That is, it takes 300 GPU hours for both width optimization using differentiable methods (Guo et al., 2020) and training the optimized ResNet50. Additionally, width optimization algorithms are often parameterized by some target testtime resource constraints, e.g., FLOPs. 
As a result, the computational overhead scales linearly with the number of target constraint levels considered, which can be exceedingly time-consuming for optimizing CNNs for embodied AI applications (Chin et al., 2020b). Reducing the overhead for width optimization, therefore, would have material practical benefits. Fundamentally, one of the key reasons why width optimization is so costly is due its limited understanding by the community. Without assuming or understanding the structure of the problem, the best hope is to conduct black-box optimization whenever training configurations, datasets, or architectures are changed. In this work, we take the first step to empirically understand the structure underlying the width optimization problem by changing network architectures and the properties of training datasets, and observing how they affect width optimization. Such sensitivity analysis techniques have been used in various contexts in the deep learning literature for empirically unveiling the black-box of deep learning (Morcos et al., 2018; Tan & Le, 2019; Li et al., 2018). Similarly, we manipulate the network architectures and dataset properties to aid in our understanding of the (in)variances of width optimization. A width optimization algorithm, A, takes in a training configuration, C = (D,N ), and outputs a set of optimized widths, w∗, which maxmizes the validation accuracy without additional test-time FLOPs. A can be seen as neural architecture search algorithms that search for layer-wise channel counts. C consists of initial network, N , and training dataset, D. In this paper, we systematically analyze how similar C affects w∗. If similar inputs to the width optimization algorithms result in similar outputs, one can exploit this commonality to reduce the width optimization overhead, especially if the two input configurations have markedly different FLOPs requirements as shown schematically in Figure 1. As a concrete example, if optimizing the widths of a wide CNN (high FLOPS) and a narrow CNN (low FLOPs) results in widths that differ only by a multiplier, one can reduce the computational overhead of width optimization by computing widths for the low FLOPS, narrow CNN and adjusting them to accommodate the high FLOPs, wide CNN. In addition to the potential efficiency benefits from understanding the structure of the width optimization problem, such an exploration can also have scientific benefits. Specifically, via sensitivity analysis, we can provide quantitative results to the following question: Does the level of overparameterization, number of training samples, and the resolution of input images affect width optimization? And if so, by how much? Based on a comprehensive empirical analysis, we provide the following contributions: • We find that there exist shared structures in the width optimization problem across a wide variety of network and dataset configurations. Specifically, the outputs of width optimization algorithms are largely robust to perturbation of the network’s depth and width via the multiplier method and to perturbation of the dataset’s sample size and resolution. • We demonstrate a practical implication of the previous finding by showing that one can achieve 320× reduction in width optimization overhead for a scaled-up MobileNetV2 and ResNet18 on ImageNet without hurting the accuracy improvements brought by width optimization. • We find that, for ResNet18 on ImageNet, width optimization has limited benefits for very deep models. 
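To make the width-multiplier example above concrete, the sketch below counts convolution FLOPs for a plain stack of 3×3 convolutions; the layer widths and feature-map size are made up for illustration and are not one of the networks studied in this paper.

def conv_flops(widths, spatial=32, kernel=3, in_channels=3):
    # rough forward-pass FLOPs of a plain CNN: layer i maps c_in -> widths[i]
    # channels with a kernel x kernel convolution over a spatial x spatial map
    flops, c_in = 0, in_channels
    for c_out in widths:
        flops += spatial * spatial * kernel * kernel * c_in * c_out
        c_in = c_out
    return flops

base = [32, 64, 128, 256]
narrow = [w // 2 for w in base]  # width multiplier 0.5: a cheap proxy network
print(conv_flops(base) / conv_flops(narrow))  # ~4.0: FLOPs scale roughly quadratically in width

Because FLOPs grow roughly quadratically in width, widths optimized on a narrow proxy need only be rescaled to recover the FLOPs budget of the wide target, which is what makes the transfer in Figure 1 attractive.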
2 RELATED WORK 2.1 WIDTH OPTIMIZATION The layer-by-layer widths of a deep CNN are often regarded as a hyperparameter optimized to improve classification accuracy. Specifically, the width multiplier method (Howard et al., 2017) was introduced in MobileNet to arrive at models with different FLOPs and accuracy profiles and has been widely adopted in many papers (He et al., 2018a; Tan & Le, 2019; Chin et al., 2020a). Besides simply scaling the width to arrive at CNNs with different FLOPs, width (or channel) optimization has received growing interest recently as a means to improve the efficiency of deployed deep CNNs. To optimize the width of a CNN, one approach is Prune-then-Grow, which uses channel pruning methods to arrive at a down-sized CNN with non-trivial layerwise channel counts followed by re-growing the CNN to its original FLOPs using the width multiplier method (Gordon et al., 2018). Another approach is Grow-then-Prune, which uses the width multiplier method to enlarge the CNN followed by channel pruning methods to trim down channels to match its pre-grown FLOPs (Yu & Huang, 2019; Liu et al., 2019; Guo et al., 2020). The aim of both of these methods is to improve performance while maintaining a given FLOPs count. The schematic view of the two approaches is visualized in the left panel of Figure 2. While there are many papers on channel pruning (Li et al., 2016; Molchanov et al., 2019), they mostly focus their analysis on down-sizing the pre-trained models whereas we focus on improving the classification accuracy of a network by optimizing its width without affecting test-time FLOPs. While one can use either the Prune-then-Grow or Grow-then-Prune strategies to arrive at a CNN of equivalent FLOPs, it is not clear if such strategies generally improve performance over the unoptimized baseline as it is not verified in most channel pruning papers. As a result, in this paper, we focus on analyzing algorithms that have demonstrated the effectiveness over the baseline (uniform) width configurations in either Prune-then-Grow or Grow-then-Prune settings. 2.2 EMPIRICAL SENSITIVITY ANALYSIS FOR UNDERSTANDING DEEP LEARNING Controlling the components of deep learning to further shed light on understanding and optimization is an important direction complementing theoretical understanding for deep learning. While our work is the first that focuses on empirically understanding the sensitivity of width optimization in deep CNNs, we discuss efforts in empirically understanding deep learning more generally. Empirical analysis to gain insight into hyperparameter selection for training deep neural networks has received great interest. Goyal et al. (2017) have shown that the accuracy can be retained across a wide range of batch sizes when the number of epochs is held fixed while the batch size is scaled linearly. Shallue et al. (2019) have conducted a comprehensive analysis that sheds light on the relationship among batch size, training iterations, and learning rate. Tan & Le (2019) have empirically controlled the network’s architecture to identify a more cost-efficient way of scaling deep models with high classification accuracy. Radosavovic et al. (2020) have empirically manipulated the architectural choices for CNNs and identified a parameterizable relationship among depth, channel counts, and group size for ResNets. There are also papers using analysis as a tool for better understanding deep learning phenomena. Morcos et al. 
(2018) have used Canonical Correlation Analysis to empirically understand whether networks with different architectures and optimization behaviors fall into different clusters. Li et al. (2018) have controlled the network’s architecture and observed that networks of different architectures have different empirical loss landscapes. Frankle et al. (2019) altered the winning-ticket generation procedures in the lottery ticket hypothesis (Frankle & Carbin, 2018) and observed that an empirical stability measure predicts the success of winning tickets well. Morcos et al. (2019) have empirically controlled the training configuration for winning-ticket generation in the lottery ticket hypothesis and discovered the transferability of winning tickets. 3 APPROACH 3.1 NOTATION We use [n] to represent the set {1, 2, ..., n}. A width optimization algorithm, A, is a function that takes a training configuration, C = (D, N), which consists of a training dataset, D, and a network to be optimized, N. The output of A is a vector of width multipliers, w∗, whose dimension is L (the number of layers). Let Fi denote the channel count of layer i; a network with optimized channel counts {wiFi ∀ i ∈ [L]} is expected to have the same FLOPs as a network with the original channel counts {Fi ∀ i ∈ [L]} but better test accuracy when trained with D. Following common terminology, a CNN is divided into stages, where the convolutional blocks in each stage share the same output resolution. Within each stage, several convolutional blocks are repeated, where a convolutional block consists of several convolutional layers, such as the bottleneck block in ResNet (He et al., 2016) and the inverted residual block in MobileNetV2 (Sandler et al., 2018). We use the concepts of stage and block to describe the extrapolation mechanisms in Section 3.3. 3.2 WIDTH OPTIMIZATION METHODS Ideally, we only care about algorithms A that “solve” the width optimization problem. However, the width optimization problem is inherently combinatorially hard. As a result, we use state-of-the-art width optimization algorithms as probes to understand the problem further. Specifically, we consider MorphNet (Gordon et al., 2018), AutoSlim (Yu & Huang, 2019), and DMCP (Guo et al., 2020). Here, we explicitly consider papers that have analyzed width optimization, i.e., improving the accuracy while maintaining the test-time FLOPs requirement. We also limited our investigation to methods with publicly available code to ensure correctness of implementation. 3.3 PROJECTION AND EXTRAPOLATION We consider various projections that map a large-scale training setting C = (D, N) into a small-scale proxy training setting Ĉ = (D̂, N̂), which allows us to probe the structure of A for all three considered algorithms. More specifically, we consider projecting the network N down to narrower networks via the width multiplier method and to shallower networks via the depth multiplier method (Tan & Le, 2019). For projection on the dataset D, we consider subsampling the number of training images and changing the input resolution. The extrapolation function involves two stages, matching dimensions and matching test-time FLOPs between w∗ and ŵ∗, i.e., E(ŵ) := αMdim(ŵ), where Mdim(ŵ) is a function that matches dimensions and α is responsible for matching test-time FLOPs. If the depth multiplier method is involved during projection, ŵ∗ and w∗ will have different dimensions. As a result, we propose the following two Mdim to extrapolate the found width multipliers to higher dimensions.
• Stack-last-block: Stack the width multipliers of the last block of each stage until the desired depth is met. • Stack-average-block: To avoid mismatches with residual connections, we exclude the first block of each stage and compute the average of the width multipliers across all remaining blocks in the stage, then stack the average width multipliers until the desired depth is met. Note that since existing network designs share the same channel widths for all the blocks in each stage, the above two Mdim yield the same result when applied to networks with un-optimized widths. After the dimension is matched, we apply a multiplier α such that the resulting network has test-time FLOPs similar to those induced by w∗. A schematic view of extrapolation is shown in the right panel of Figure 2. 3.4 EXPERIMENTAL SETUP We used the ImageNet dataset (Deng et al., 2009) throughout the experiments. Unless stated otherwise, we use an input resolution of 224. For CNNs, we considered both ResNet18 (He et al., 2016) and MobileNetV2 (Sandler et al., 2018). Models were each trained on a single machine with 8 V100 GPUs for all the experiments. The width multiplier method applies to all the layers in the CNNs, while the depth multiplier excludes the first and the last stage of MobileNetV2 since there is only one block in each of them. After we obtained w∗ or E(ŵ∗), we trained the corresponding network from scratch using the same hyperparameters to analyze its performance. The training hyperparameters are detailed in Appendix A. We repeated each experiment three times with different random seeds and report the mean and standard deviation. 4 EXPERIMENTS In this section, we empirically investigate the transferability of the optimized widths across different projection and extrapolation strategies. Specifically, we study projection across architectures by varying widths and depths, and projection across dataset properties by sub-sampling the training set and the input resolution. In addition to analyzing each of these four settings independently, we also investigate a compound projection that involves all four jointly. To measure transferability, we plot the ImageNet top-1 accuracy induced by w∗ and E(ŵ∗) across training configurations that have different width optimization overheads. Width optimization overhead refers to the FLOPs needed to carry out width optimization. If the widths are transferable, we should observe a horizontal line across different width optimization overheads, suggesting that performance is not compromised by deriving the widths from a smaller-FLOPs configuration. Moreover, we also plot the ImageNet top-1 accuracy for the un-optimized baseline to characterize whether width optimization or width transfer is even useful for some configurations. 4.1 PROJECTION: WIDTH Here, we focus on answering the following question: “Do networks with different initial widths (varied by the width multiplier method) share common structure in their optimized widths?” The answer to this question is unclear from the existing literature, as the current practice is to re-run the optimization across different networks (Guo et al., 2020; Gordon et al., 2018; Liu et al., 2019). If the optimized widths are similar across different initial widths, it suggests that the quality of the vector of channel counts is scale-invariant given the current practice of training deep CNNs and the dataset. It also has the practical benefit that one can use width transfer to reduce the overhead incurred in width optimization.
On the other hand, if the optimized widths are dissimilar, it suggests that not only the direction of the vector of channel counts is important, but also its magnitude. That is, for different magnitudes, we need different orientations. Practically, it suggests that existing practice, though costly, is empirically proved to be necessary. To empirically study the aforementioned question, we considered the source width multipliers of {0.312, 0.5, 0.707, 1, 1.414, 1.732} for N̂ and the target width multiplier of 1.732 for N . The set is chosen based on square roots of width optimization overhead. We analyzed the similarity between E(ŵ∗) and w∗ in the accuracy space. In Figure 3a and 3b, we plot the ImageNet top-1 accuracy for the baseline (1.732× wide network) and networks induced by E(ŵ∗) and w∗. For ResNet18, the width optimization overhead can be saved by up to 96% for all three algorithms without compromising the accuracy improvements gained by the width optimization. On MobileNetV2, AutoSlim and MorphNet can transfer well and save up to 80% width optimization overhead. While DMCP for MobileNetV2 results in 0.4% top-1 accuracy loss when using width transfer, the transferred width can still outperform the uniform baseline, which is encouraging for applications that allow such accuracy degradation in exchange for 83% width optimization overhead savings. More specifically, that would reduce compute time from 160 GPU-hours all the way to 30 GPU-hours for MobileNetV2 measured using a batch size of 1024, a major saving. Since E(ŵ∗) in this case is just applying αŵ∗ to N where α makes the resulting network have the same FLOPs as the network induced by w∗, our results suggest that a good orientation for the optimized channel vector continues to be suitable across a wide range of magnitudes. Since the optimized widths are highly transferable, we are interested in the resulting widths for both CNNs. We find that the later layers tend to increase a lot compared to the un-optimized ones. Concretely, in un-optimized networks, ResNet18 has 512 channels in the last layer and MobileNetV2 has 1280 channels in the last layer. In contrast, the average optimized width has 1300 channels for ResNet18 and 3785 channels for MobileNetV2. We visualize the average widths for ResNet18 and MobileNetV2 (average across optimized widths) in Appendix (Figure A1). 4.2 PROJECTION: DEPTH Next, we asked whether networks with different initial depths share common structure in their optimized widths. Because making a network deeper will add new layers with no corresponding optimized width, we will need a mechanism to map the vector optimized widths, w∗, to a vector with far more elements. We empirically investigate two methods for aligning across depth, which are detailed in Section 3.3. We considered {1, 2, 3, 4} as the depth multipliers for N̂ and use 4 for N . Similar to the analysis done in Section 4.1, we analyzed the similarity in the accuracy space. Here, we first compared the two extrapolation methods proposed in Section 3.3 using DMCP for ResNet18 and MobileNetV2. As shown in Figure 4, both strategies perform similarly. We focus on the stack-average-block strategy for the following experiments. As shown in Figure 3c and 3d, we find that the optimized widths stay competitive via simple extrapolation methods and up to 75% width optimization overhead can be saved if we were to optimize the width using width transfer for all three algorithms and two networks. 
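For concreteness, below is a minimal sketch of the two dimension-matching strategies from Section 3.3, under the simplifying assumption of one width multiplier per block (in the paper a block contains several layers); the function names are illustrative, and the FLOPs-matching α of E(ŵ) = αMdim(ŵ) would be applied to the stacked result afterwards.

```python
def stack_last_block(stage_multipliers, target_blocks):
    # Stack-last-block: repeat the last block's multiplier of the stage
    # until the target (deeper) stage depth is reached.
    out = list(stage_multipliers)
    while len(out) < target_blocks:
        out.append(stage_multipliers[-1])
    return out

def stack_average_block(stage_multipliers, target_blocks):
    # Stack-average-block: exclude the first block of the stage, average the
    # remaining blocks, and repeat that average until the target depth is reached.
    rest = stage_multipliers[1:] if len(stage_multipliers) > 1 else stage_multipliers
    avg = sum(rest) / len(rest)
    out = list(stage_multipliers)
    while len(out) < target_blocks:
        out.append(avg)
    return out

# Example: a 3-block stage extrapolated to 6 blocks (depth multiplier 2).
stage = [1.2, 0.8, 1.0]
print(stack_last_block(stage, 6))     # [1.2, 0.8, 1.0, 1.0, 1.0, 1.0]
print(stack_average_block(stage, 6))  # [1.2, 0.8, 1.0, 0.9, 0.9, 0.9]
```

As noted above, the two strategies coincide on networks with un-optimized widths, since all blocks in a stage then share the same multiplier.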
This finding also suggests that the relative values of optimized widths share common structure across networks that differ in depth. In other words, the pattern of width multipliers across depth is scale-invariant. Interestingly, we observe that width optimization itself, even when optimized directly for that configuration, has limited benefits (for all three algorithms) for much deeper ResNet18 models, which suggests that width optimization may be less useful when the network to be optimized is heavily over-parameterized. 4.3 PROJECTION: RESOLUTION The input resolution and the channel counts of a CNN are known to be related when it comes to the test accuracy of a CNN. As an example, it is known empirically that a wider CNN can benefit from inputs with a higher resolution than a narrower net can (Tan & Le, 2019). As a result, it is not clear if width optimization algorithms are sensitive to input resolution. We therefore asked whether networks trained on different input resolutions also share structure in their optimized widths. If the optimized widths are indeed similar, this suggests that although wider networks benefit more from a higher resolution inputs, the non-uniform widths that result in better performance are similar. On the other hand, if the optimized widths are different, it suggests that, when it comes to the test accuracy, the relationship between channel counts and input resolution is more involved than the level of over-parameterization. To empirically study the aforementioned question, we considered the input resolution for D̂ to be {64, 160, 224, 320} and choose a D of 320. As shown in Figure 5a and 5b, we find that except for MorphNet targeting ResNet18, all other algorithm and network combinations can achieve up to 96% width optimization overhead savings with the optimized widths that are still better than the uniform baseline. By saving 75% width optimization overhead, we can stay close to the performance obtained via direct optimization. Interestingly, we find that MorphNet had a very different optimized widths when transferred from resolution 64 for ResNet18, which leads to the worse performance for ResNet18 compared to direct optimization. The similarity among the optimized widths are detailed in Figure A2 in Appendix. 4.4 PROJECTION: DATASET SIZE The dataset size is often critical for understanding the generalization performance of a learning algorithm. Here, we would like to understand how width optimization algorithms are affected by the size of training data. We considered sub-sampling the ImageNet dataset to result in {5%, 10%, 20%, 50%, 100%} of the original training data. Similar to previous analysis, we tried to transfer the optimized widths obtained using the smaller configurations to the largest configuration, i.e., 100% of the original training data. As shown in Figure 5c and 5d, widths optimized for smaller dataset sizes transfer well to large dataset sizes. That is, 95% width optimization overhead can be saved and still outperform the uniform baseline for both networks. On the other hand, 90% width optimization overhead can be saved and still match the performance of direct optimization for DMCP. This suggests that the amount of training data barely affects width optimization, especially for DMCP, which is surprising. We further conducted experiments using CIFAR-100 and have two findings. First, the optimized widths are similar across these two datasets for DMCP. 
Second, width optimization results in overfitting for CIFAR-100 and calls for cross-validation techniques in width optimization. The supporting materials for these two findings are in Appendix C. 4.5 COMPOUND PROJECTION From the previous analyses, we find that the optimized widths are largely transferable across the various projection methods independently. Here, we further empirically analyzed whether the optimized widths are transferable under compound projection. To do so, we considered linearly interpolating all four projection methods and analyzed whether the widths optimized using cost-efficient settings transfer to the most costly setting. Specifically, let a tuple (width, depth, resolution, dataset size) denote a training configuration. We considered Ĉ to be {(0.312,1,64,5%), (0.707,1,160,10%), (1,1,224,50%), (1.414,2,320,100%)} and C to be (1.414,2,320,100%). As shown in Figure 6, the optimized width is transferable across compound projection. Specifically, we can achieve up to 320× width optimization overhead reduction with width transfer for the best-performing algorithm, DMCP. This also suggests that the four projection dimensions are not tightly coupled for width optimization. 5 DISCUSSION In this paper, we take a first step in understanding the transferability of optimized widths across different width optimization algorithms and projection dimensions. This investigation sheds light on the width optimization problem, which is often regarded as a black-box problem and tackled with general optimization methods (Liu et al., 2019; Guo et al., 2020; Gordon et al., 2018). More specifically, we show that there are common structures in the optimized widths obtained across a wide range of settings, such that one can successfully transfer the optimized widths to various settings with competitive performance. Per our analysis, we can achieve up to a 320× reduction in width optimization overhead without compromising the top-1 accuracy on ImageNet. Our findings not only suggest an efficient alternative for conducting width optimization, but also imply that width optimization can be done on lower-dimensional inputs, which can be beneficial since it allows a more effective traversal of the design space. While encouraging, our study also presents some limitations. Specifically, we empirically consider two types of CNNs: ResNet18 and MobileNetV2. While these networks are currently popular in our community, it is not clear whether such encouraging characteristics hold for other CNNs. Additionally, extending this work beyond convolutional neural networks is an interesting direction going forward. A TRAINING HYPERPARAMETERS We use PyTorch (Paszke et al., 2019) as our deep learning framework. We largely follow Radosavovic et al. (2020) for training hyperparameters. Specifically, the learning rate grows linearly from 0 to 0.2s within the first 5 epochs, where s depends on the batch size B, i.e., s = B/256. We use a batch size of 1024 with distributed training over 8 GPUs, and we have not used synchronized batch normalization layers. We set the number of training epochs to 100. For the optimizer, we use stochastic gradient descent (SGD) with 0.9 Nesterov momentum. As for data augmentation, we have adopted 0.1 label smoothing, random resized crop, random horizontal flips, and RandAugment (Cubuk et al., 2020) with parameters N = 2 and M = 9, following common practice in a popular repository1. Note that these training hyperparameters are fixed for all experiments.
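As a supplement to the recipe just described, here is a minimal sketch of the learning-rate warmup in Appendix A (linear warmup to 0.2·B/256 over the first 5 epochs, stepped per epoch for brevity). The schedule after warmup follows Radosavovic et al. (2020) and is not spelled out here, so holding the peak rate below is only an assumption for illustration.

```python
def learning_rate(epoch, batch_size=1024, warmup_epochs=5, peak_scale=0.2):
    # Linear warmup from 0 to peak_scale * (batch_size / 256) over the first
    # warmup_epochs epochs; afterwards we simply hold the peak rate (an
    # assumption -- the paper follows Radosavovic et al. (2020) for the rest).
    peak = peak_scale * batch_size / 256.0
    if epoch < warmup_epochs:
        return peak * (epoch + 1) / warmup_epochs
    return peak

# With batch size 1024 the peak learning rate is 0.2 * 1024 / 256 = 0.8.
print([round(learning_rate(e), 2) for e in range(8)])
# [0.16, 0.32, 0.48, 0.64, 0.8, 0.8, 0.8, 0.8]
```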
In particular, we always train for 100 epochs regardless of the dataset size when we conduct the dataset projection in Section 4.4. For hyperparameters specific to width optimization algorithms, we largely follow the hyperparameters used in the respective methods. Specifically, we use 40 epochs to search for optimized widths for all three algorithms. We enlarge the network by 1.5× for DMCP and AutoSlim. Since MorphNet has FLOPs-aware regularization, we normalize the FLOPs for each network and use λ = 1 for all experiments. B WIDTH VISUALIZATION Figure A1 (panels: (a) ResNet18, (b) MobileNetV2): The average optimized width for ResNet18 and MobileNetV2, averaged across the optimized widths in Section 4.1. We plot the mean as a solid line with the shaded area representing the standard deviation. C CIFAR-100 EXPERIMENTS In this section, we are interested in understanding how dependent width optimization is on the training data. To do so, we use CIFAR-100 as the dataset and sweep the network configuration to cover both depth and width multipliers. Specifically, we consider widths {0.312, 0.5, 0.707, 1, 1.414, 1.732} and depths {1, 2, 3, 4} for ResNet18 using DMCP. To allow using the same architectures as in the ImageNet experiments, we alter the input resolution for CIFAR-100 from 32×32 to 64×64. Finally, we compare the pairwise cosine similarity between the architectures searched across the two different datasets, i.e., ImageNet and CIFAR-100. As shown in Table 1, we find that the channel counts searched on these two datasets are more similar to each other than the optimized widths are to the uniform baselines. At first glance, one might conclude that the optimal channel configurations are similar across these two datasets. However, we find that the optimized widths perform worse than the uniform (un-optimized) widths on CIFAR-100 due to over-fitting. This suggests that a cross-validation method should be used for width optimization algorithms. 1https://github.com/rwightman/pytorch-image-models (We use their implementation for RandAugment and use ‘rand-m9-mstd0.5’ as the value for the ‘aa’ flag.) D SIMILARITY AMONG WIDTH MULTIPLIERS In Section 4, we analyzed the similarity between w∗ and ŵ∗ in the accuracy space. Here, we show that w∗ and E(ŵ∗) are in fact similar in the vector space using cosine similarity. Figure A2 (panels (a)–(h): ResNet18 and MobileNetV2 under width, depth, resolution, and dataset-size projection): Pairwise cosine similarity between w∗ and E(ŵ∗) for different width optimization algorithms and projection strategies. Within each method (diagonal blocks), w∗ and E(ŵ∗) are generally similar.
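As a small companion to Appendix D, the following sketch computes the pairwise cosine similarities visualized in Figure A2; the width vectors used in the example are hypothetical placeholders, not the optimized multipliers from the paper.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine similarity between two width(-multiplier) vectors.
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pairwise_cosine(vectors):
    # Matrix of pairwise cosine similarities, as shown in Figure A2.
    n = len(vectors)
    return np.array([[cosine_similarity(vectors[i], vectors[j]) for j in range(n)]
                     for i in range(n)])

# Hypothetical example: a uniform baseline versus a directly optimized width
# vector and a width vector transferred from a proxy.
widths = [
    [1.0, 1.0, 1.0, 1.0],     # uniform baseline
    [0.7, 0.9, 1.3, 2.1],     # w*, direct optimization (illustrative values)
    [0.72, 0.88, 1.25, 2.2],  # E(w_hat*), transferred (illustrative values)
]
print(pairwise_cosine(widths).round(3))
```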
1. What is the main contribution of the paper regarding NAS-based channel search algorithms? 2. What are the strengths and weaknesses of the proposed method in terms of efficiency and technical contribution? 3. How does the reviewer assess the significance and novelty of the empirical results presented in the paper? 4. What are the limitations and suggestions for improving the experiments in the paper? 5. Are there any concerns or suggestions regarding the presentation and organization of the paper?
Review
Review ** Summary The paper mainly aims to study and design efficient proxy tasks to speed up NAS-based channel search algorithms (called “width optimization” in the paper). The methodology is, first to run search procedures on a simplified proxy task (e.g. using fewer channels or depths, smaller input resolutions, or smaller scales of dataset), then apply a simple extrapolation rule to transfer the searched width configurations to the original search space if needed. The paper benchmarks three search methods (AutoSlim, DMCP and MorphNet) on two datasets (ImageNet and Cifar-100) with a variety of proxy tasks. Experiments shows that many of the proxy configurations are good enough to obtain improved search performances, while the search cost is reduced by orders of magnitude. ** Contribution and significance The empirical findings of the paper are interesting: it is good to know that channel search can be efficiently done in much smaller proxy tasks, whose performances keep consistent with that of searching in the original space. However, I still feel that the technical contribution is limited. The significance of the empirical results is relatively low. First, introducing small proxy task for speedup is a convention and widely used in many general NAS frameworks (e.g. NASNet). While in the paper, only a very restricted search space (i.e. width) is considered, which seems to be neither novel nor general. Second, though the paper reports considerable speedup when searching with the proxy tasks, to my knowledge it may be because channel search is not that difficult – even though the search space of width is huge, the “pattern” of optimal solutions seems to be simple (Fig A1). Fig 3, 4 also imply that results of different search methods is very similar. Finally, as mentioned by EfficientNet, adjusting network widths only may result in relatively limited improvements; it is important to search width, depth and resolution jointly. So, I think the authors may study the joint search space on the proxy tasks to improve the significance. ** Experiments I am not satisfied with the diversity of the experiments especially in Sec 4.1~4.5. The underlying optimal (i.e. ground-truth) widths for ResNet18 and MobileNetv2 are similar under different configurations respectively. It is unclear to distinguish whether the search methods capture some architecture bias so that they are insensitive to different proxy configurations, which makes the conclusion less convincing. So, I think at least experiments under multiple target FLOPs budgets are required in each subsections respectively. More architectures and benchmarks on other tasks, e.g. object detection, are also encouraged here. ** Presentation The writing and organization of the paper is fair. The presentation of the experiments needs to improve. For example, for each chart in Fig 3~6 the baseline configurations (i.e. overhead=0) are recommended to describe in the caption respectively.
ICLR
Title Width transfer: on the (in)variance of width optimization Abstract Optimizing the channel counts for different layers of a convolutional neural network (CNN) to achieve better accuracy without increasing the number of floatingpoint operations (FLOPs) required during the forward pass at test time is known as CNN width optimization. Prior work on width optimization has cast it as a hyperparameter optimization problem, which introduces large computational overhead (e.g., an additional 2× FLOPs of standard training). Minimizing this overhead could therefore significantly speed up training. With that in mind, this paper sets out to empirically understand width optimization by sensitivity analysis. Specifically, we consider the following research question: “Do similar training configurations for a width optimization algorithm also share similar optimized widths?” If this in fact is the case, it suggests that one can find a proxy training configuration requiring fewer FLOPs to reduce the width optimization overhead. Scientifically, it also suggests that similar training configurations share common architectural structure, which may be harnessed to build better methods. To this end, we control the training configurations, i.e., network architectures and training data, for three existing width optimization algorithms and find that the optimized widths are largely transferable across settings. Per our analysis, we can achieve up to 320× reduction in width optimization overhead without compromising the top-1 accuracy on ImageNet. Our findings not only suggest an efficient way to conduct width optimization, but also highlight that the widths that lead to better accuracy are invariant to various aspects of network architectures and training data. 1 INTRODUCTION Better designs for the number of channels for each layer of a convolutional neural network (CNN) can lead to improved test performance for image classification without requiring additional floatingpoint operations (FLOPs) during the forward pass at test time (Guo et al., 2020; Gordon et al., 2018; Yu & Huang, 2019). However, designing the width for efficient CNNs is a non-trivial task that often requires intuition and domain expertise together with trial-and-error to do well. To alleviate the labor-intensive trial-and-error procedure, designing the width for each layer based on computational methods has received growing interests. Examples include using reinforcement learning (He et al., 2018b), evolutionary algorithms (Liu et al., 2019; Chin et al., 2020b), and differentiable parameterization (Guo et al., 2020; Dong & Yang, 2019; Ning et al., 2020) to optimize for layer widths. However, these methods often add a large computational overhead for the width optimization procedure. Concretely, even for efficient methods that use differentiable parameterization (Guo et al., 2020), width optimization takes an additional 2× the training time. To contextualize this overhead, using distributed training on 8 V100 GPUs, it takes approximately 100 GPU hours for training a ResNet50 on the ImageNet dataset (Radosavovic et al., 2020). That is, it takes 300 GPU hours for both width optimization using differentiable methods (Guo et al., 2020) and training the optimized ResNet50. Additionally, width optimization algorithms are often parameterized by some target testtime resource constraints, e.g., FLOPs. 
As a result, the computational overhead scales linearly with the number of target constraint levels considered, which can be exceedingly time-consuming for optimizing CNNs for embodied AI applications (Chin et al., 2020b). Reducing the overhead for width optimization, therefore, would have material practical benefits. Fundamentally, one of the key reasons why width optimization is so costly is due its limited understanding by the community. Without assuming or understanding the structure of the problem, the best hope is to conduct black-box optimization whenever training configurations, datasets, or architectures are changed. In this work, we take the first step to empirically understand the structure underlying the width optimization problem by changing network architectures and the properties of training datasets, and observing how they affect width optimization. Such sensitivity analysis techniques have been used in various contexts in the deep learning literature for empirically unveiling the black-box of deep learning (Morcos et al., 2018; Tan & Le, 2019; Li et al., 2018). Similarly, we manipulate the network architectures and dataset properties to aid in our understanding of the (in)variances of width optimization. A width optimization algorithm, A, takes in a training configuration, C = (D,N ), and outputs a set of optimized widths, w∗, which maxmizes the validation accuracy without additional test-time FLOPs. A can be seen as neural architecture search algorithms that search for layer-wise channel counts. C consists of initial network, N , and training dataset, D. In this paper, we systematically analyze how similar C affects w∗. If similar inputs to the width optimization algorithms result in similar outputs, one can exploit this commonality to reduce the width optimization overhead, especially if the two input configurations have markedly different FLOPs requirements as shown schematically in Figure 1. As a concrete example, if optimizing the widths of a wide CNN (high FLOPS) and a narrow CNN (low FLOPs) results in widths that differ only by a multiplier, one can reduce the computational overhead of width optimization by computing widths for the low FLOPS, narrow CNN and adjusting them to accommodate the high FLOPs, wide CNN. In addition to the potential efficiency benefits from understanding the structure of the width optimization problem, such an exploration can also have scientific benefits. Specifically, via sensitivity analysis, we can provide quantitative results to the following question: Does the level of overparameterization, number of training samples, and the resolution of input images affect width optimization? And if so, by how much? Based on a comprehensive empirical analysis, we provide the following contributions: • We find that there exist shared structures in the width optimization problem across a wide variety of network and dataset configurations. Specifically, the outputs of width optimization algorithms are largely robust to perturbation of the network’s depth and width via the multiplier method and to perturbation of the dataset’s sample size and resolution. • We demonstrate a practical implication of the previous finding by showing that one can achieve 320× reduction in width optimization overhead for a scaled-up MobileNetV2 and ResNet18 on ImageNet without hurting the accuracy improvements brought by width optimization. • We find that, for ResNet18 on ImageNet, width optimization has limited benefits for very deep models. 
2 RELATED WORK 2.1 WIDTH OPTIMIZATION The layer-by-layer widths of a deep CNN are often regarded as a hyperparameter optimized to improve classification accuracy. Specifically, the width multiplier method (Howard et al., 2017) was introduced in MobileNet to arrive at models with different FLOPs and accuracy profiles and has been widely adopted in many papers (He et al., 2018a; Tan & Le, 2019; Chin et al., 2020a). Besides simply scaling the width to arrive at CNNs with different FLOPs, width (or channel) optimization has received growing interest recently as a means to improve the efficiency of deployed deep CNNs. To optimize the width of a CNN, one approach is Prune-then-Grow, which uses channel pruning methods to arrive at a down-sized CNN with non-trivial layerwise channel counts followed by re-growing the CNN to its original FLOPs using the width multiplier method (Gordon et al., 2018). Another approach is Grow-then-Prune, which uses the width multiplier method to enlarge the CNN followed by channel pruning methods to trim down channels to match its pre-grown FLOPs (Yu & Huang, 2019; Liu et al., 2019; Guo et al., 2020). The aim of both of these methods is to improve performance while maintaining a given FLOPs count. The schematic view of the two approaches is visualized in the left panel of Figure 2. While there are many papers on channel pruning (Li et al., 2016; Molchanov et al., 2019), they mostly focus their analysis on down-sizing the pre-trained models whereas we focus on improving the classification accuracy of a network by optimizing its width without affecting test-time FLOPs. While one can use either the Prune-then-Grow or Grow-then-Prune strategies to arrive at a CNN of equivalent FLOPs, it is not clear if such strategies generally improve performance over the unoptimized baseline as it is not verified in most channel pruning papers. As a result, in this paper, we focus on analyzing algorithms that have demonstrated the effectiveness over the baseline (uniform) width configurations in either Prune-then-Grow or Grow-then-Prune settings. 2.2 EMPIRICAL SENSITIVITY ANALYSIS FOR UNDERSTANDING DEEP LEARNING Controlling the components of deep learning to further shed light on understanding and optimization is an important direction complementing theoretical understanding for deep learning. While our work is the first that focuses on empirically understanding the sensitivity of width optimization in deep CNNs, we discuss efforts in empirically understanding deep learning more generally. Empirical analysis to gain insight into hyperparameter selection for training deep neural networks has received great interest. Goyal et al. (2017) have shown that the accuracy can be retained across a wide range of batch sizes when the number of epochs is held fixed while the batch size is scaled linearly. Shallue et al. (2019) have conducted a comprehensive analysis that sheds light on the relationship among batch size, training iterations, and learning rate. Tan & Le (2019) have empirically controlled the network’s architecture to identify a more cost-efficient way of scaling deep models with high classification accuracy. Radosavovic et al. (2020) have empirically manipulated the architectural choices for CNNs and identified a parameterizable relationship among depth, channel counts, and group size for ResNets. There are also papers using analysis as a tool for better understanding deep learning phenomena. Morcos et al. 
(2018) have used Canonical Correlation Analysis to empirically understand if networks of different architectures and optimization behavior are of different clusters. Li et al. (2018) have controlled the network’s architecture and observed that networks of different architectures have different empirical loss landscapes. Frankle et al. (2019) altered the winning ticket generation procedures in the lottery ticket hypothesis (Frankle & Carbin, 2018) and observed that an empirical stability measure predicts well the success of winning tickets. Morcos et al. (2019) have empirically controlled the training configuration for the winning ticket generation in the lottery ticket hypothesis and discovered the transferability of winning tickets. 3 APPROACH 3.1 NOTATION We use [n] to represent a set {1, 2, ..., n}. A width optimization algorithm, A, is a function that takes a training configuration, C = (D,N ), which consists of a training dataset, D, and a network to be optimized, N . The output of A is a vector of width multipliers, w∗, whose dimension is L (the number of layers). Let Fi denote the channel counts for layer i, it is expected that a network with optimized channel counts {wiFi ∀ i ∈ [L]} has the same FLOPs as a network with the original channel counts {Fi ∀i ∈ [L]} but has better test accuracy when trained with D. Following common terminologies, a CNN is divided into stages where the convolutional blocks in each stage share the same output resolutions. Within each stage, several convolutional blocks are repeated where a convolutional block consists of several convolutional layers such as the bottleneck block in ResNet (He et al., 2016) and the inverted residual block in MobileNetV2 (Sandler et al., 2018). We use the concept of stage and block for describing extrapolation mechanisms in Section 3.3. 3.2 WIDTH OPTIMIZATION METHODS Theoretically, we only care about algorithms A that “solve” the width optimization problem. However, the width optimization problem is inherently combinatorially hard. As a result, we use stateof-the-art width optimization algorithms as probes to understand them further. Specifically, we consider MorphNet (Gordon et al., 2018), AutoSlim (Yu & Huang, 2019), and DMCP (Guo et al., 2020). Here, we explicitly consider papers that have analyzed width optimization, i.e., improving the accuracy while maintaining the test time FLOP requirements. We also limited our investigation to methods with publicly available code to ensure correctness of implementation. 3.3 PROJECTION AND EXTRAPOLATION We consider various projection to project a large-scale training setting C = (D,N ) into a smallscale proxy training setting Ĉ = (D̂, N̂ ) and this allows us to understand the structure about A for all three considered algorithms. More specifically, we consider projecting the network N down to narrower networks via the width multiplier method and shallower networks via the depth multiplier method (Tan & Le, 2019). For projection on the dataset D, we consider subsampling the number of training images and changing the input resolutions. The extrapolation function involves two stages: matching dimensions and matching test-time FLOPs between w∗ and ŵ∗, i.e., E(ŵ) def= αMdim(ŵ) where Mdim(ŵ) is a function that matches dimensions and α is responsible for matching test-time FLOPs. If the depth multiplier method is involved during projection, ŵ∗ and w∗ will have different dimensions. As a result, we propose the following two Mdim to extrapolate the found width multipliers to higher dimensions. 
• Stack-last-block: Stack the width multipliers of the last block of each stage until the desired depth is met. • Stack-average-block: To avoid mismatches among residual connection, we exclude the first block of each stage and compute the average of the width multipliers across all the rest blocks in a stage, then stack the average width multipliers until the desired depth is met. Note that since existing network designs share the same channel widths for all the blocks in each stage, the above two Mdim will have the same results when applied to networks with un-optimized widths. After the dimension is matched, we apply a multiplier α to it such that the resulting network has a test-time FLOPs similar to that induced by w∗. The schematic views of extrapolation is shown in the right panel of Figure 2. 3.4 EXPERIMENTAL SETUP We used the ImageNet dataset (Deng et al., 2009) throughout the experiments. Unless stated otherwise, we use 224 input resolution. For CNNs, we considered both ResNet18 (He et al., 2016) and MobileNetV2 (Sandler et al., 2018). Models were each trained on a single machine with 8 V100 GPUs for all the experiments. The width multiplier method applies to all the layers in the CNNs while the depth-multiplier excludes the first and the last stage of MobileNetV2 as there is only one block for each of them. After we obtained w∗ orE(ŵ∗) we trained the corresponding network from scratch using the same hyperparameters to analyze their performance. The training hyperparameters are detailed in Appendix A. We repeated each experiment three times with different random seeds and reported the mean and standard deviation. 4 EXPERIMENTS In this section, we empirically investigate the transferability of the optimized widths across different projection and extrapolation strategies. Specifically, we study projection across architectures by evaluating different widths and depths as well as across dataset properties by sub-sampling and resolution sub-sampling for dataset projection. In addition to analyzing each of these four settings independently, we also investigate a compound projection that involves all four jointly. To measure the transferability, we plot the ImageNet top-1 accuracy induced by w∗ and E(ŵ∗) across training configurations that have different width optimization overhead. Width optimization overhead refers to the FLOPs needed to carry out width optimization. If transferable, we should observe a horizontal line across different width optimization overheads, suggesting that performance is not compromised by deriving w∗ from a smaller FLOP configuation. Moreover, we also plot the ImageNet top-1 accuracy for the un-optimized baseline to characterize whether width optimization or width transfer is even useful for some configurations. 4.1 PROJECTION: WIDTH Here, we focus on answering the following question: “Do networks with different initial width (varied by the width multiplier method) share common structures in their optimized widths?” The answer to this question is unclear from existing literature as the current practice is to re-run the optimization across different networks (Guo et al., 2020; Gordon et al., 2018; Liu et al., 2019). If the optimized widths are similar across different initial widths, it suggests that the quality of the vector of channel counts are scale-invariant given the current practice of training deep CNNs and the dataset. Additionally, it also has practical benefits where one can use width transfer to reduce the overhead incurred in width optimization. 
On the other hand, if the optimized widths are dissimilar, it suggests that not only the direction of the vector of channel counts is important, but also its magnitude. That is, for different magnitudes, we need different orientations. Practically, it suggests that existing practice, though costly, is empirically proved to be necessary. To empirically study the aforementioned question, we considered the source width multipliers of {0.312, 0.5, 0.707, 1, 1.414, 1.732} for N̂ and the target width multiplier of 1.732 for N . The set is chosen based on square roots of width optimization overhead. We analyzed the similarity between E(ŵ∗) and w∗ in the accuracy space. In Figure 3a and 3b, we plot the ImageNet top-1 accuracy for the baseline (1.732× wide network) and networks induced by E(ŵ∗) and w∗. For ResNet18, the width optimization overhead can be saved by up to 96% for all three algorithms without compromising the accuracy improvements gained by the width optimization. On MobileNetV2, AutoSlim and MorphNet can transfer well and save up to 80% width optimization overhead. While DMCP for MobileNetV2 results in 0.4% top-1 accuracy loss when using width transfer, the transferred width can still outperform the uniform baseline, which is encouraging for applications that allow such accuracy degradation in exchange for 83% width optimization overhead savings. More specifically, that would reduce compute time from 160 GPU-hours all the way to 30 GPU-hours for MobileNetV2 measured using a batch size of 1024, a major saving. Since E(ŵ∗) in this case is just applying αŵ∗ to N where α makes the resulting network have the same FLOPs as the network induced by w∗, our results suggest that a good orientation for the optimized channel vector continues to be suitable across a wide range of magnitudes. Since the optimized widths are highly transferable, we are interested in the resulting widths for both CNNs. We find that the later layers tend to increase a lot compared to the un-optimized ones. Concretely, in un-optimized networks, ResNet18 has 512 channels in the last layer and MobileNetV2 has 1280 channels in the last layer. In contrast, the average optimized width has 1300 channels for ResNet18 and 3785 channels for MobileNetV2. We visualize the average widths for ResNet18 and MobileNetV2 (average across optimized widths) in Appendix (Figure A1). 4.2 PROJECTION: DEPTH Next, we asked whether networks with different initial depths share common structure in their optimized widths. Because making a network deeper will add new layers with no corresponding optimized width, we will need a mechanism to map the vector optimized widths, w∗, to a vector with far more elements. We empirically investigate two methods for aligning across depth, which are detailed in Section 3.3. We considered {1, 2, 3, 4} as the depth multipliers for N̂ and use 4 for N . Similar to the analysis done in Section 4.1, we analyzed the similarity in the accuracy space. Here, we first compared the two extrapolation methods proposed in Section 3.3 using DMCP for ResNet18 and MobileNetV2. As shown in Figure 4, both strategies perform similarly. We focus on the stack-average-block strategy for the following experiments. As shown in Figure 3c and 3d, we find that the optimized widths stay competitive via simple extrapolation methods and up to 75% width optimization overhead can be saved if we were to optimize the width using width transfer for all three algorithms and two networks. 
This finding also suggests that the relative values of optimized widths share common structure across networks that differ in depth. In other words, the pattern of width multipliers across depth is scale-invariant. Interestingly, we observe that width optimization itself, even when optimized directly for that configuration, has limited benefits (for all three algorithms) for much deeper ResNet18 models, which suggests that width optimization may be less useful when the network to be optimized is heavily over-parameterized. 4.3 PROJECTION: RESOLUTION The input resolution and the channel counts of a CNN are known to be related when it comes to the test accuracy of a CNN. As an example, it is known empirically that a wider CNN can benefit from inputs with a higher resolution than a narrower net can (Tan & Le, 2019). As a result, it is not clear if width optimization algorithms are sensitive to input resolution. We therefore asked whether networks trained on different input resolutions also share structure in their optimized widths. If the optimized widths are indeed similar, this suggests that although wider networks benefit more from a higher resolution inputs, the non-uniform widths that result in better performance are similar. On the other hand, if the optimized widths are different, it suggests that, when it comes to the test accuracy, the relationship between channel counts and input resolution is more involved than the level of over-parameterization. To empirically study the aforementioned question, we considered the input resolution for D̂ to be {64, 160, 224, 320} and choose a D of 320. As shown in Figure 5a and 5b, we find that except for MorphNet targeting ResNet18, all other algorithm and network combinations can achieve up to 96% width optimization overhead savings with the optimized widths that are still better than the uniform baseline. By saving 75% width optimization overhead, we can stay close to the performance obtained via direct optimization. Interestingly, we find that MorphNet had a very different optimized widths when transferred from resolution 64 for ResNet18, which leads to the worse performance for ResNet18 compared to direct optimization. The similarity among the optimized widths are detailed in Figure A2 in Appendix. 4.4 PROJECTION: DATASET SIZE The dataset size is often critical for understanding the generalization performance of a learning algorithm. Here, we would like to understand how width optimization algorithms are affected by the size of training data. We considered sub-sampling the ImageNet dataset to result in {5%, 10%, 20%, 50%, 100%} of the original training data. Similar to previous analysis, we tried to transfer the optimized widths obtained using the smaller configurations to the largest configuration, i.e., 100% of the original training data. As shown in Figure 5c and 5d, widths optimized for smaller dataset sizes transfer well to large dataset sizes. That is, 95% width optimization overhead can be saved and still outperform the uniform baseline for both networks. On the other hand, 90% width optimization overhead can be saved and still match the performance of direct optimization for DMCP. This suggests that the amount of training data barely affects width optimization, especially for DMCP, which is surprising. We further conducted experiments using CIFAR-100 and have two findings. First, the optimized widths are similar across these two datasets for DMCP. 
Second, width optimization results in overfitting for CIFAR-100 and calls for cross validation techniques in width optimization. The supporting materials for these two findings are in Appendix C. 4.5 COMPOUND PROJECTION From previous analyses, we find that the optimized widths are largely transferable across various projection methods independently. Here, we further empirically analyzed if the optimized width can be transferable across compound projection. To do so, we considered linearly interpolating all four projection methods and analyzed if the width optimized using cost-efficient settings can transfer to the most costly setting. Specifically, let a tuple (width, depth, resolution, dataset size) denote a training configuration. We considered Ĉ to be {(0.312,1,64,5%), (0.707,1,160,10%), (1,1,224,50%), (1.414,2,320,100%)} and C to be (1.414,2,320,100%). As shown in Figure 6, the optimized width is transferable across compound projection. Specifically, we can achieve up to 320× width optimization overhead reduction with width transfer for the best performing algorithm, DMCP. Additionally, it also suggests that the four projection dimensions are not tightly coupled for width optimization. 5 DISCUSSION In this paper, we take a first step in understanding the transferability of the optimized widths across different width optimization algorithms and projection dimensions. This investigation sheds light on the width optimization problem, which is often regarded as a black-box problem and tackled with general optimization methods (Liu et al., 2019; Guo et al., 2020; Gordon et al., 2018). More specifically, we show that there are common structures in the optimized widths obtained across a wide range of settings such that one can successfully transfer the optimized width to various settings with competitive performance. Per our analysis, we can achieve up to 320× reduction in width optimization overhead without compromising the top-1 accuracy on ImageNet. Our findings not only suggest an efficient alternative to conduct width optimization, but also imply that width optimization can be done for lower dimensional inputs, which can be beneficial since it allows a more effective traversal of the design space. While encouraging, our study also presents some limitations. Specifically, we empirically consider two types of CNNs: ResNet18 and MobileNetV2. While these networks are popular currently in our community, it is not clear if such encouraging characteristics hold for other CNNs. Additionally, extending this work beyond convolutional neural networks is an interesting direction going forward. A TRAINING HYPERPARAMETERS We use PyTorch (Paszke et al., 2019) as our deep learning framework. We largely follow Radosavovic et al. (2020) for training hyperparameters. Specifically, learning rate grows linearly from 0 to 0.2s within the first 5 epochs from 0 where s depend on batch size B, i.e., s = B256 . We use batch size of 1024 and distributed training over 8 GPUs and we have not used synchronized batch normalization layers. We set the training epochs to be 100. For optimizers, we use stochastic gradient descent (SGD) with 0.9 Nesterov momentum. As for data augmentation, we have adopted 0.1 label smoothing, random resize crop, random horizontal flops, and RandAugment (Cubuk et al., 2020) with parameter N = 2 and M = 9 following common practice in popular repository1. Note that these training hyperparameters are fixed for all experiments. 
In other words, we always train for 100 epochs regardless of the dataset size when we conduct the dataset projection in Section 4.4. For hyperparameters specific to width optimization algorithms, we largely follow the hyperparameters used in the respective methods. Specifically, we use 40 epochs to search for optimized widths for all three algorithms. We enlarge the network by 1.5× for DMCP and AutoSlim. Since MorphNet has FLOPs-aware regularization, we normalize the FLOPs for each network and use λ = 1 for all experiments. B WIDTH VISUALIZATION Figure A1 (panels: (a) ResNet18, (b) MobileNetV2): The average optimized width for ResNet18 and MobileNetV2, averaged across the optimized widths in Section 4.1. We plot the mean as a solid line with the shaded area representing the standard deviation. C CIFAR-100 EXPERIMENTS In this section, we are interested in understanding how dependent width optimization is on the training data. To do so, we use CIFAR-100 as the dataset and sweep the network configuration to cover both depth and width multipliers. Specifically, we consider widths {0.312, 0.5, 0.707, 1, 1.414, 1.732} and depths {1, 2, 3, 4} for ResNet18 using DMCP. To allow using the same architectures as in the ImageNet experiments, we alter the input resolution for CIFAR-100 from 32×32 to 64×64. Finally, we compare the pairwise cosine similarity between the architectures searched across the two different datasets, i.e., ImageNet and CIFAR-100. As shown in Table 1, we find that the channel counts searched on these two datasets are more similar to each other than the optimized widths are to the uniform baselines. At first glance, one might conclude that the optimal channel configurations are similar across these two datasets. However, we find that the optimized widths perform worse than the uniform (un-optimized) widths on CIFAR-100 due to over-fitting. This suggests that a cross-validation method should be used for width optimization algorithms. 1https://github.com/rwightman/pytorch-image-models (We use their implementation for RandAugment and use ‘rand-m9-mstd0.5’ as the value for the ‘aa’ flag.) D SIMILARITY AMONG WIDTH MULTIPLIERS In Section 4, we analyzed the similarity between w∗ and ŵ∗ in the accuracy space. Here, we show that w∗ and E(ŵ∗) are in fact similar in the vector space using cosine similarity. Figure A2 (panels (a)–(h): ResNet18 and MobileNetV2 under width, depth, resolution, and dataset-size projection): Pairwise cosine similarity between w∗ and E(ŵ∗) for different width optimization algorithms and projection strategies. Within each method (diagonal blocks), w∗ and E(ŵ∗) are generally similar.
1. What is the main contribution of the paper, and how does it aim to reduce computational complexity? 2. What are the strengths of the proposed approach, particularly regarding its ability to work with different network architectures? 3. What are the weaknesses of the paper, especially regarding the clarity of its materials and figures? 4. How does the reviewer assess the effectiveness of the proposed method in terms of its ability to project networks and datasets to smaller sizes? 5. What are the limitations of the paper regarding its claims on the applicability of width optimization methods? 6. How does the reviewer suggest improving the paper, particularly regarding the need for more experiments and clearer guidance on using the proposed method?
Review
This paper proposes a method that projects networks and datasets to smaller versions of themselves in order to reduce the computational complexity of width optimization methods.

Pros)
- The idea of reducing the pruning space to a smaller proxy space to cut computational cost makes sense, but it is not that impressive.
- Experiments are repeated multiple times, and reporting means and standard deviations makes the results more convincing.

Cons)
- Though the idea looks sound, the materials that support it are not clearly stated. Moreover, all sections and sub-parts of the paper need to be refined, and the figures are not clear, so one may not readily grasp the material.
- The claim that the optimized widths are invariant to various aspects of network architectures does not seem to have sufficient evidence. The authors only tried two architectures, ResNet18 and MobileNetV2. In Section 4.2 about depth projection, ResNet18 in Figure 4(a) does not support the claim. Why does ResNet18's accuracy fluctuate so much compared to that of MobileNetV2?

Comments)
- This paper uses a popular GitHub training codebase with training settings that include complicated data augmentation methods such as RandAugment, which lead to much better results. However, the performance of pruning techniques under such additional augmentation has not been studied well. Since the goal of this paper is probably not to maximize accuracy, I recommend the authors use basic settings so that the results can be read more easily and fairly.
- Width transfer may seem to work well, but guidance on how to use this method is not clearly stated. Is there a suggested rule of thumb?
- More experiments are needed to support the claims.
1. What are the main contributions and findings of the paper regarding width pruning?
2. What are the strengths and weaknesses of the experimental design and results presented in the paper?
3. Are there any concerns or suggestions regarding the writing style and clarity of the paper?
4. Are there any gaps or inconsistencies in the reasoning or assumptions made throughout the paper?
5. How does the reviewer assess the overall quality and impact of the paper in the field of neural network optimization?
Review
This paper conducts many experiments on reducing width optimization overhead. The authors study width, depth, resolution, and dataset size to investigate width pruning while preserving top-1 accuracy, and they also run experiments on multiple datasets. However, my concerns are as follows.

===== The writing of this paper should be improved. =====
For example: in the first line of Section 3.1 (Notation), "We use [n] to represent a set {1, 2, ..., n}". After that, I do not see anything related to [n]; why define it? In my view, is it meant to index the selection of channels in a layer? There is a missing "to" in "one of the key reasons why width optimization is so costly is due 'to' its limited understanding by the community." The first sentences of the abstract and the introduction say almost the same thing; can the opening also be divided into several shorter sentences?

===== Some assumptions lack justification. =====
In lines 4–6 of Section 3.1, the paper states that better test accuracy can be obtained simply by applying width multipliers; this needs justification. In the third line of Section 3.2, I do not really follow the intention of the statement that the algorithms are used to "understand them further".
ICLR
Title Width transfer: on the (in)variance of width optimization Abstract Optimizing the channel counts for different layers of a convolutional neural network (CNN) to achieve better accuracy without increasing the number of floatingpoint operations (FLOPs) required during the forward pass at test time is known as CNN width optimization. Prior work on width optimization has cast it as a hyperparameter optimization problem, which introduces large computational overhead (e.g., an additional 2× FLOPs of standard training). Minimizing this overhead could therefore significantly speed up training. With that in mind, this paper sets out to empirically understand width optimization by sensitivity analysis. Specifically, we consider the following research question: “Do similar training configurations for a width optimization algorithm also share similar optimized widths?” If this in fact is the case, it suggests that one can find a proxy training configuration requiring fewer FLOPs to reduce the width optimization overhead. Scientifically, it also suggests that similar training configurations share common architectural structure, which may be harnessed to build better methods. To this end, we control the training configurations, i.e., network architectures and training data, for three existing width optimization algorithms and find that the optimized widths are largely transferable across settings. Per our analysis, we can achieve up to 320× reduction in width optimization overhead without compromising the top-1 accuracy on ImageNet. Our findings not only suggest an efficient way to conduct width optimization, but also highlight that the widths that lead to better accuracy are invariant to various aspects of network architectures and training data. 1 INTRODUCTION Better designs for the number of channels for each layer of a convolutional neural network (CNN) can lead to improved test performance for image classification without requiring additional floatingpoint operations (FLOPs) during the forward pass at test time (Guo et al., 2020; Gordon et al., 2018; Yu & Huang, 2019). However, designing the width for efficient CNNs is a non-trivial task that often requires intuition and domain expertise together with trial-and-error to do well. To alleviate the labor-intensive trial-and-error procedure, designing the width for each layer based on computational methods has received growing interests. Examples include using reinforcement learning (He et al., 2018b), evolutionary algorithms (Liu et al., 2019; Chin et al., 2020b), and differentiable parameterization (Guo et al., 2020; Dong & Yang, 2019; Ning et al., 2020) to optimize for layer widths. However, these methods often add a large computational overhead for the width optimization procedure. Concretely, even for efficient methods that use differentiable parameterization (Guo et al., 2020), width optimization takes an additional 2× the training time. To contextualize this overhead, using distributed training on 8 V100 GPUs, it takes approximately 100 GPU hours for training a ResNet50 on the ImageNet dataset (Radosavovic et al., 2020). That is, it takes 300 GPU hours for both width optimization using differentiable methods (Guo et al., 2020) and training the optimized ResNet50. Additionally, width optimization algorithms are often parameterized by some target testtime resource constraints, e.g., FLOPs. 
As a result, the computational overhead scales linearly with the number of target constraint levels considered, which can be exceedingly time-consuming for optimizing CNNs for embodied AI applications (Chin et al., 2020b). Reducing the overhead for width optimization, therefore, would have material practical benefits. Fundamentally, one of the key reasons why width optimization is so costly is due its limited understanding by the community. Without assuming or understanding the structure of the problem, the best hope is to conduct black-box optimization whenever training configurations, datasets, or architectures are changed. In this work, we take the first step to empirically understand the structure underlying the width optimization problem by changing network architectures and the properties of training datasets, and observing how they affect width optimization. Such sensitivity analysis techniques have been used in various contexts in the deep learning literature for empirically unveiling the black-box of deep learning (Morcos et al., 2018; Tan & Le, 2019; Li et al., 2018). Similarly, we manipulate the network architectures and dataset properties to aid in our understanding of the (in)variances of width optimization. A width optimization algorithm, A, takes in a training configuration, C = (D,N ), and outputs a set of optimized widths, w∗, which maxmizes the validation accuracy without additional test-time FLOPs. A can be seen as neural architecture search algorithms that search for layer-wise channel counts. C consists of initial network, N , and training dataset, D. In this paper, we systematically analyze how similar C affects w∗. If similar inputs to the width optimization algorithms result in similar outputs, one can exploit this commonality to reduce the width optimization overhead, especially if the two input configurations have markedly different FLOPs requirements as shown schematically in Figure 1. As a concrete example, if optimizing the widths of a wide CNN (high FLOPS) and a narrow CNN (low FLOPs) results in widths that differ only by a multiplier, one can reduce the computational overhead of width optimization by computing widths for the low FLOPS, narrow CNN and adjusting them to accommodate the high FLOPs, wide CNN. In addition to the potential efficiency benefits from understanding the structure of the width optimization problem, such an exploration can also have scientific benefits. Specifically, via sensitivity analysis, we can provide quantitative results to the following question: Does the level of overparameterization, number of training samples, and the resolution of input images affect width optimization? And if so, by how much? Based on a comprehensive empirical analysis, we provide the following contributions: • We find that there exist shared structures in the width optimization problem across a wide variety of network and dataset configurations. Specifically, the outputs of width optimization algorithms are largely robust to perturbation of the network’s depth and width via the multiplier method and to perturbation of the dataset’s sample size and resolution. • We demonstrate a practical implication of the previous finding by showing that one can achieve 320× reduction in width optimization overhead for a scaled-up MobileNetV2 and ResNet18 on ImageNet without hurting the accuracy improvements brought by width optimization. • We find that, for ResNet18 on ImageNet, width optimization has limited benefits for very deep models. 
2 RELATED WORK 2.1 WIDTH OPTIMIZATION The layer-by-layer widths of a deep CNN are often regarded as a hyperparameter optimized to improve classification accuracy. Specifically, the width multiplier method (Howard et al., 2017) was introduced in MobileNet to arrive at models with different FLOPs and accuracy profiles and has been widely adopted in many papers (He et al., 2018a; Tan & Le, 2019; Chin et al., 2020a). Besides simply scaling the width to arrive at CNNs with different FLOPs, width (or channel) optimization has received growing interest recently as a means to improve the efficiency of deployed deep CNNs. To optimize the width of a CNN, one approach is Prune-then-Grow, which uses channel pruning methods to arrive at a down-sized CNN with non-trivial layerwise channel counts followed by re-growing the CNN to its original FLOPs using the width multiplier method (Gordon et al., 2018). Another approach is Grow-then-Prune, which uses the width multiplier method to enlarge the CNN followed by channel pruning methods to trim down channels to match its pre-grown FLOPs (Yu & Huang, 2019; Liu et al., 2019; Guo et al., 2020). The aim of both of these methods is to improve performance while maintaining a given FLOPs count. The schematic view of the two approaches is visualized in the left panel of Figure 2. While there are many papers on channel pruning (Li et al., 2016; Molchanov et al., 2019), they mostly focus their analysis on down-sizing the pre-trained models whereas we focus on improving the classification accuracy of a network by optimizing its width without affecting test-time FLOPs. While one can use either the Prune-then-Grow or Grow-then-Prune strategies to arrive at a CNN of equivalent FLOPs, it is not clear if such strategies generally improve performance over the unoptimized baseline as it is not verified in most channel pruning papers. As a result, in this paper, we focus on analyzing algorithms that have demonstrated the effectiveness over the baseline (uniform) width configurations in either Prune-then-Grow or Grow-then-Prune settings. 2.2 EMPIRICAL SENSITIVITY ANALYSIS FOR UNDERSTANDING DEEP LEARNING Controlling the components of deep learning to further shed light on understanding and optimization is an important direction complementing theoretical understanding for deep learning. While our work is the first that focuses on empirically understanding the sensitivity of width optimization in deep CNNs, we discuss efforts in empirically understanding deep learning more generally. Empirical analysis to gain insight into hyperparameter selection for training deep neural networks has received great interest. Goyal et al. (2017) have shown that the accuracy can be retained across a wide range of batch sizes when the number of epochs is held fixed while the batch size is scaled linearly. Shallue et al. (2019) have conducted a comprehensive analysis that sheds light on the relationship among batch size, training iterations, and learning rate. Tan & Le (2019) have empirically controlled the network’s architecture to identify a more cost-efficient way of scaling deep models with high classification accuracy. Radosavovic et al. (2020) have empirically manipulated the architectural choices for CNNs and identified a parameterizable relationship among depth, channel counts, and group size for ResNets. There are also papers using analysis as a tool for better understanding deep learning phenomena. Morcos et al. 
(2018) have used Canonical Correlation Analysis to empirically understand if networks of different architectures and optimization behavior are of different clusters. Li et al. (2018) have controlled the network’s architecture and observed that networks of different architectures have different empirical loss landscapes. Frankle et al. (2019) altered the winning ticket generation procedures in the lottery ticket hypothesis (Frankle & Carbin, 2018) and observed that an empirical stability measure predicts well the success of winning tickets. Morcos et al. (2019) have empirically controlled the training configuration for the winning ticket generation in the lottery ticket hypothesis and discovered the transferability of winning tickets. 3 APPROACH 3.1 NOTATION We use [n] to represent a set {1, 2, ..., n}. A width optimization algorithm, A, is a function that takes a training configuration, C = (D,N ), which consists of a training dataset, D, and a network to be optimized, N . The output of A is a vector of width multipliers, w∗, whose dimension is L (the number of layers). Let Fi denote the channel counts for layer i, it is expected that a network with optimized channel counts {wiFi ∀ i ∈ [L]} has the same FLOPs as a network with the original channel counts {Fi ∀i ∈ [L]} but has better test accuracy when trained with D. Following common terminologies, a CNN is divided into stages where the convolutional blocks in each stage share the same output resolutions. Within each stage, several convolutional blocks are repeated where a convolutional block consists of several convolutional layers such as the bottleneck block in ResNet (He et al., 2016) and the inverted residual block in MobileNetV2 (Sandler et al., 2018). We use the concept of stage and block for describing extrapolation mechanisms in Section 3.3. 3.2 WIDTH OPTIMIZATION METHODS Theoretically, we only care about algorithms A that “solve” the width optimization problem. However, the width optimization problem is inherently combinatorially hard. As a result, we use stateof-the-art width optimization algorithms as probes to understand them further. Specifically, we consider MorphNet (Gordon et al., 2018), AutoSlim (Yu & Huang, 2019), and DMCP (Guo et al., 2020). Here, we explicitly consider papers that have analyzed width optimization, i.e., improving the accuracy while maintaining the test time FLOP requirements. We also limited our investigation to methods with publicly available code to ensure correctness of implementation. 3.3 PROJECTION AND EXTRAPOLATION We consider various projection to project a large-scale training setting C = (D,N ) into a smallscale proxy training setting Ĉ = (D̂, N̂ ) and this allows us to understand the structure about A for all three considered algorithms. More specifically, we consider projecting the network N down to narrower networks via the width multiplier method and shallower networks via the depth multiplier method (Tan & Le, 2019). For projection on the dataset D, we consider subsampling the number of training images and changing the input resolutions. The extrapolation function involves two stages: matching dimensions and matching test-time FLOPs between w∗ and ŵ∗, i.e., E(ŵ) def= αMdim(ŵ) where Mdim(ŵ) is a function that matches dimensions and α is responsible for matching test-time FLOPs. If the depth multiplier method is involved during projection, ŵ∗ and w∗ will have different dimensions. As a result, we propose the following two Mdim to extrapolate the found width multipliers to higher dimensions. 
• Stack-last-block: Stack the width multipliers of the last block of each stage until the desired depth is met. • Stack-average-block: To avoid mismatches among residual connection, we exclude the first block of each stage and compute the average of the width multipliers across all the rest blocks in a stage, then stack the average width multipliers until the desired depth is met. Note that since existing network designs share the same channel widths for all the blocks in each stage, the above two Mdim will have the same results when applied to networks with un-optimized widths. After the dimension is matched, we apply a multiplier α to it such that the resulting network has a test-time FLOPs similar to that induced by w∗. The schematic views of extrapolation is shown in the right panel of Figure 2. 3.4 EXPERIMENTAL SETUP We used the ImageNet dataset (Deng et al., 2009) throughout the experiments. Unless stated otherwise, we use 224 input resolution. For CNNs, we considered both ResNet18 (He et al., 2016) and MobileNetV2 (Sandler et al., 2018). Models were each trained on a single machine with 8 V100 GPUs for all the experiments. The width multiplier method applies to all the layers in the CNNs while the depth-multiplier excludes the first and the last stage of MobileNetV2 as there is only one block for each of them. After we obtained w∗ orE(ŵ∗) we trained the corresponding network from scratch using the same hyperparameters to analyze their performance. The training hyperparameters are detailed in Appendix A. We repeated each experiment three times with different random seeds and reported the mean and standard deviation. 4 EXPERIMENTS In this section, we empirically investigate the transferability of the optimized widths across different projection and extrapolation strategies. Specifically, we study projection across architectures by evaluating different widths and depths as well as across dataset properties by sub-sampling and resolution sub-sampling for dataset projection. In addition to analyzing each of these four settings independently, we also investigate a compound projection that involves all four jointly. To measure the transferability, we plot the ImageNet top-1 accuracy induced by w∗ and E(ŵ∗) across training configurations that have different width optimization overhead. Width optimization overhead refers to the FLOPs needed to carry out width optimization. If transferable, we should observe a horizontal line across different width optimization overheads, suggesting that performance is not compromised by deriving w∗ from a smaller FLOP configuation. Moreover, we also plot the ImageNet top-1 accuracy for the un-optimized baseline to characterize whether width optimization or width transfer is even useful for some configurations. 4.1 PROJECTION: WIDTH Here, we focus on answering the following question: “Do networks with different initial width (varied by the width multiplier method) share common structures in their optimized widths?” The answer to this question is unclear from existing literature as the current practice is to re-run the optimization across different networks (Guo et al., 2020; Gordon et al., 2018; Liu et al., 2019). If the optimized widths are similar across different initial widths, it suggests that the quality of the vector of channel counts are scale-invariant given the current practice of training deep CNNs and the dataset. Additionally, it also has practical benefits where one can use width transfer to reduce the overhead incurred in width optimization. 
On the other hand, if the optimized widths are dissimilar, it suggests that not only the direction of the vector of channel counts is important, but also its magnitude. That is, for different magnitudes, we need different orientations. Practically, it suggests that existing practice, though costly, is empirically proved to be necessary. To empirically study the aforementioned question, we considered the source width multipliers of {0.312, 0.5, 0.707, 1, 1.414, 1.732} for N̂ and the target width multiplier of 1.732 for N . The set is chosen based on square roots of width optimization overhead. We analyzed the similarity between E(ŵ∗) and w∗ in the accuracy space. In Figure 3a and 3b, we plot the ImageNet top-1 accuracy for the baseline (1.732× wide network) and networks induced by E(ŵ∗) and w∗. For ResNet18, the width optimization overhead can be saved by up to 96% for all three algorithms without compromising the accuracy improvements gained by the width optimization. On MobileNetV2, AutoSlim and MorphNet can transfer well and save up to 80% width optimization overhead. While DMCP for MobileNetV2 results in 0.4% top-1 accuracy loss when using width transfer, the transferred width can still outperform the uniform baseline, which is encouraging for applications that allow such accuracy degradation in exchange for 83% width optimization overhead savings. More specifically, that would reduce compute time from 160 GPU-hours all the way to 30 GPU-hours for MobileNetV2 measured using a batch size of 1024, a major saving. Since E(ŵ∗) in this case is just applying αŵ∗ to N where α makes the resulting network have the same FLOPs as the network induced by w∗, our results suggest that a good orientation for the optimized channel vector continues to be suitable across a wide range of magnitudes. Since the optimized widths are highly transferable, we are interested in the resulting widths for both CNNs. We find that the later layers tend to increase a lot compared to the un-optimized ones. Concretely, in un-optimized networks, ResNet18 has 512 channels in the last layer and MobileNetV2 has 1280 channels in the last layer. In contrast, the average optimized width has 1300 channels for ResNet18 and 3785 channels for MobileNetV2. We visualize the average widths for ResNet18 and MobileNetV2 (average across optimized widths) in Appendix (Figure A1). 4.2 PROJECTION: DEPTH Next, we asked whether networks with different initial depths share common structure in their optimized widths. Because making a network deeper will add new layers with no corresponding optimized width, we will need a mechanism to map the vector optimized widths, w∗, to a vector with far more elements. We empirically investigate two methods for aligning across depth, which are detailed in Section 3.3. We considered {1, 2, 3, 4} as the depth multipliers for N̂ and use 4 for N . Similar to the analysis done in Section 4.1, we analyzed the similarity in the accuracy space. Here, we first compared the two extrapolation methods proposed in Section 3.3 using DMCP for ResNet18 and MobileNetV2. As shown in Figure 4, both strategies perform similarly. We focus on the stack-average-block strategy for the following experiments. As shown in Figure 3c and 3d, we find that the optimized widths stay competitive via simple extrapolation methods and up to 75% width optimization overhead can be saved if we were to optimize the width using width transfer for all three algorithms and two networks. 
This finding also suggests that the relative values of optimized widths share common structure across networks that differ in depth. In other words, the pattern of width multipliers across depth is scale-invariant. Interestingly, we observe that width optimization itself, even when optimized directly for that configuration, has limited benefits (for all three algorithms) for much deeper ResNet18 models, which suggests that width optimization may be less useful when the network to be optimized is heavily over-parameterized. 4.3 PROJECTION: RESOLUTION The input resolution and the channel counts of a CNN are known to be related when it comes to the test accuracy of a CNN. As an example, it is known empirically that a wider CNN can benefit from inputs with a higher resolution than a narrower net can (Tan & Le, 2019). As a result, it is not clear if width optimization algorithms are sensitive to input resolution. We therefore asked whether networks trained on different input resolutions also share structure in their optimized widths. If the optimized widths are indeed similar, this suggests that although wider networks benefit more from a higher resolution inputs, the non-uniform widths that result in better performance are similar. On the other hand, if the optimized widths are different, it suggests that, when it comes to the test accuracy, the relationship between channel counts and input resolution is more involved than the level of over-parameterization. To empirically study the aforementioned question, we considered the input resolution for D̂ to be {64, 160, 224, 320} and choose a D of 320. As shown in Figure 5a and 5b, we find that except for MorphNet targeting ResNet18, all other algorithm and network combinations can achieve up to 96% width optimization overhead savings with the optimized widths that are still better than the uniform baseline. By saving 75% width optimization overhead, we can stay close to the performance obtained via direct optimization. Interestingly, we find that MorphNet had a very different optimized widths when transferred from resolution 64 for ResNet18, which leads to the worse performance for ResNet18 compared to direct optimization. The similarity among the optimized widths are detailed in Figure A2 in Appendix. 4.4 PROJECTION: DATASET SIZE The dataset size is often critical for understanding the generalization performance of a learning algorithm. Here, we would like to understand how width optimization algorithms are affected by the size of training data. We considered sub-sampling the ImageNet dataset to result in {5%, 10%, 20%, 50%, 100%} of the original training data. Similar to previous analysis, we tried to transfer the optimized widths obtained using the smaller configurations to the largest configuration, i.e., 100% of the original training data. As shown in Figure 5c and 5d, widths optimized for smaller dataset sizes transfer well to large dataset sizes. That is, 95% width optimization overhead can be saved and still outperform the uniform baseline for both networks. On the other hand, 90% width optimization overhead can be saved and still match the performance of direct optimization for DMCP. This suggests that the amount of training data barely affects width optimization, especially for DMCP, which is surprising. We further conducted experiments using CIFAR-100 and have two findings. First, the optimized widths are similar across these two datasets for DMCP. 
Second, width optimization results in overfitting for CIFAR-100 and calls for cross validation techniques in width optimization. The supporting materials for these two findings are in Appendix C. 4.5 COMPOUND PROJECTION From previous analyses, we find that the optimized widths are largely transferable across various projection methods independently. Here, we further empirically analyzed if the optimized width can be transferable across compound projection. To do so, we considered linearly interpolating all four projection methods and analyzed if the width optimized using cost-efficient settings can transfer to the most costly setting. Specifically, let a tuple (width, depth, resolution, dataset size) denote a training configuration. We considered Ĉ to be {(0.312,1,64,5%), (0.707,1,160,10%), (1,1,224,50%), (1.414,2,320,100%)} and C to be (1.414,2,320,100%). As shown in Figure 6, the optimized width is transferable across compound projection. Specifically, we can achieve up to 320× width optimization overhead reduction with width transfer for the best performing algorithm, DMCP. Additionally, it also suggests that the four projection dimensions are not tightly coupled for width optimization. 5 DISCUSSION In this paper, we take a first step in understanding the transferability of the optimized widths across different width optimization algorithms and projection dimensions. This investigation sheds light on the width optimization problem, which is often regarded as a black-box problem and tackled with general optimization methods (Liu et al., 2019; Guo et al., 2020; Gordon et al., 2018). More specifically, we show that there are common structures in the optimized widths obtained across a wide range of settings such that one can successfully transfer the optimized width to various settings with competitive performance. Per our analysis, we can achieve up to 320× reduction in width optimization overhead without compromising the top-1 accuracy on ImageNet. Our findings not only suggest an efficient alternative to conduct width optimization, but also imply that width optimization can be done for lower dimensional inputs, which can be beneficial since it allows a more effective traversal of the design space. While encouraging, our study also presents some limitations. Specifically, we empirically consider two types of CNNs: ResNet18 and MobileNetV2. While these networks are popular currently in our community, it is not clear if such encouraging characteristics hold for other CNNs. Additionally, extending this work beyond convolutional neural networks is an interesting direction going forward. A TRAINING HYPERPARAMETERS We use PyTorch (Paszke et al., 2019) as our deep learning framework. We largely follow Radosavovic et al. (2020) for training hyperparameters. Specifically, learning rate grows linearly from 0 to 0.2s within the first 5 epochs from 0 where s depend on batch size B, i.e., s = B256 . We use batch size of 1024 and distributed training over 8 GPUs and we have not used synchronized batch normalization layers. We set the training epochs to be 100. For optimizers, we use stochastic gradient descent (SGD) with 0.9 Nesterov momentum. As for data augmentation, we have adopted 0.1 label smoothing, random resize crop, random horizontal flops, and RandAugment (Cubuk et al., 2020) with parameter N = 2 and M = 9 following common practice in popular repository1. Note that these training hyperparameters are fixed for all experiments. 
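As a concrete illustration of the warm-up schedule above, the sketch below computes the learning rate at a given epoch, with s = B/256 and a linear ramp from 0 to 0.2s over the first 5 epochs. The decay used after warm-up is not specified in this appendix, so the cosine decay here is an assumption for illustration only.

```python
import math

def lr_at(epoch, batch_size, total_epochs=100, warmup_epochs=5, base=0.2):
    """Linear warm-up from 0 to 0.2*s over the first 5 epochs, s = batch_size / 256.
    The post-warm-up decay below (cosine) is an assumption, not stated in the text."""
    peak = base * batch_size / 256.0
    if epoch < warmup_epochs:
        return peak * epoch / warmup_epochs
    t = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return 0.5 * peak * (1.0 + math.cos(math.pi * t))   # assumed decay for illustration

print([round(lr_at(e, 1024), 3) for e in (0, 1, 5, 50, 99)])
```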
In particular, we always train for 100 epochs regardless of the dataset size when we conduct the dataset projection in Section 4.4. For hyperparameters specific to width optimization algorithms, we largely follow the hyperparameters used in the respective methods. Specifically, we use 40 epochs to search for optimized widths for all three algorithms. We enlarge the network by 1.5× for DMCP and AutoSlim. Since MorphNet has a FLOPs-aware regularization, we normalize the FLOPs for each network and use λ = 1 for all experiments. B WIDTH VISUALIZATION Figure A1 (a: ResNet18, b: MobileNetV2): The average optimized width for ResNet18 and MobileNetV2, averaged across the optimized widths in Section 4.1. We plot the mean as a solid line, with the shaded area representing the standard deviation. C CIFAR-100 EXPERIMENTS In this section, we are interested in understanding how dependent width optimization is on the training data. To do so, we use CIFAR-100 as the dataset and sweep the network configuration to cover both depth and width multipliers. Specifically, we consider widths {0.312, 0.5, 0.707, 1, 1.414, 1.732} and depths {1, 2, 3, 4} for ResNet18 using DMCP. To allow using the same architectures as the experiments for ImageNet, we alter the input resolution for CIFAR-100 from 32×32 to 64×64. Finally, we compare the pairwise cosine similarity between the architectures searched across the two different datasets, i.e., ImageNet and CIFAR-100. As shown in Table 1, we find that the channel counts searched on these two datasets are more similar to each other than the optimized widths are to the uniform baselines. At first glance, one might conclude that the optimal channel configurations are similar across these two datasets. However, we find that the optimized widths perform worse than the uniform (un-optimized) widths on CIFAR-100 due to over-fitting. This suggests that a cross-validation method should be used for width optimization algorithms. Footnote 1: https://github.com/rwightman/pytorch-image-models (we use their implementation for RandAugment and pass ‘rand-m9-mstd0.5’ as the value for the ‘aa’ flag). D SIMILARITY AMONG WIDTH MULTIPLIERS In Section 4, we have analyzed the similarity between w∗ and ŵ∗ in the accuracy space. Here, we show that w∗ and ŵ∗ are in fact similar in the vector space using cosine similarity. [Figure A2: Pairwise cosine similarity between w∗ and E(ŵ∗) for different width optimization algorithms (Uniform, DMCP, AutoSlim, MorphNet) and projection strategies. Panels: (a) ResNet18, width projection; (b) MobileNetV2, width projection; (c) ResNet18, depth projection; (d) MobileNetV2, depth projection; (e) ResNet18, resolution projection; (f) MobileNetV2, resolution projection; (g) ResNet18, dataset size projection; (h) MobileNetV2, dataset size projection.] Within each method (diagonal blocks), w∗ and E(ŵ∗) are generally similar.
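The cosine-similarity comparison behind Figure A2 is straightforward to reproduce; the sketch below builds a pairwise similarity matrix over per-stage channel-count vectors. The width vectors in the usage example are made-up placeholders, not the searched configurations.

```python
import numpy as np

def cosine(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pairwise_cosine(width_vectors):
    """Pairwise cosine-similarity matrix over a dict of width vectors,
    e.g. {'Uniform': ..., 'DMCP': ..., 'AutoSlim': ..., 'MorphNet': ...}."""
    names = list(width_vectors)
    sim = np.array([[cosine(width_vectors[a], width_vectors[b]) for b in names]
                    for a in names])
    return names, sim

# illustrative (made-up) channel counts for a 4-stage network
widths = {
    "Uniform":  [64, 128, 256, 512],
    "DMCP":     [48, 112, 304, 1300],
    "AutoSlim": [56, 120, 288, 1200],
}
names, sim = pairwise_cosine(widths)
print(names)
print(np.round(sim, 3))
```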
1. What is the focus of the paper in terms of transferability? 2. What are the strengths and weaknesses of the experimental results presented in the paper? 3. Do you have any concerns regarding the conclusion drawn from the experimental results? 4. How does the reviewer assess the novelty and theoretical support of the paper? 5. Is the organization of the paper clear and appropriate?
Review
Review This paper conducts experiments to investigate the transferability of optimized widths obtained by width optimization algorithms across different predefined projection dimensions, including width, depth, resolution, and dataset size. Experimental results show that the optimized widths are similar across different configuration settings, suggesting that width optimization can first be conducted on networks with more convenient configurations (such as smaller width, depth, resolution, and dataset size) and then transferred to the target networks. However, I have some concerns as follows: 1. The authors simply conduct some experiments to analyze the effect of different configuration settings. No novel methods or ideas are proposed in this paper. 2. The experimental results do not clearly support the conclusions. In Figures 3, 4, 5, and 6, the fluctuation in accuracy is significant, yet the authors conclude that the optimized widths are similar and highly transferable. Moreover, at each projection setting, the experiment is not thorough enough, since there are only 4-5 points for each line, which cannot rigorously reflect the trend. 3. Words like “practically” and “empirically” are used so often that many parts of the paper appear to lack theoretical support. For instance, some settings, such as the source width multipliers, are selected purely by human experience. 4. Sub-section 3.4 “Experimental Setup” should be put in Section 4 “Experiments” instead of Section 3 “Approach”. All in all, this paper lacks novelty and theoretical support.
ICLR
Title MAML is a Noisy Contrastive Learner in Classification Abstract Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms, achieving remarkable success in various learning problems. Yet, with the unique design of nested inner-loop and outer-loop updates, which govern the task-specific and meta-model-centric learning, respectively, the underlying learning objective of MAML remains implicit, impeding a more straightforward understanding of it. In this paper, we provide a new perspective of the working mechanism of MAML. We discover that MAML is analogous to a meta-learner using a supervised contrastive objective in classification. The query features are pulled towards the support features of the same class and against those of different classes. Such contrastiveness is experimentally verified via an analysis based on the cosine similarity. Moreover, we reveal that vanilla MAML has an undesirable interference term originating from the random initialization and the cross-task interaction. We thus propose a simple but effective technique, the zeroing trick, to alleviate the interference. Extensive experiments are conducted on both mini-ImageNet and Omniglot datasets to validate the consistent improvement brought by our proposed method. 1 1 INTRODUCTION Humans can learn from very few samples. They can readily establish their cognition and understanding of novel tasks, environments, or domains even with very limited experience in the corresponding circumstances. Meta-learning, a subfield of machine learning, aims at equipping machines with such capacity to accommodate new scenarios effectively (Vilalta & Drissi, 2002; Grant et al., 2018). Machines learn to extract task-agnostic information so that their performance on unseen tasks can be improved (Hospedales et al., 2020). One highly influential meta-learning algorithm is Model Agnostic Meta-Learning (MAML) (Finn et al., 2017), which has inspired numerous follow-up extensions (Nichol et al., 2018; Rajeswaran et al., 2019; Liu et al., 2019; Finn et al., 2019; Jamal & Qi, 2019; Javed & White, 2019). MAML estimates a set of model parameters such that an adaptation of the model to a new task only requires some updates to those parameters. We take the few-shot classification task as an example to review the algorithmic procedure of MAML. A few-shot classification problem refers to classifying samples from some classes (i.e. query data) after seeing a few examples per class (i.e. support data). In a meta-learning scenario, we consider a distribution of tasks, where each task is a few-shot classification problem and different tasks have different target classes. MAML aims to meta-train the base-model based on training tasks (i.e., the meta-training dataset) and evaluate the performance of the base-model on the testing tasks sampled from a held-out unseen dataset (i.e. the meta-testing dataset). In meta-training, MAML follows a bi-level optimization scheme composed of the inner loop and the outer loop, as shown in Appendix A (please refer to Section 2 for detailed definition). In the inner loop (also known as fast adaptation), the base-model θ is updated to θ′ using the support set. In the outer loop, a loss is evaluated on θ′ using the query set, and its gradient is computed with respect to θ to update the base-model. Since the outer loop requires computing the gradient of gradient (as the update in the inner loop is included in the entire computation graph), it is called second-order MAML (SOMAML). 
To prevent computing the Hessian matrix, Finn et al. (2017) propose first-order MAML (FOMAML), which uses the gradient computed with respect to the inner-loop-updated parameters θ′ to update the base-model. (Footnote 1: Code available at https://github.com/IandRover/MAML_noisy_contrasive_learner.) The widely accepted intuition behind MAML is that the models are encouraged to learn general-purpose representations which are broadly applicable not only to the seen tasks but also to novel tasks (Finn et al., 2017; Raghu et al., 2020; Goldblum et al., 2020). Raghu et al. (2020) confirm this perspective by showing that during fast adaptation, the majority of changes being made are in the final linear layers. In contrast, the convolution layers (as the feature encoder) remain almost static. This implies that the models trained with MAML learn a good feature representation and that they only have to change the linear mapping from features to outputs during fast adaptation. Similar ideas of freezing feature extractors during the inner loop have also been explored (Lee et al., 2019; Bertinetto et al., 2019; Liu et al., 2020), and have been held as an assumption in theoretical works (Du et al., 2021; Tripuraneni et al., 2020; Chua et al., 2021). While this intuition sounds satisfactory, we step further and ask the following fundamental questions: (1) In what sense does MAML guide any model to learn general-purpose representations? (2) How do the inner and outer loops in the training mechanism of MAML collaborate to achieve this? (3) What is the role of support and query data, and how do they interact with each other? In this paper, we answer these questions and provide new insights into the working mechanism of MAML, which turns out to be closely connected to supervised contrastive learning (SCL)2. Here, we provide a sketch of our analysis in Figure 1. We consider a setting of (a) a 5-way 1-shot paradigm of few-shot learning, (b) the mean square error (MSE) between the one-hot encoding of the ground-truth label and the outputs as the objective function, and (c) MAML with a single inner-loop update. At the beginning of the inner loop, we set the linear layer w0 to zero. Then, the inner loop update of w0 is equivalent to adding the support features to w0. In the outer loop, the output of a query sample q1 is actually the inner product between the query feature ϕ(q1) and all support features (the learning rate is omitted for now). As the ground truth is a one-hot vector, the encoder is trained either to minimize the inner product between the query features and the support features (when they are from different classes, as shown in the green box), or to pull the inner product between the query features and the support features to 1 (when they have the same label, as shown in the red box). Therefore, the inner loop and the outer loop together manifest an SCL objective. In particular, as the vanilla implementation of MAML uses a non-zero (random) initialization for the linear layer, we will show that such initialization leads to a noisy SCL objective which impedes training. In this paper, we first review a formal definition of SCL, present the more general case of MAML with a cross-entropy loss in classification, and show that the underlying learning protocol of vanilla MAML is an interfered SCL in Section 2. We then experimentally verify the supervised contrastiveness of MAML and propose to mitigate the interference with our simple but effective techniques of zero-initialization and the zeroing trick (cf. Section 3).
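For readers who prefer code to pseudocode, the sketch below is a minimal first-order MAML step on a single task under the "frozen encoder in the inner loop" view just described. It is a schematic PyTorch illustration rather than the released implementation: the tiny MLP encoder, the shapes, and the hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fomaml_task_loss(encoder, w, support, query, inner_lr=0.01, inner_steps=1):
    """Outer-loop loss of first-order MAML on one task, with the encoder frozen
    during the inner loop. w is the initial linear head of shape (n_feat, n_way)."""
    (xs, ys), (xq, yq) = support, query
    feat_s = encoder(xs).detach()                  # support features; detached => no gradient through them
    w_task = w
    for _ in range(inner_steps):                   # inner loop: adapt only the head on support data
        inner_loss = F.cross_entropy(feat_s @ w_task, ys)
        (g,) = torch.autograd.grad(inner_loss, w_task)
        w_task = w_task - inner_lr * g.detach()    # detached inner gradient => first-order update path
    return F.cross_entropy(encoder(xq) @ w_task, yq)   # outer loop: query loss with the adapted head

# toy usage: 5-way 1-shot with a tiny MLP encoder on flattened 8x8 "images"
torch.manual_seed(0)
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU())
w = nn.Parameter(torch.randn(32, 5) * 0.01)
opt = torch.optim.Adam(list(encoder.parameters()) + [w], lr=1e-3)

xs, ys = torch.randn(5, 1, 8, 8), torch.arange(5)              # one support image per class
xq, yq = torch.randn(15, 1, 8, 8), torch.arange(5).repeat(3)   # three query images per class
loss = fomaml_task_loss(encoder, w, (xs, ys), (xq, yq))
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```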
In summary, our main contributions are three-fold: • We show MAML is implicitly an SCL algorithm in classification and the noise comes from the randomly initialized linear layer and the cross-task interaction. • We verify the inherent contrastiveness of MAML based on the cosine similarity analysis. • Our experiments show that applying the zeroing trick induces a notable improvement in testing accuracy during training and that that during meta-testing, a pronounced increase in the accuracy occurs when the zeroing trick is applied. 2 WHY MAML IS IMPLICITLY A NOISY SUPERVISED CONTRASTIVE ALGORITHM? 2.1 PRELIMINARY: SUPERVISED CONTRASTIVE LEARNING In this work, we aim to bridge MAML and supervised contrastive learning (SCL) and attribute the success of MAML to SCL’s capacity in learning good representations. Thus, we would like to introduce SCL briefly. 2We use the term supervised contrastiveness to refer to the setting of using ground truth label information to differentiate positive samples and negative samples (Khosla et al., 2020). This setting is different from (unsupervised/self-supervised) contrastive learning. Supervised contrastive learning, proposed by Khosla et al. (2020), is a generalization of several metric learning algorithms, such as triplet loss and N-pair loss (Schroff et al., 2015; Sohn, 2016), and has shown the best performance in classification compared to SimCLR and CrossEntropy. In Khosla et al. (2020), SCL is described as “contrasts the set of all samples from the same class as positives against the negatives from the remainder of the batch” and “embeddings from the same class are pulled closer together than embeddings from different classes.” For a sample s, the label information is leveraged to indicate positive samples (i.e., samples having the same label as sample s) and negative samples (i.e., samples having different labels to sample s). The loss of SCL is designed to increase the similarity (or decrease the metric distance) of embeddings of positive samples and to reduce the similarity (or increase the metric distance) of embeddings of negative samples (Khosla et al., 2020). In essence, SCL combines supervised learning and contrastive learning and differs from supervised learning in that the loss contains a measurement of the similarity (or distance) between the embedding of a sample and embeddings of its positive/negative sample pairs. Now we give a formal definition of SCL. For a set of N samples drawn from a n-class dataset. Let i ∈ I = {1, ..., N} be the index of an arbitrary sample. Let A(i) = I \ {i}, P (i) be the set of indices of all positive samples of sample i, and N(i) = A(i) \ P (i) be the set of indices of all negative samples of sample i. Let zi indicates the embedding of sample i. Definition 1 Let Msim be a measurement of similarity (e.g., inner product, cosine similarity). Training algorithms that adopt loss of the following form belong to SCL: LSCL = ∑ i ∑ p∈P (i) c−p,iMsim(zi, zp) + ∑ i ∑ n∈N(i) c+n,iMsim(zi, zn) + c (1) where c−p,i < 0 and c + n,i > 0 for all n, p and i; and c is a constant independent of samples. We further define that a training algorithm that follows Eq.(1), but with either (a) c+n,i < 0 for some n, i or (b) c is a constant dependent of samples, belongs to noisy SCL. 2.2 PROBLEM SETUP We provide the detailed derivation to show that MAML is implicitly a noisy SCL, where we adopt the few-shot classification as the example application. In this section, we focus on the meta-training period. 
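Before the derivation, it may help to see Definition 1 above in code. The sketch below instantiates Eq. (1) with the inner product as Msim and the simplest admissible coefficients (c−p,i = −1, c+n,i = +1, c = 0); the embeddings and labels are random placeholders, and this is an illustration rather than the loss used later in the paper.

```python
import torch

def scl_loss(z, labels):
    """A minimal instance of Eq. (1): inner product as Msim, coefficients -1 / +1.
    z: (N, d) embeddings; labels: (N,) integer class labels."""
    sim = z @ z.t()                                    # Msim(z_i, z_j) for all pairs
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # mask of same-class pairs
    eye = torch.eye(len(z), dtype=torch.bool)
    pos, neg = same & ~eye, ~same
    # pull positives together (negative coefficient), push negatives apart (positive coefficient)
    return -sim[pos].sum() + sim[neg].sum()

z = torch.randn(8, 4, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = scl_loss(z, labels)
loss.backward()
print(float(loss))
```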
Consider drawing a batch of tasks {T1, . . . , TNbatch} from a meta-training task distribution D. Each task Tn contains a support set Sn and a query set Qn, where Sn = {(sm, tm)} Nway×Nshot m=1 , Qn = {(qm, um)} Nway×Nquery m=1 , sm, qm ∈ RNin are data samples, and tm, um ∈ {1, ..., Nway} are labels. We denote Nway the number of classes in each task, and {Nshot, Nquery} respectively the number of support and query samples per class. The architecture of our base-model comprises of a convolutional encoder ϕ : RNin → RNf (parameterized by φ), a fully connected linear head w ∈ RNf×Nway , and a Softmax output layer, where Nf is the dimension of the feature space. We denote the kth column of w as wk. Note that the base-model parameters θ consist of φ and w. As shown in Appendix A, both FOMAML and SOMAML adopt a training strategy comprising the inner loop and the outer loop. At the beginning of a meta-training iteration, we sample Nbatch tasks. For each task Tn, we perform inner loop updates using the inner loop loss (c.f. Eq. (2)) evaluated on the support data, and then evaluate the outer loop loss (c.f. Eq. (3)) on the updated base-model using the query data. In the ith step of the inner loop, the parameters {φi−1,wi−1} are updated to {φi,wi} using the multi-class cross entropy loss evaluated on the support dataset Sn as L{φi,wi},Sn = E (s,t)∼Sn Nway∑ j=1 1j=t[− log exp(ϕi(s)⊤wj i)∑Nway k=1 exp(ϕ i(s)⊤wki) ] (2) After Nstep inner loop updates, we compute the outer loop loss using the query data Qn: L{φNstep ,wNstep},Qn = E(q,u)∼Qn [− log exp(ϕ Nstep(q)⊤wu Nstep)∑Nway k=1 exp(ϕ Nstep(q)⊤wkNstep) ] (3) Then, we sum up the outer loop losses of all tasks, and perform gradient descent to update the base-model’s initial parameters {φ0,w0}. To show the supervised contrastiveness entailed in MAML, we adopt an assumption that the Encoder ϕ is Frozen during the Inner Loop (the EFIL assumption) and we discuss the validity of the assumption in Section 2.6. Without loss of generality, we consider training models with MAML with Nbatch = 1 and Nstep = 1, and we discuss the generalized version in Section 2.6. For simplicity, the kth element of model output exp(ϕ(s) ⊤wk 0)∑Nway j=1 exp(ϕ(s) ⊤wj0) (respectively exp(ϕ(q) ⊤wk 1)∑Nway j=1 exp(ϕ(q) ⊤wj1) ) of sample s (respectively q) is denoted as sk (respectively qk). 2.3 INNER LOOP AND OUTER LOOP UPDATE OF LINEAR LAYER AND ENCODER In this section, we primarily focus on the update of parameters in the case of FOMAML. The full derivation and discussion of SOMAML are provided in Appendix B. Inner loop update of the linear layer. In the inner loop, the linear layer w0 is updated to w1 with a learning rate η as shown in Eq. (4) in both FOMAML and SOMAML. In contrast to the example in Figure 1, the columns of the linear layer are added with the weighted sum of the features extracted from support samples (i.e., support features). Compared to wk0, wk1 is pushed towards the support features of the same class (i.e., class k) with strength of 1 − sk, while being pulled away from the support features of different classes with strength of sk. wk 1 = wk 0 − η ∂L{φ,w0},S ∂wk0 = wk 0 + η E (s,t)∼S (1k=t − sk)ϕ(s) (4) Outer loop update of the linear layer. In the outer loop, w0 is updated using the query data with a learning rate ρ. For FOMAML, the final linear layer is updated as follows. w′k 0 = wk 0 − ρ ∂L{φ,w1},Q ∂wk1 = wk 0 + ρ E (q,u)∼Q (1k=u − qk)ϕ(q) (5) Note that the computation of qk requires the inner-loop updated w 1. Generally speaking, Eq. (5) resembles Eq. (4). 
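The update in Eq. (4) (and, with query data and learning rate ρ in place of η, the FOMAML update in Eq. (5)) can be written as one line of matrix algebra over the whole support set. The sketch below is an illustration with random stand-in features, not the authors' code.

```python
import torch
import torch.nn.functional as F

def head_update(w, feats, labels, lr):
    """Direct form of Eq. (4): w_k <- w_k + lr * E_{(s,t)} (1[k=t] - s_k) phi(s).
    w: (n_feat, n_way); feats: (N, n_feat) support features phi(s); labels: (N,)."""
    probs = F.softmax(feats @ w, dim=1)                          # s_k for every support sample
    onehot = F.one_hot(labels, num_classes=w.shape[1]).float()   # 1[k = t]
    return w + lr * feats.t() @ (onehot - probs) / len(feats)

torch.manual_seed(0)
feats = torch.randn(5, 16)      # 5-way 1-shot support features (stand-ins for phi(s))
labels = torch.arange(5)
w0 = torch.zeros(16, 5)         # with a zeroed head, every s_k equals 1/n_way
w1 = head_update(w0, feats, labels, lr=0.01)
print(w1.shape)
```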
It is obvious that, in the outer loop, the query features are added weightedly to the linear layer, and the strength of change relates to the output value. In other words, after the outer loop update, the linear layer memorizes the query features of current tasks. This can cause a crosstask interference because in the next inner loop there would be additional inner products between the support features of the next tasks and the query features of the current tasks. Outer loop update of the encoder. Using the chain rule, the gradient of the outer loop loss with respect to φ (i.e., the parameters of the encoder) is given by ∂L{φ,w1},Q ∂φ = E(q,u)∼Q ∂L{φ,w1},Q ∂ϕ(q) ∂ϕ(q) ∂φ + E(s,t)∼S ∂L{φ,w1},Q ∂ϕ(s) ∂ϕ(s) ∂φ , where the second term can be neglected when FOMAML is considered. Below, we take a deeper look at the backpropagated error of one query data (q, u) ∼ Q. The full derivation is provided in Appendix B.2. ∂L{φ,w1},q ∂ϕ(q) = Nway∑ j=1 (qj − 1j=u)wj 0 + η E (s,t)∼S [−( Nway∑ j=1 qjsj) + su + qt − 1t=u]ϕ(s) (6) 2.4 MAML IS A NOISY CONTRASTIVE LEARNER Reformulating the outer loop loss for the encoder as a noisy SCL loss. We can observe from Eq. (6) that the actual loss for the encoder (evaluated on a single query data (q, u) ∼ Q) is as the following. L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj 0⊤ stop gradient ϕ(q) + η E (s,t)∼S [− Nway∑ j=1 qjsj + su + qt − 1t=u]ϕ(s) ⊤ stop gradient ϕ(q) (7) For SOMAML, the range of “stop gradient” in the second term is different: L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj 0⊤ stop gradient ϕ(q) + η E (s,t)∼S [− Nway∑ j=1 qjsj + su + qt − 1t=u] stop gradient ϕ(s)⊤ϕ(q) (8) With these two reformulations, we observe the essential difference between FOMAML and SOMAML is the range of stop gradient. We provide detailed discussion and instinctive illustration in Appendix B.5 on how this explains the phenomenon that SOMAML often leads to faster convergence. To better deliberate the effect of each term in the reformulated outer loop loss, we define the first term in Eq. (7) or Eq. (8) as interference term, the second term as noisy contrastive term, and the coefficients − ∑Nway j=1 qjsj + su + qt − 1t=u as contrastive coefficients. Understanding the interference term. In the case of j = u, the outer loop loss forces the model to minimize (qj − 1)wj0⊤ϕ(q). This can be problematic because (a) at the beginning of training, w0 is assigned with random values and (b) w0 is added with query features of previous tasks as shown in Eq. (5). Consequently, ϕ(q) is pushed to a direction composed of previous query features or to a random direction, introducing an unnecessary cross-task interference or an initialization interference that slows down the training of the encoder. Noting that the cross-task interference also occurs at the testing period, since, at testing stage, w0 is already added with query features of training tasks, which can be an interference to testing tasks. Understanding the noisy contrastive term. When the query and support data have the same label (i.e., u = t), e.g., class 1, the contrastive coefficients becomes − ∑Nway j=2 qjsj − q1s1 + s1 + q1− 1, which is − ∑Nway j=2 qjsj − (1 − q1)(1 − s1) < 0. This indicates the encoder would be updated to maximize the inner product between ϕ(q) and the support features of the same class. However, when the query and support data are in different classes, the sign of the contrastive coefficient can sometimes be negative. 
The outer loop loss thus cannot well contrast the query features against the support features of different classes, making this loss term not an ordinary SCL loss. To better illustrate the influence of the interference term and the noisy contrastive term, we provide an ablation experiment in Appendix B.7. Theorem 1 below formally connects MAML to SCL. Theorem 1 With the EFIL assumption, FOMAML is a noisy SCL algorithm. With assumptions of (a) EFIL and (b) a single inner-loop update, SOMAML is a noisy SCL algorithm. Proof: For FOMAML, both Eq. (7) (one inner loop update step) and Eq. (26) (multiple inner loop update steps) follows Definition 1. For SOMAML, Eq. (8) follows Definition 1. Introduction of the zeroing trick makes Eq. (7) and Eq. (8) SCL losses. To tackle the interference term and make the contrastive coefficients more accurate, we introduce the zeroing trick: setting the w0 to be zero after each outer loop update, as shown in Appendix A. With the zeroing trick, the original outer loop loss (of FOMAML) becomes L{φ,w1},q = η E (s,t)∼S (qt − 1t=u)ϕ(s) ⊤ stop gradient ϕ(q) (9) For SOMAML, the original outer loop loss becomes L{φ,w1},q = η E (s,t)∼S (qt − 1t=u) stop gradient ϕ(s)⊤ϕ(q) (10) The zeroing trick brings two nontrivial effects: (a) eliminating the interference term in both Eq. (7) and Eq. (8); (b) making the contrastive coefficients follow SCL. For (b), since all the predictive values of support data become the same, i.e., sk = 1Nway , the contrastive coefficient becomes qt − 1t=u, which is negative when the support and query data have the same label, and positive otherwise. With the zeroing trick, the contrastive coefficient follows the SCL loss, as summarized below. Corollary 1 With mild assumptions of (a) EFIL, (b) a single inner-loop update and (c) training with the zeroing trick (i.e., the linear layer is zeroed at the end of each outer loop), both FOMAML and SOMAML are SCL algorithms. Proof: Both Eq. (9) and Eq. (10) follow Definition 1. The introduction of the zeroing trick makes the relationship between MAML and SCL more straightforward. Generally speaking, by connecting MAML and SCL, we can better understand other MAML-based meta-learning studies. 2.5 RESPONSES TO QUESTIONS IN SECTION 1 In what sense does MAML guide any model to learn general-purpose representations? Under the EFIL assumption, MAML is a noisy SCL algorithm in a classification paradigm. The effectiveness of MAML in enabling models to learn general-purpose representations can be attributed to the SCL characteristics of MAML. How do the inner and outer loops in the training mechanism of MAML collaboratively prompt to achieve so? MAML adopts the inner and outer loops to perform noisy SCL sequentially. In the inner loop, the features of support data are memorized by w via inner-loop update. In the outer loop, the softmax output of the query data thus contains the inner products between the support features and the query feature. What is the role of support and query data, and how do they interact with each other? We show that the original loss in MAML can be reformulated as a loss term containing the inner products of the embedding of the support and query data. In FOMAML, the support features act as the reference, while the query features are updated to move towards the support features of the same class and against those of the different classes. 2.6 GENERALIZATION OF OUR ANALYSIS In Appendix C, we provide the analysis where Nbatch ≥ 1 and Nstep ≥ 1. 
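As an aside, the zeroing trick referenced in Corollary 1 is a one-line operation in practice: after every outer-loop update (and, as discussed later in Section 3.3, once at the start of meta-testing) the linear head is reset to zero. The sketch below assumes `w` is the head parameter; the surrounding loop is schematic.

```python
import torch

def apply_zeroing_trick(w):
    """Zeroing trick: reset the linear head to zero after each outer-loop update,
    so the next inner loop starts from w0 = 0 and the outer-loop loss reduces to
    the SCL form of Eq. (9)/(10)."""
    with torch.no_grad():
        w.zero_()

# schematic meta-training iteration (outer_update stands for the usual MAML step)
# for tasks in task_loader:
#     outer_update(encoder, w, tasks)
#     apply_zeroing_trick(w)          # also applied once before meta-testing
```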
For the EFIL assumption, it can hardly be dropped because the behavior of the updated encoder is intractable. Besides, Raghu et al. (2020) show that the representations of intermediate layers do not change notably during the inner loop of MAML, and thus it is understood that the main function of the inner loop is to change the final linear layer. Furthermore, the EFIL assumption is empirically reasonable, since previous works (Raghu et al., 2020; Lee et al., 2019; Bertinetto et al., 2019; Liu et al., 2020) yield comparable performance while leaving the encoder untouched during the inner loop. With our analysis, one may notice that MAML is approximately a metric-based few-shot learning algorithm. From a high-level perspective, under the EFIL assumption, second-order MAML is similar to metric-based few-shot learning algorithms, such as MatchingNet (Vinyals et al., 2016), Prototypical network (Snell et al., 2017), and Relation network (Sung et al., 2018). The main difference lies in the metric and the way prototypes are constructed. Our work follows the setting adopted by MAML, such as using negative LogSoftmax as objective function, but we can effortlessly generalize our analysis to a MSE loss as had been shown in Figure 1. As a result, our work points out a new research direction in improving MAML by changing the objective functions in the inner and the outer loops, e.g., using MSE for the inner loop but negative LogSoftmax for the outer loop. Besides, in MAML, we often obtain the logits by multiplying the features by the linear weight w. Our work implies future direction as to alternatively substitute this inner product operation with other metrics or other similarity measurements such as cosine similarity or negative Euclidean distance. 3 EXPERIMENTAL RESULTS In this section, we provide empirical evidence of the supervised contrastiveness of MAML and show that zero-initialization of w0, reduction in the initial norm of w0, or the application of zeroing trick can speed up the learning profile. This is applicable to both SOMAML and FOMAML. 3.1 SETUP We conduct our experiments on the mini-ImageNet dataset (Vinyals et al., 2016; Ravi & Larochelle, 2017) and the Omniglot dataset (Lake et al., 2015). For the results on the Omniglot dataset, please refer to Appendix E. For the mini-ImageNet, it contains 84 × 84 RGB images of 100 classes from the ImageNet dataset with 600 samples per class. We split the dataset into 64, 16 and 20 classes for training, validation, and testing as proposed in (Ravi & Larochelle, 2017). We do not perform hyperparameter search and thus are not using the validation data. For all our experiments of applying MAML into few-shot classification problem, where we adopt two experimental settings: 5-way 1- shot and 5-way 5-shot, with the batch size Nbatch being 4 and 2, respectively (Finn et al., 2017). The few-shot classification accuracy is calculated by averaging the results over 400 tasks in the test phase. For model architecture, optimizer and other experimental details, please refer to Appendix D.1. 3.2 COSINE SIMILARITY ANALYSIS VERIFIES THE IMPLICIT CONTRASTIVENESS IN MAML In Section 2, we show that the encoder is updated so that the query features are pushed towards the support features of the same class and pulled away from those of different classes. Here we verify this supervised contrastiveness experimentally. Consider a relatively overfitting scenario where there are five classes of images and for each class there are 20 support images and 20 query images. 
We fix the support and query set (i.e. the data is not resampled every iteration) to verify the concept that the support features work as positive and negative samples. Channel shuffling is used to avoid the undesirable channel memorization effect (Jamal & Qi, 2019; Rajendran et al., 2020). We train the model using FOMAML and examine how well the encoder can separate the data of different classes in the feature space by measuring the averaged cosine similarities between the features of each class. The results are averaged over 10 random seeds. As shown in the top row of Figure 2, the model trained with MAML learns to separate the features of different classes. Moreover, the contrast between the diagonal and the off-diagonal entries of the heatmap increases as we remove the initialization interference (by zero-initializing w0, shown in the middle row) and remove the cross-task interference (by applying the zeroing trick, shown in the bottom row). The result agrees with our analysis that MAML implicitly contains the interference term which can impede the encoder from learning a good feature representation. For experiments on semantically similar classes of images, the result is shown in Section D.3. 3.3 ZEROING LINEAR LAYER AT TESTING TIME INCREASES TESTING ACCURACY Before starting our analysis on benchmark datasets, we note that the cross-task interference can also occur during meta-testing. In the meta-testing stage, the base-model is updated in the inner loop using support data S and then the performance is evaluated using query data Q, where S and Q are drawn from a held-out, unseen meta-testing dataset. Recall that at the end of the outer loop (in meta-training stage), the query features are added weightedly to the linear layer w0. In other words, at the beginning of meta-testing, w0 is already added with the query features of previous training tasks, which can drastically influence the performance on the unseen tasks. To validate this idea, we apply the zeroing trick at meta-testing time (which we refer to zeroing w0 at the beginning of the meta-testing time) and show such trick increases the testing accuracy of the model trained with FOMAML. As illustrated in Figure 3, compared to directly entering meta-testing (i.e. the subplot at the left), additionally zeroing the linear layer at the beginning of each meta-testing time (i.e. the subplot at the right) increases the testing accuracy of the model whose linear layer is randomly initialized or zero-initialized (denoted by the red and orange curves, respectively). And the difference in testing performance sustains across the whole training session. In the following experiments, we evaluate the testing performance only with zeroing the linear layer at the beginning of the meta-testing stage. By zeroing the linear layer, the potential interference brought by the prior (of the linear layer) is ignored. Then, we can fully focus on the capacity of the encoder in learning a good feature representation. 3.4 SINGLE INNER LOOP UPDATE SUFFICES WHEN USING THE ZEROING TRICK In Eq. (4) and Eq. (21), we show that the features of the support data are added to the linear layer in the inner loop. Larger number of inner loop update steps can better offset the effect of interference brought by a non-zeroed linear layer. In other words, when the models are trained with the zeroing trick, a larger number of inner loop updates can not bring any benefit. We validate this intuition in Figure 4 under a 5-way 1-shot setting. 
In the original FOMAML, the models trained with a single inner loop update step (denoted as red curve) converge slower than those trained with update step of 7 (denoted as purple curve). On the contrary, when the models are trained with the zeroing trick, models with various inner loop update steps converge at the same speed. 3.5 EFFECT OF INITIALIZATION AND THE ZEROING TRICK In Eq. (7), we observe an interference derived from the historical task features or random initialization. We validate our formula by examining the effects of (1) reducing the norm of w0 at initialization and (2) applying the zeroing trick. From Figure 5, the performance is higher when the initial norm of w0 is lower. Compared to random initialization, reducing the norm via down-scaling w0 by 0.7 yields visible differences. Besides, the testing accuracy of MAML with zeroing trick (the purple curve) outperforms that of original MAML. 4 CONCLUSION This paper presents an extensive study to demystify how the seminal MAML algorithm guides the encoder to learn a general-purpose feature representation and how support and query data interact. Our analysis shows that MAML is implicitly a supervised contrastive learner using the support features as positive and negative samples to direct the update of the encoder. Moreover, we unveil an interference term hidden in MAML originated from the random initialization or cross-task interaction, which can impede the representation learning. Driven by our analysis, removing the interference term by a simple zeroing trick renders the model unbiased to seen or unseen tasks. Furthermore, we show constant improvements in the training and testing profiles with this zeroing trick, with experiments conducted on the mini-ImageNet and Omniglot datasets. APPENDIX A ORIGINAL MAML AND MAML WITH THE ZEROING TRICK Algorithm 1 Second-order MAML Require: Task distribution D Require: η, ρ: inner loop and outer loop learning rates Require: Randomly initialized base-model parameters θ 1: while not done do 2: Sample tasks {T1, . . . TNbatch} from D 3: for n = 1, 2, . . . , Nbatch do 4: {Sn, Qn} ← sample from Tn 5: θn = θ 6: for i = 1, 2, . . . , Nstep do 7: θn ← θn − η∇θnLθn,Sn 8: end for 9: end for 10: Update θ ← θ − ρ ∑Nbatch n=1 ∇θLθn,Qn 11: end while Algorithm 2 First-order MAML Require: Task distribution D Require: η, ρ: inner loop and outer loop learning rates Require: Randomly initialized base-model parameters θ 1: while not done do 2: Sample tasks {T1, . . . TNbatch} from D 3: for n = 1, 2, . . . , Nbatch do 4: {Sn, Qn} ← sample from Tn 5: θn = θ 6: for i = 1, 2, . . . , Nstep do 7: θn ← θn − η∇θnLθn,Sn 8: end for 9: end for 10: Update θ ← θ − ρ ∑Nbatch n=1 ∇θnLθn,Qn 11: end while Algorithm 3 Second-order MAML with the zeroing trick Require: Task distribution D Require: η, ρ: inner loop and outer loop learning rates Require: Randomly initialized base-model parameters θ 1: Set w← 0 (the zeroing trick) 2: while not done do 3: Sample tasks {T1, . . . TNbatch} from D 4: for n = 1, 2, . . . , Nbatch do 5: {Sn, Qn} ← sample from Tn 6: θn = θ 7: for i = 1, 2, . . . , Nstep do 8: θn ← θn − η∇θnLθn,Sn 9: end for 10: end for 11: Update θ ← θ − ρ ∑Nbatch n=1 ∇θLθn,Qn 12: Set w← 0 (the zeroing trick) 13: end while B SUPPLEMENTARY DERIVATION In this section, we provide the full generalization and further discussion that supplement the main paper. We consider the case of Nbatch = 1 and Nstep = 1 under the EFIL assumption. We provide the outer loop update of the linear layer under SOMAML in Section B.1. 
Next, we offer the full derivation of the outer loop update of the encoder in Section B.2. Then, we reformulate the outer loop loss for the encoder in both FOMAML and SOMAML in Section B.3 and Section B.4. Afterward, we discuss the main difference in FOMAML and SOMAML in detail in Section B.5. Finally, we show the performance of the models trained using the reformulated loss in Section B.6. B.1 THE DERIVATION OF OUTER LOOP UPDATE FOR THE LINEAR LAYER USING SOMAML Here, we provide the complete derivation of the outer loop update for the linear layer. Using SOMAML with support set S and query set Q, the update of the linear layer follows w′0k = wk 0 − ρ ∂L{φ,w1},Q ∂wk0 = wk 0 − ρ Nway∑ m=1 ∂wm 1 ∂wk0 · ∂L{φ,w1},Q ∂wm1 = wk 0 − ρ∂wk 1 ∂wk0 · ∂L{φ,w1},Q ∂wk1 − ρ Nway∑ m ̸=k ∂wm 1 ∂wk0 · ∂L{φ,w1},Q ∂wm1 = wk 0 + ρ[I − η E (s,t)∼S (sk − s2k)ϕ(s)ϕ(s)T ] E (q,u)∼Q (1k=u − qk)ϕ(q) + ρη ∑ m ̸=k [ E (s,t)∼S (smsk)ϕ(s)ϕ(s)T ][ E (q,u)∼Q (1m=u − qm)ϕ(q)] = wk 0 + ρ[I − η E (s,t)∼S skϕ(s)ϕ(s)T ] E (q,u)∼Q (1k=u − qk)ϕ(q) + ρη Nway∑ m=1 [ E (s,t)∼S (smsk)ϕ(s)ϕ(s)T ][ E (q,u)∼Q (1m=u − qm)ϕ(q)] (11) We can further simplify Eq. (11) to Eq. (12) with the help of the zeroing trick. w′0k = ρ[I − η E (s,t)∼S skϕ(s)ϕ(s)T ] E (q,u)∼Q (1k=u − qk)ϕ(q) (12) This is because the zeroing trick essentially turns the logits of all support samples to zero, and consequently the predicted probability (softmax) output sm becomes 1Nway for all channel m. Therefore, the third term in Eq. (11) turns out to be zero (c.f. Eq. (13)). The equality of Eq. (13) holds since the summation of the (softmax) outputs is one. ρη N2way Nway∑ m=1 [ E (s,t)∼S ϕ(s)ϕ(s)T ][ E (q,u)∼Q (1m=u − qm)ϕ(q)] = ρη N2way [ E (s,t)∼S ϕ(s)ϕ(s)T ] E (q,u)∼Q ϕ(q) Nway∑ m=1 (1m=u − qm) = 0 (13) B.2 THE FULL DERIVATION OF THE OUTER LOOP UPDATE OF THE ENCODER. As the encoder ϕ is parameterized by φ, the outer loop gradient with respect to φ is given by ∂L{φ,w1},Q ∂φ = E(q,u)∼Q ∂L{φ,w1},Q ∂ϕ(q) ∂ϕ(q) ∂φ +E(s,t)∼S ∂L{φ,w1},Q ∂ϕ(s) ∂ϕ(s) ∂φ . We take a deeper look at the backpropagated error ∂L{φ,w1},Q ∂ϕ(q) of the feature of one query data (q, u) ∼ Q, based on the following form: − ∂L{φ,w1},Q ∂ϕ(q) = wu 1 − Nway∑ j=1 (qjwj 1) = Nway∑ j=1 (1j=u − qj)wj 1 = Nway∑ j=1 (1j=u − qj)wj 0 + η Nway∑ j=1 [1j=u − qj ][ E (s,t)∼S (1j=t − sj)ϕ(s)] = Nway∑ j=1 (1j=u − qj)wj 0 + η E (s,t)∼S [( Nway∑ j=1 qjsj)− su − qt + 1t=u]ϕ(s) (14) B.3 REFORMULATION OF THE OUTER LOOP LOSS FOR THE ENCODER AS NOISY SCL LOSS. We can derive the actual loss (evaluated on a single query data (q, u) ∼ Q) that the encoder uses under FOMAML scheme as follows: L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj0⊤ stop gradient ϕ(q)− η E (s,t)∼S [( Nway∑ j=1 qjsj)− su − qt + 1t=u]ϕ(s) ⊤ stop gradient ϕ(q) (15) For SOMAML, we need to additionally plug Eq. (4) into Eq. (3). L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj0⊤ stop gradient ϕ(q)− η E (s,t)∼S [( Nway∑ j=1 qjsj)− su − qt + 1t=u] stop gradient ϕ(s)⊤ϕ(q) (16) B.4 INTRODUCTION OF THE ZEROING TRICK MAKES EQ. (7) AND EQ. (8) SCL LOSSES. Apply the zeroing trick to Eq. (7) and Eq. (8), we can derive the actual loss Eq. (17) and Eq. (18) that the encoder follows. L{φ,w1},q = η E (s,t)∼S (qt − 1t=u)ϕ(s) ⊤ stop gradient ϕ(q) (17) L{φ,w1},q = η E (s,t)∼S (qt − 1t=u) stop gradient ϕ(s)⊤ϕ(q) (18) With these two equations, we can observe the essential difference in FOMAML and SOMAML is the range of stopping gradients. We would further discuss the implication of different ranges of gradient stopping in Appendix B.5. 
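The only difference between Eq. (17) and Eq. (18) is where the stop-gradient is placed, which maps directly onto where .detach() is called in code. The sketch below is an illustration with random tensors standing in for φ(s) and φ(q); it assumes the zeroing trick and a single inner-loop step, and it is not the released implementation.

```python
import torch
import torch.nn.functional as F

def reformulated_loss(feat_s, ys, feat_q, yq, eta=0.01, second_order=False):
    """Eq. (17)/(18): L = eta * E_{(s,t)~S} (q_t - 1[t=u]) <phi(s), phi(q)>, with the
    stop-gradient covering only the coefficient (SOMAML, Eq. (18)) or the coefficient
    together with phi(s) (FOMAML, Eq. (17))."""
    n_way = int(ys.max()) + 1
    # adapted head after one inner step from w0 = 0: w1_k = eta * E_s (1[k=t] - 1/n_way) phi(s)
    w1 = eta * feat_s.detach().t() @ (F.one_hot(ys, n_way).float() - 1.0 / n_way) / len(feat_s)
    q = F.softmax(feat_q @ w1, dim=1)                                   # q_k for every query sample
    coef = (q.gather(1, ys.unsqueeze(0).expand(len(feat_q), -1))        # q_t for each (query, support) pair
            - (yq.unsqueeze(1) == ys.unsqueeze(0)).float()).detach()    # minus 1[t = u], then stop-gradient
    phi_s = feat_s if second_order else feat_s.detach()                 # FOMAML also stops gradient at phi(s)
    return eta * (coef * (feat_q @ phi_s.t())).mean()

torch.manual_seed(0)
feat_s = torch.randn(5, 16, requires_grad=True)    # stand-ins for support features phi(s), 5-way 1-shot
feat_q = torch.randn(15, 16, requires_grad=True)   # stand-ins for query features phi(q)
ys, yq = torch.arange(5), torch.arange(5).repeat(3)

reformulated_loss(feat_s, ys, feat_q, yq, second_order=False).backward()
print(feat_s.grad)                                  # None: FOMAML sends no gradient to the support features
reformulated_loss(feat_s, ys, feat_q, yq, second_order=True).backward()
print(feat_s.grad is not None)                      # True: SOMAML also moves the support features
```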
B.5 DISCUSSION ABOUT THE DIFFERENCE BETWEEN FOMAML AND SOMAML Central to the mystery of MAML is the difference between FOMAML and SOMAML. Plenty of work is dedicated to approximating or estimating the second-order derivatives in the MAML algorithm in a more computational-efficient or accurate manner (Song et al., 2020; Rothfuss et al., 2019; Liu et al., 2019). With the EFIL assumption and our analysis through connecting SCL to these algorithms, we found that we can better understand the distinction between FOMAML and SOMAML from a novel perspective. To better understand the difference, we can compare Eq. (7) with Eq. (8) or compare Eq. (9) with Eq. (10). To avoid being distracted by the interference terms, we provide the analysis of the latter. The main difference between Eq. (9) and Eq. (10) is the range of gradient stopping and we will show that this difference results in a significant distinction in the feature space. To begin with, by chain rule, we have ∂L∂φ = E(q,u)∼Q ∂L ∂ϕ(q) ∂ϕ(q) ∂φ + E(s,t)∼S ∂L ∂ϕ(s) ∂ϕ(s) ∂φ . As we specifically want to know how the encoded features are updated given different losses, we can look at the terms ∂L∂ϕ(q) and ∂L ∂ϕ(s) by differentiating Eq. (9) and Eq. (10) with respect to the features of query data q and support data s, respectively. FOMAML: ∂L ∂ϕ(q) = η E (s,t)∼S (qt − 1t=u)ϕ(s) ∂L ∂ϕ(s) = 0 (19) SOMAML: ∂L ∂ϕ(q) = η E (s,t)∼S (qt − 1t=u)ϕ(s) ∂L ∂ϕ(s) = η E (s,t)∼S (qt − 1t=u)ϕ(q) (20) Obviously, as the second equation in Eq (19) is zero, we know that in FOMAML, the update of the encoder does consider the change of the support features. The encoder is updated to move the query features closer to support features of the same class and further to support features of different classes in FOMAML. On the contrary, we can tell from the above equations that in SOMAML, the encoder is updated to make support features and query features closer if both come from the same class and make support features and query features further if they come from different classes. We illustrate the difference in Figure 6. For simplicity, we do not consider the scale of the coefficients but their signs. The subplot on the left indicates that this FOMAML loss guides the encoder to be updated so that the feature of the query data moves 1) towards the support feature of the same class, and 2) against the support features of the different classes. On the other hand, the SOMAML loss guides the encoder to be updated so that 1) when the support data and query data belong to the same class, their features move closer, and otherwise, their features move further. This generally explains why models trained using SOMAML generally converge faster than those trained using FOMAML. B.6 EXPLICITLY COMPUTING THE REFORMULATING LOSS USING EQ. (7) AND EQ. (8) Under the EFIL assumption, we show that MAML can be reformulated as a loss taking noisy SCL form. Below, we consider a setting of 5-way 1-shot mini-ImageNet few-shot classification task, under the condition of no inner loop update of the encoder. (This is the assumption that our derivation heavily depends on. It means that we now only update the encoder in the outer loop.) We empirically show that explicitly computing the reformulated losses of Eq. (7), Eq. (17) and Eq. (18) yield almost the same curves as MAML (with the EFIL assumption). Please note that the reformulated losses are used to update the encoders, for the linear layer w0, we explicitly update it using Eq. (5). 
Note that although the models trained using FOMAML, FOMAML with the zeroing trick, and SOMAML converge to similar testing accuracy, the overall testing performance during the training process is distinct. The results are averaged over three random seeds. B.7 THE EFFECT OF INTERFERENCE TERM AND NOISY CONTRASTIVE TERM Reformulating the loss of MAML into a noisy SCL form enables us to further investigate the effects brought by the interference term and the noisy contrastive term, both of which we presume to be detrimental. To investigate the effect of the interference term, we simply consider the loss adopted by first-order MAML as in Eq. (7) but with the interference term dropped (denoted as “n1 ×”). As for the noisy contrastive term, the noise comes from the fact that “when the query and support data are in different classes, the sign of the contrastive coefficient can sometimes be negative”, as discussed in Section 2.4. To mitigate this noise, we consider the loss in Eq. (7) with the term −(∑Nway j=1 qjsj) + su dropped from the contrastive coefficient, and denote it as “n2 ×”. On the other hand, we also implement a loss with “n1 ×, n2 ×”, which is actually Eq. (9). We adopt the same experimental setting as Section B.6. In Figure 8, we show the testing profiles of the original reformulated loss (i.e., the curve in red, labeled as “n1 ✓, n2 ✓”), dropping the interference term (i.e., the curve in orange, labeled as “n1 ×, n2 ✓”), dropping the noisy part of the contrastive term (i.e., the curve in green, labeled as “n1 ✓, n2 ×”), or dropping both (i.e., the curve in blue, labeled as “n1 ×, n2 ×”). We can see that either dropping the interference term or dropping the noisy part of the contrastive coefficients yields a profound benefit. To better understand how noisy the noisy contrastive term is, i.e., how often the sign of the contrastive coefficient is negative when the query and support data are in different classes, we explicitly record the ratio of the contrastive term being positive or negative. We adopt the same experimental setting as Section B.6. The result is shown in Figure 9. When the zeroing trick is applied, the ratio of the contrastive term being negative (shown as the red curve on the right subplot) is 0.2, which is 1/Nway with Nway = 5 in our setting. On the other hand, when the zeroing trick is not applied, the ratio of the contrastive term being negative (shown in orange on the right subplot) is larger than 0.2. This additional experiment underscores the need for the zeroing trick. C A GENERALIZATION OF OUR ANALYSIS In this section, we derive a more general case of the encoder update in the outer loop. We consider drawing Nbatch tasks from the task distribution D and having Nstep update steps in the inner loop while keeping the EFIL assumption. To derive the more general case, we use wki,n to denote the kth column of wi,n, where wi,n is updated from w0 using support data Sn for i inner-loop steps. For simplicity, the kth channel of the softmax predictive output, exp(ϕ(s)⊤wki,n) / ∑Nway j=1 exp(ϕ(s)⊤wji,n), of sample s (using wi−1,n) is denoted as si,nk. Inner loop update for the linear layer We give the inner loop update for the final linear layer in Eq. (21) and Eq. (22).
wk i,n = wk i−1,n − η ∂L{φ,wi−1,n},Sn ∂wki−1,n = wk i−1,n + η E (s,t)∼Sn (1k=t − si−1,nk )ϕ(s) (21) wk Nstep,n = wk 0 − η Nstep∑ i=1 E (s,t)∼Sn (1k=t − si−1,nk )ϕ(s) (22) Outer loop update for the linear layer We derive the outer loop update for the linear layer in SOMAML, with denoting I = {1, 2, ..., Nway}: w′k 0 = wk 0 − ρ Nbatch∑ n=1 ∂L{φ,wkNstep,n},Qn ∂wk0 = wk 0 − ρ Nbatch∑ n=1 ∑ p0=k,p1∈I,...,pNway∈I [( Nstep−1∏ i=0 ∂wpi+1 i+1,n ∂wpi i,n ) ∂L{φ,wNstep,n},Qn ∂wpNstep Nstep,n ] (23) When it comes to FOMAML, we have w′k 0 = wk 0 − ρ Nbatch∑ n=1 ∂L{φ,wkNstep,n},Qn ∂wkNstep,n = w0k + ρ Nbatch∑ n=1 E (q,u)∼Qn (1k=u − qNstep,nk )ϕ(q) (24) Outer loop update for the encoder We derive the outer loop update of the encoder under FOMAML as below. We consider the back-propagated error of the feature of one query data (q, u) ∼ Qn. Note that the third equality below holds by leveraging Eq. (21). − ∂L{φ,wNstep,n},Qn ∂ϕ(q) = wu Nstep,n − Nway∑ i=1 (qNstep,ni wi Nstep,n) = Nway∑ i=1 (1i=u − qNstep,ni )wi Nstep,n = Nway∑ i=1 (1i=u − qNstep,ni )[w 0 i + η Nstep∑ p=1 E (s,t)∼Sn (1i=t − sp−1,ni )ϕ(s)] = Nway∑ i=1 (1i=u − qNstep,ni )w 0 i + η Nway∑ i=1 (1i=u − qNstep,ni ) Nstep∑ p=1 E (s,t)∼Sn (1i=u − sp−1,ni )ϕ(s) = Nway∑ i=1 (1i=u − qNstep,ni )w 0 i + η E (s,t)∼Sn Nstep∑ p=1 [( Nway∑ j=1 qNstep,nj s p−1,n j )− s p−1,n u − q Nstep,n t + 1t=u]ϕ(s) (25) Reformulating the Outer Loop Loss for the Encoder as Noisy SCL Loss. From Eq. (25), we can derive the generalized loss (of one query sample (q, u) ∼ Qn) that the encoder uses under FOMAML scheme. L{φ,wNstep,n},q = Nway∑ i=1 (1i=u − qNstep,ni )w 0 i ⊤ stop gradient ϕ(q) + η E (s,t)∼Sn Nstep∑ p=1 [( Nway∑ j=1 qNstep,nj s p−1,n j )− s p−1,n u − q Nstep,n t + 1t=u]ϕ(s) ⊤ stop gradient ϕ(q) (26) D EXPERIMENTS ON MINI-IMAGENET DATASET D.1 EXPERIMENTAL DETAILS IN MINI-IMAGENET DATASET The model architecture contains four basic blocks and one fully connected linear layer, where each block comprises a convolution layer with a kernel size of 3 × 3 and filter size of 64, batch normalization, ReLU nonlineartity and 2 × 2 max-poling. The models are trained with the softmax cross entropy loss function using the Adam optimizer with an outer loop learning rate of 0.001 (Antoniou et al., 2019). The inner loop step size η is set to 0.01. The models are trained for 30000 iterations (Raghu et al., 2020). The results are averaged over four random seeds, and we use the shaded region to indicate the standard deviation. Each experiment is run on either a single NVIDIA 1080-Ti or V100 GPU. The detailed implementation is based on Long (2018) (MIT License). D.2 THE EXPERIMENTAL RESULT OF SOMAML The results with SOMAML are shown in Figure 10. Note that as it is possible that longer training can eventually overcome the noise factor and reach similar performance as the zeroing trick, the benefit of the zeroing trick is best seen at the observed faster convergence results when compared to vanilla MAML. D.3 COSINE SIMILARITY ANALYSIS ON SEMANTICALLY SIMILAR CLASSES VERIFIES THE IMPLICIT CONTRASTIVENESS IN MAML In Figure 2, we randomly sample five classes of images under each random seed. Given the rich diversity of the classes in mini-ImageNet, we can consider that the five selected classes as semantically dissimilar or independent for each random seed. Here, we also provide the experimental outcomes using a dataset composed of five semantically similar classes selected from the miniImageNet dataset: French bulldog, Saluki, Walker hound, African hunting dog, and Golden retriever. 
Likewise to the original setting, we train the model using FOMAML and average the results over ten random seeds. As shown in Figure 11, the result is consistent with Figure 2. In conclusion, we show that the supervised contrastiveness is manifested with the application of the zeroing trick even if a semantically similar dataset is considered. D.4 EXPERIMENTAL RESULTS ON LARGER NUMBER OF SHOTS To empirically verify if our theoretical derivation generalizes to the setting where the number of shots is large, we conduct experiment of a 5-way 25-shot classification task using FOMAML with four random seeds where we adopt mini-ImageNet as the example dataset. As shown in Figure 12, we observe that models trained with the zeroing trick again yield the best performance, consistent with our theoretical work that MAML with the zeroing trick is SCL without noises and interference. D.5 THE ZEROING TRICK MITIGATES THE CHANNEL MEMORIZATION PROBLEM The channel memorization problem (Jamal & Qi, 2019; Rajendran et al., 2020) is a known issue occurring in a non-mutually-exclusive task setting, e.g., the task-specific class-to-label is not randomly assigned, and thus the label can be inferred from the query data alone (Yin et al., 2020). Consider a 5-way K-shot experiment where the total number of training classes is 5 × L. Now we construct tasks by assigning the label t to a class sampled from class tL to (t + 1)L. It is conceivable that the model will learn to directly map the query data to the label without using the information of the support data and thus fails to generalize to unseen tasks. This phenomenon can be explained from the perspective that the tth column of the final linear layer already accumulates the query features from tLth to (t + 1)Lth classes. Zeroing the final linear layer implicitly forces the model to use the imprinted information from the support features for inferring the label and thus mitigates this problem. We use the mini-ImageNet dataset and consider the case of L = 12. As shown in Figure 13, the zeroing trick prevents the model from the channel memorization problem whereas zero-initialization of the linear layer only works out at the beginning. Besides, the performance of models trained with the zeroing trick under this non-mutually-exclusive task setting equals the ones under the conventional few-shot setting as shown in Figure 5. As the zeroing trick clears out the final linear layer and equalizes the value of logits, our result essentially accords with Jamal & Qi (2019) that proposes a regularizer to maximize the entropy of prediction of the meta-initialized model. E EXPERIMENTS ON OMNIGLOT DATASET Omniglot is a hand-written character dataset containing 1623 character classes, each with 20 drawn samples from different people (Lake et al., 2015). The dataset set is splitted into training (1028 classes), validation (172 classes) and testing (423 classes) sets (Vinyals et al., 2016). Since we follow Finn et al. (2017) for setting hyperparamters, we do not use the the validation data. The character images are resized to 28 × 28. For all our experiments, we adopt two experimental settings: 5- way 1-shot and 5-way 5-shot where the batch size Nbatch is 32 and Nquery is 15 for both cases (Finn et al., 2017). The inner loop learning rate η is 0.4. The models are trained for 3000 iterations using FOMAML or SOMAML. The few-shot classification accuracy is calculated by averaging the results over 1000 tasks in the test stage. 
The model architecture follows the one used for mini-ImageNet, except that we replace the convolution-plus-max-pooling blocks with strided convolutions, as in Finn et al. (2017). The loss function, optimizer, and outer loop learning rate are the same as those used in the experiments on mini-ImageNet. Each experiment is run on a single NVIDIA 1080-Ti GPU. The results are averaged over four random seeds, and the standard deviation is illustrated with the shaded region. The models are trained using FOMAML unless stated otherwise. The detailed implementation is based on Deleu (2020) (MIT License).
We revisit the application of the zeroing trick at the testing stage on Omniglot in Figure 14 and observe increased testing accuracy; these results are compatible with the ones on mini-ImageNet (cf. Figure 3 in the main manuscript). In the following experiments, we evaluate the testing performance only after applying the zeroing trick.
In Figure 15, the distinction between the performance of models trained with the zeroing trick and that of zero-initialized models is prominent, sharing remarkable similarity with the results on mini-ImageNet (cf. Figure 5 in the main manuscript) in both the 5-way 1-shot and 5-way 5-shot settings. We also show the testing performance of models trained using SOMAML in Figure 16 under a 5-way 5-shot setting, where there is little distinction in performance between the models trained with the zeroing trick and the ones trained with random initialization (in contrast to the results on mini-ImageNet, cf. Figure 10 in the main manuscript).
For the channel memorization task, we construct non-mutually-exclusive training tasks by assigning the label t (where 1 ≤ t ≤ 5 in a few-shot 5-way setting) to a class sampled from class tL to (t + 1)L, where L is 205 on Omniglot. The class-to-channel assignment is not applied to the testing tasks. The result is shown in Figure 17. For a detailed discussion, please refer to Section D.5.
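As a concrete illustration of the non-mutually-exclusive task construction described above, the following minimal sketch is our own illustration (not the authors' code); `sample_images` is a hypothetical helper standing in for the data pipeline.

import random

def make_non_mutually_exclusive_task(L, n_way=5, n_shot=1, n_query=15):
    # label t is always tied to the fixed block of classes tL .. (t+1)L - 1, so the label
    # can be inferred from the query image alone (the channel memorization setting)
    # `sample_images(cls, n)` is a hypothetical helper returning n images of class `cls`
    support, query = [], []
    for t in range(n_way):
        cls = random.randrange(t * L, (t + 1) * L)
        support += [(img, t) for img in sample_images(cls, n_shot)]
        query += [(img, t) for img in sample_images(cls, n_query)]
    return support, query

# In the conventional (mutually exclusive) setting, the n_way classes are instead drawn from
# the whole training split and the class-to-label assignment is re-randomized for every task.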
1. What is the main contribution of the paper regarding the analysis of MAML algorithms?
2. What are the strengths of the proposed zeroing trick and its effectiveness in removing interference terms?
3. What are the weaknesses or limitations of the paper's experiments and comparisons?
4. Do you have any questions or concerns about the assumptions made in the paper's analysis?
Summary Of The Paper Review
Summary Of The Paper The paper analyzes MAML algorithms. Assuming in the inner loop the encoder is fixed and only last linear layer is updated, they analyze the gradient update and loss terms in the inner loop and outer loops (sec 2.3). Through this effort, the authors claim that there are noisy supervised contrastive term in the outer loop loss (eqn (7) and (8)). They further claim that there are additional interference terms which may degrade the performance of MAML at the beginning of training when the linear layer weights are largely random. To overcome this, they propose a simple zeroing trick by zeroing the initial linear layer weights after each outer loop update, essentially removing the interference terms (eqn (9) and (10)). They conduct experiments to support the contrastiveness in MAML and performance improvement using zeroing trick. Review Strengths The analysis is quite interesting. Their efforts make it explicit the interaction between the support and query set in MAML, and how MAML learns feature encoding. They discover a noisy supervised contrastive loss term in the outer loop loss using the support features as positive and negative samples. Interestingly, they discover another interference term in the outer loop loss which may degrade performance of MAML if last layer linear weights are randomly initialized. To remove the interference term, and also to remove the noise in the supervised contrastive loss, they propose a simple zeroing trick to set the initial weights to be zero after each outer loop update. The trick is simple and reasonable given their analysis results. They conduct experiments to verify the contrastiveness in MAML and improvements using the zeroing trick. Overall, the paper is quite clear and easy to follow. Question and weakness Comparison between eqn (7) and (8) is not clear. Can the authors discuss further the different range of stop gradient? The results in Figure 2 are nice but limited, not sure if these can be observed for other classes. What are the 5 classes in the experiment? Can the authors experiment other classes, especially some semantically similar classes? Further to the analysis in Figure 2, I am not sure if they really "verify the supervised contrastiveness". In particular, the results show that support and query features of same classes become similar as training progresses. I do not think this is particular for supervised contrastiveness. For example, authors can try standard transfer learning and fine-tuning and see if such pattern of consine similarities can be observed. I am ok with the assumption of freezing the encoder in the inner loop and update only linear layer. But with that, I thought some remarks of the paper sound trivial. For example, “In the inner loop, the features of support data are preserved in the linear layer via inner loop update. In the outer loop, the softmax output of the query data thus contains the inner products between the support features and the query feature.” If you can update only the last linear layer in the inner loop then of course support data feature can only reside there, and therefore in the outer loop it would be inner products between support features and query input as the last layer is a linear layer. Did I miss anything? The analysis focuses on classification. Inner product and softmax are critical components in their analysis. But MAML has been applied to other problems, e.g. regression. Perhaps the focus on classification should be reflected in the paper title.
ICLR
Title MAML is a Noisy Contrastive Learner in Classification Abstract Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms, achieving remarkable success in various learning problems. Yet, with the unique design of nested inner-loop and outer-loop updates, which govern the task-specific and meta-model-centric learning, respectively, the underlying learning objective of MAML remains implicit, impeding a more straightforward understanding of it. In this paper, we provide a new perspective of the working mechanism of MAML. We discover that MAML is analogous to a meta-learner using a supervised contrastive objective in classification. The query features are pulled towards the support features of the same class and against those of different classes. Such contrastiveness is experimentally verified via an analysis based on the cosine similarity. Moreover, we reveal that vanilla MAML has an undesirable interference term originating from the random initialization and the cross-task interaction. We thus propose a simple but effective technique, the zeroing trick, to alleviate the interference. Extensive experiments are conducted on both mini-ImageNet and Omniglot datasets to validate the consistent improvement brought by our proposed method. 1 1 INTRODUCTION Humans can learn from very few samples. They can readily establish their cognition and understanding of novel tasks, environments, or domains even with very limited experience in the corresponding circumstances. Meta-learning, a subfield of machine learning, aims at equipping machines with such capacity to accommodate new scenarios effectively (Vilalta & Drissi, 2002; Grant et al., 2018). Machines learn to extract task-agnostic information so that their performance on unseen tasks can be improved (Hospedales et al., 2020). One highly influential meta-learning algorithm is Model Agnostic Meta-Learning (MAML) (Finn et al., 2017), which has inspired numerous follow-up extensions (Nichol et al., 2018; Rajeswaran et al., 2019; Liu et al., 2019; Finn et al., 2019; Jamal & Qi, 2019; Javed & White, 2019). MAML estimates a set of model parameters such that an adaptation of the model to a new task only requires some updates to those parameters. We take the few-shot classification task as an example to review the algorithmic procedure of MAML. A few-shot classification problem refers to classifying samples from some classes (i.e. query data) after seeing a few examples per class (i.e. support data). In a meta-learning scenario, we consider a distribution of tasks, where each task is a few-shot classification problem and different tasks have different target classes. MAML aims to meta-train the base-model based on training tasks (i.e., the meta-training dataset) and evaluate the performance of the base-model on the testing tasks sampled from a held-out unseen dataset (i.e. the meta-testing dataset). In meta-training, MAML follows a bi-level optimization scheme composed of the inner loop and the outer loop, as shown in Appendix A (please refer to Section 2 for detailed definition). In the inner loop (also known as fast adaptation), the base-model θ is updated to θ′ using the support set. In the outer loop, a loss is evaluated on θ′ using the query set, and its gradient is computed with respect to θ to update the base-model. Since the outer loop requires computing the gradient of gradient (as the update in the inner loop is included in the entire computation graph), it is called second-order MAML (SOMAML). 
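As a schematic reference for the bi-level procedure just described, a minimal runnable sketch of one FOMAML meta-step on a toy linear model with a squared-error loss is given below. This is our own illustration under simplifying assumptions, not the paper's implementation; SOMAML would additionally backpropagate the query loss through the inner-loop updates, which is omitted here.

import numpy as np

def loss_grad(w, X, Y):
    # gradient of the squared-error loss 0.5 * ||X w - Y||^2 / N for a linear model
    return X.T @ (X @ w - Y) / len(X)

def fomaml_meta_step(w, tasks, eta=0.1, rho=0.01, n_inner_steps=1):
    # tasks: list of ((X_support, Y_support), (X_query, Y_query)) pairs
    meta_grad = np.zeros_like(w)
    for (Xs, Ys), (Xq, Yq) in tasks:
        w_task = w.copy()
        for _ in range(n_inner_steps):               # inner loop: adapt on the support set
            w_task -= eta * loss_grad(w_task, Xs, Ys)
        meta_grad += loss_grad(w_task, Xq, Yq)       # FOMAML: query gradient at adapted params
    return w - rho * meta_grad                       # outer loop: update the meta-parameters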
To prevent computing the Hessian matrix, Finn et al. 1Code available at https://github.com/IandRover/MAML_noisy_contrasive_learner (2017) propose first-order MAML (FOMAML) that uses the gradient computed with respect to the inner-loop-updated parameters θ′ to update the base-model. The widely accepted intuition behind MAML is that the models are encouraged to learn generalpurpose representations which are broadly applicable not only to the seen tasks but also to novel tasks (Finn et al., 2017; Raghu et al., 2020; Goldblum et al., 2020). Raghu et al. (2020) confirm this perspective by showing that during fast adaptation, the majority of changes being made are in the final linear layers. In contrast, the convolution layers (as the feature encoder) remain almost static. This implies that the models trained with MAML learn a good feature representation and that they only have to change the linear mapping from features to outputs during the fast adaptation. Similar ideas of freezing feature extractors during the inner loop have also been explored (Lee et al., 2019; Bertinetto et al., 2019; Liu et al., 2020), and have been held as an assumption in theoretical works (Du et al., 2021; Tripuraneni et al., 2020; Chua et al., 2021). While this intuition sounds satisfactory, we step further and ask the following fundamental questions: (1) In what sense does MAML guide any model to learn general-purpose representations? (2) How do the inner and outer loops in the training mechanism of MAML collaboratively prompt to achieve so? (3) What is the role of support and query data, and how do they interact with each other? In this paper, we answer these questions and give new insights on the working mechanism of MAML, which turns out to be closely connected to supervised contrastive learning (SCL)2. Here, we provide a sketch of our analysis in Figure 1. We consider a setting of (a) a 5-way 1-shot paradigm of few-shot learning, (b) the mean square error (MSE) between the one-hot encoding of groundtruth label and the outputs as the objective function, and (c) MAML with a single inner-loop update. At the beginning of the inner loop, we set the linear layer w0 to zero. Then, the inner loop update of w0 is equivalent to adding the support features to w0. In the outer loop, the output of a query sample q1 is actually the inner product between the query feature ϕ(q1) and all support features (the learning rate is omitted for now). As the groundtruth is an one-hot vector, the encoder is trained to either minimize the inner product between the query features and the support features (when they are from different classes, as shown in the green box), or to pull the inner product between the query features and the support features to 1 (when they have the same label, as shown in the red box). Therefore, the inner loop and the outer loop together manifest a SCL objective. Particularly, as the vanilla implementation of MAML uses non-zero (random) initialization for the linear layer, we will show such initialization leads to a noisy SCL objective which would impede the training. In this paper, we firstly review a formal definition of SCL, present a more general case of MAML with cross entropy loss in classification, and show the underlying learning protocol of vanilla MAML as an interfered SCL in Section 2. We then experimentally verify the supervised contrastiveness of MAML and propose to mitigate the interference with our simple but effective technique of the zeroinitialization and zeroing trick (cf. Section 3). 
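The intuition sketched in Figure 1 (zeroed linear layer, MSE head, a single inner step) can be reproduced numerically. The example below is our own illustration with an arbitrary random encoder output, not the paper's code.

import numpy as np

n_way, feat_dim = 5, 8
rng = np.random.default_rng(0)
support_feats = rng.normal(size=(n_way, feat_dim))    # phi(s_k): one support feature per class
w0 = np.zeros((feat_dim, n_way))                      # linear layer zeroed before the inner loop

# Inner loop with an MSE head and w0 = 0: the gradient step writes each support feature
# into its own column (up to a constant factor absorbed into the learning rate).
w1 = w0 + support_feats.T

# Outer loop: the logit of a query for class k is the inner product <phi(q), phi(s_k)>.
query_feat = support_feats[2] + 0.1 * rng.normal(size=feat_dim)   # a query close to class 2
logits = query_feat @ w1
print(logits.argmax())   # prints 2 in this example: the query is matched to its own class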
In summary, our main contributions are three-fold: • We show MAML is implicitly an SCL algorithm in classification and the noise comes from the randomly initialized linear layer and the cross-task interaction. • We verify the inherent contrastiveness of MAML based on the cosine similarity analysis. • Our experiments show that applying the zeroing trick induces a notable improvement in testing accuracy during training and that that during meta-testing, a pronounced increase in the accuracy occurs when the zeroing trick is applied. 2 WHY MAML IS IMPLICITLY A NOISY SUPERVISED CONTRASTIVE ALGORITHM? 2.1 PRELIMINARY: SUPERVISED CONTRASTIVE LEARNING In this work, we aim to bridge MAML and supervised contrastive learning (SCL) and attribute the success of MAML to SCL’s capacity in learning good representations. Thus, we would like to introduce SCL briefly. 2We use the term supervised contrastiveness to refer to the setting of using ground truth label information to differentiate positive samples and negative samples (Khosla et al., 2020). This setting is different from (unsupervised/self-supervised) contrastive learning. Supervised contrastive learning, proposed by Khosla et al. (2020), is a generalization of several metric learning algorithms, such as triplet loss and N-pair loss (Schroff et al., 2015; Sohn, 2016), and has shown the best performance in classification compared to SimCLR and CrossEntropy. In Khosla et al. (2020), SCL is described as “contrasts the set of all samples from the same class as positives against the negatives from the remainder of the batch” and “embeddings from the same class are pulled closer together than embeddings from different classes.” For a sample s, the label information is leveraged to indicate positive samples (i.e., samples having the same label as sample s) and negative samples (i.e., samples having different labels to sample s). The loss of SCL is designed to increase the similarity (or decrease the metric distance) of embeddings of positive samples and to reduce the similarity (or increase the metric distance) of embeddings of negative samples (Khosla et al., 2020). In essence, SCL combines supervised learning and contrastive learning and differs from supervised learning in that the loss contains a measurement of the similarity (or distance) between the embedding of a sample and embeddings of its positive/negative sample pairs. Now we give a formal definition of SCL. For a set of N samples drawn from a n-class dataset. Let i ∈ I = {1, ..., N} be the index of an arbitrary sample. Let A(i) = I \ {i}, P (i) be the set of indices of all positive samples of sample i, and N(i) = A(i) \ P (i) be the set of indices of all negative samples of sample i. Let zi indicates the embedding of sample i. Definition 1 Let Msim be a measurement of similarity (e.g., inner product, cosine similarity). Training algorithms that adopt loss of the following form belong to SCL: LSCL = ∑ i ∑ p∈P (i) c−p,iMsim(zi, zp) + ∑ i ∑ n∈N(i) c+n,iMsim(zi, zn) + c (1) where c−p,i < 0 and c + n,i > 0 for all n, p and i; and c is a constant independent of samples. We further define that a training algorithm that follows Eq.(1), but with either (a) c+n,i < 0 for some n, i or (b) c is a constant dependent of samples, belongs to noisy SCL. 2.2 PROBLEM SETUP We provide the detailed derivation to show that MAML is implicitly a noisy SCL, where we adopt the few-shot classification as the example application. In this section, we focus on the meta-training period. 
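To make Definition 1 concrete, a minimal sketch of a loss of this form with the inner product as the similarity measurement M_sim is shown below. The embeddings, labels, and coefficient values are illustrative assumptions, not the paper's exact construction.

import numpy as np

def scl_loss(z, y, c_pos=-1.0, c_neg=1.0):
    # z: (N, d) embeddings, y: (N,) integer labels; inner product plays the role of M_sim
    sim = z @ z.T
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)   # positive pairs P(i)
    diff = y[:, None] != y[None, :]                                   # negative pairs N(i)
    # negative coefficients on positive pairs pull same-class embeddings together,
    # positive coefficients on negative pairs push different-class embeddings apart
    return c_pos * sim[same].sum() + c_neg * sim[diff].sum()

# Example usage: z = np.random.randn(10, 4); y = np.repeat(np.arange(5), 2); scl_loss(z, y)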
Consider drawing a batch of tasks {T1, . . . , TNbatch} from a meta-training task distribution D. Each task Tn contains a support set Sn and a query set Qn, where Sn = {(sm, tm)} Nway×Nshot m=1 , Qn = {(qm, um)} Nway×Nquery m=1 , sm, qm ∈ RNin are data samples, and tm, um ∈ {1, ..., Nway} are labels. We denote Nway the number of classes in each task, and {Nshot, Nquery} respectively the number of support and query samples per class. The architecture of our base-model comprises of a convolutional encoder ϕ : RNin → RNf (parameterized by φ), a fully connected linear head w ∈ RNf×Nway , and a Softmax output layer, where Nf is the dimension of the feature space. We denote the kth column of w as wk. Note that the base-model parameters θ consist of φ and w. As shown in Appendix A, both FOMAML and SOMAML adopt a training strategy comprising the inner loop and the outer loop. At the beginning of a meta-training iteration, we sample Nbatch tasks. For each task Tn, we perform inner loop updates using the inner loop loss (c.f. Eq. (2)) evaluated on the support data, and then evaluate the outer loop loss (c.f. Eq. (3)) on the updated base-model using the query data. In the ith step of the inner loop, the parameters {φi−1,wi−1} are updated to {φi,wi} using the multi-class cross entropy loss evaluated on the support dataset Sn as L{φi,wi},Sn = E (s,t)∼Sn Nway∑ j=1 1j=t[− log exp(ϕi(s)⊤wj i)∑Nway k=1 exp(ϕ i(s)⊤wki) ] (2) After Nstep inner loop updates, we compute the outer loop loss using the query data Qn: L{φNstep ,wNstep},Qn = E(q,u)∼Qn [− log exp(ϕ Nstep(q)⊤wu Nstep)∑Nway k=1 exp(ϕ Nstep(q)⊤wkNstep) ] (3) Then, we sum up the outer loop losses of all tasks, and perform gradient descent to update the base-model’s initial parameters {φ0,w0}. To show the supervised contrastiveness entailed in MAML, we adopt an assumption that the Encoder ϕ is Frozen during the Inner Loop (the EFIL assumption) and we discuss the validity of the assumption in Section 2.6. Without loss of generality, we consider training models with MAML with Nbatch = 1 and Nstep = 1, and we discuss the generalized version in Section 2.6. For simplicity, the kth element of model output exp(ϕ(s) ⊤wk 0)∑Nway j=1 exp(ϕ(s) ⊤wj0) (respectively exp(ϕ(q) ⊤wk 1)∑Nway j=1 exp(ϕ(q) ⊤wj1) ) of sample s (respectively q) is denoted as sk (respectively qk). 2.3 INNER LOOP AND OUTER LOOP UPDATE OF LINEAR LAYER AND ENCODER In this section, we primarily focus on the update of parameters in the case of FOMAML. The full derivation and discussion of SOMAML are provided in Appendix B. Inner loop update of the linear layer. In the inner loop, the linear layer w0 is updated to w1 with a learning rate η as shown in Eq. (4) in both FOMAML and SOMAML. In contrast to the example in Figure 1, the columns of the linear layer are added with the weighted sum of the features extracted from support samples (i.e., support features). Compared to wk0, wk1 is pushed towards the support features of the same class (i.e., class k) with strength of 1 − sk, while being pulled away from the support features of different classes with strength of sk. wk 1 = wk 0 − η ∂L{φ,w0},S ∂wk0 = wk 0 + η E (s,t)∼S (1k=t − sk)ϕ(s) (4) Outer loop update of the linear layer. In the outer loop, w0 is updated using the query data with a learning rate ρ. For FOMAML, the final linear layer is updated as follows. w′k 0 = wk 0 − ρ ∂L{φ,w1},Q ∂wk1 = wk 0 + ρ E (q,u)∼Q (1k=u − qk)ϕ(q) (5) Note that the computation of qk requires the inner-loop updated w 1. Generally speaking, Eq. (5) resembles Eq. (4). 
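As a concrete illustration of the linear-layer updates in Eqs. (4) and (5) under the EFIL assumption, the following numpy sketch is our own illustration (not the authors' implementation); the features are assumed to be precomputed by the frozen encoder, and the expectation is implemented as a mean over the samples.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def inner_update_w(w0, support_feats, support_labels, eta):
    # Eq. (4): w1_k = w0_k + eta * E[(1{k=t} - s_k) phi(s)], with s_k computed from w0
    w1 = w0.copy()
    for phi_s, t in zip(support_feats, support_labels):
        s = softmax(phi_s @ w0)
        for k in range(w0.shape[1]):
            w1[:, k] += eta * ((k == t) - s[k]) * phi_s / len(support_labels)
    return w1

def fomaml_outer_update_w(w0, w1, query_feats, query_labels, rho):
    # Eq. (5): the query features are likewise added weightedly to the linear layer,
    # with q_k computed from the inner-loop-updated w1
    w_new = w0.copy()
    for phi_q, u in zip(query_feats, query_labels):
        q = softmax(phi_q @ w1)
        for k in range(w0.shape[1]):
            w_new[:, k] += rho * ((k == u) - q[k]) * phi_q / len(query_labels)
    return w_new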
It is obvious that, in the outer loop, the query features are added weightedly to the linear layer, and the strength of change relates to the output value. In other words, after the outer loop update, the linear layer memorizes the query features of current tasks. This can cause a crosstask interference because in the next inner loop there would be additional inner products between the support features of the next tasks and the query features of the current tasks. Outer loop update of the encoder. Using the chain rule, the gradient of the outer loop loss with respect to φ (i.e., the parameters of the encoder) is given by ∂L{φ,w1},Q ∂φ = E(q,u)∼Q ∂L{φ,w1},Q ∂ϕ(q) ∂ϕ(q) ∂φ + E(s,t)∼S ∂L{φ,w1},Q ∂ϕ(s) ∂ϕ(s) ∂φ , where the second term can be neglected when FOMAML is considered. Below, we take a deeper look at the backpropagated error of one query data (q, u) ∼ Q. The full derivation is provided in Appendix B.2. ∂L{φ,w1},q ∂ϕ(q) = Nway∑ j=1 (qj − 1j=u)wj 0 + η E (s,t)∼S [−( Nway∑ j=1 qjsj) + su + qt − 1t=u]ϕ(s) (6) 2.4 MAML IS A NOISY CONTRASTIVE LEARNER Reformulating the outer loop loss for the encoder as a noisy SCL loss. We can observe from Eq. (6) that the actual loss for the encoder (evaluated on a single query data (q, u) ∼ Q) is as the following. L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj 0⊤ stop gradient ϕ(q) + η E (s,t)∼S [− Nway∑ j=1 qjsj + su + qt − 1t=u]ϕ(s) ⊤ stop gradient ϕ(q) (7) For SOMAML, the range of “stop gradient” in the second term is different: L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj 0⊤ stop gradient ϕ(q) + η E (s,t)∼S [− Nway∑ j=1 qjsj + su + qt − 1t=u] stop gradient ϕ(s)⊤ϕ(q) (8) With these two reformulations, we observe the essential difference between FOMAML and SOMAML is the range of stop gradient. We provide detailed discussion and instinctive illustration in Appendix B.5 on how this explains the phenomenon that SOMAML often leads to faster convergence. To better deliberate the effect of each term in the reformulated outer loop loss, we define the first term in Eq. (7) or Eq. (8) as interference term, the second term as noisy contrastive term, and the coefficients − ∑Nway j=1 qjsj + su + qt − 1t=u as contrastive coefficients. Understanding the interference term. In the case of j = u, the outer loop loss forces the model to minimize (qj − 1)wj0⊤ϕ(q). This can be problematic because (a) at the beginning of training, w0 is assigned with random values and (b) w0 is added with query features of previous tasks as shown in Eq. (5). Consequently, ϕ(q) is pushed to a direction composed of previous query features or to a random direction, introducing an unnecessary cross-task interference or an initialization interference that slows down the training of the encoder. Noting that the cross-task interference also occurs at the testing period, since, at testing stage, w0 is already added with query features of training tasks, which can be an interference to testing tasks. Understanding the noisy contrastive term. When the query and support data have the same label (i.e., u = t), e.g., class 1, the contrastive coefficients becomes − ∑Nway j=2 qjsj − q1s1 + s1 + q1− 1, which is − ∑Nway j=2 qjsj − (1 − q1)(1 − s1) < 0. This indicates the encoder would be updated to maximize the inner product between ϕ(q) and the support features of the same class. However, when the query and support data are in different classes, the sign of the contrastive coefficient can sometimes be negative. 
The outer loop loss thus cannot well contrast the query features against the support features of different classes, making this loss term not an ordinary SCL loss. To better illustrate the influence of the interference term and the noisy contrastive term, we provide an ablation experiment in Appendix B.7. Theorem 1 below formally connects MAML to SCL. Theorem 1 With the EFIL assumption, FOMAML is a noisy SCL algorithm. With assumptions of (a) EFIL and (b) a single inner-loop update, SOMAML is a noisy SCL algorithm. Proof: For FOMAML, both Eq. (7) (one inner loop update step) and Eq. (26) (multiple inner loop update steps) follows Definition 1. For SOMAML, Eq. (8) follows Definition 1. Introduction of the zeroing trick makes Eq. (7) and Eq. (8) SCL losses. To tackle the interference term and make the contrastive coefficients more accurate, we introduce the zeroing trick: setting the w0 to be zero after each outer loop update, as shown in Appendix A. With the zeroing trick, the original outer loop loss (of FOMAML) becomes L{φ,w1},q = η E (s,t)∼S (qt − 1t=u)ϕ(s) ⊤ stop gradient ϕ(q) (9) For SOMAML, the original outer loop loss becomes L{φ,w1},q = η E (s,t)∼S (qt − 1t=u) stop gradient ϕ(s)⊤ϕ(q) (10) The zeroing trick brings two nontrivial effects: (a) eliminating the interference term in both Eq. (7) and Eq. (8); (b) making the contrastive coefficients follow SCL. For (b), since all the predictive values of support data become the same, i.e., sk = 1Nway , the contrastive coefficient becomes qt − 1t=u, which is negative when the support and query data have the same label, and positive otherwise. With the zeroing trick, the contrastive coefficient follows the SCL loss, as summarized below. Corollary 1 With mild assumptions of (a) EFIL, (b) a single inner-loop update and (c) training with the zeroing trick (i.e., the linear layer is zeroed at the end of each outer loop), both FOMAML and SOMAML are SCL algorithms. Proof: Both Eq. (9) and Eq. (10) follow Definition 1. The introduction of the zeroing trick makes the relationship between MAML and SCL more straightforward. Generally speaking, by connecting MAML and SCL, we can better understand other MAML-based meta-learning studies. 2.5 RESPONSES TO QUESTIONS IN SECTION 1 In what sense does MAML guide any model to learn general-purpose representations? Under the EFIL assumption, MAML is a noisy SCL algorithm in a classification paradigm. The effectiveness of MAML in enabling models to learn general-purpose representations can be attributed to the SCL characteristics of MAML. How do the inner and outer loops in the training mechanism of MAML collaboratively prompt to achieve so? MAML adopts the inner and outer loops to perform noisy SCL sequentially. In the inner loop, the features of support data are memorized by w via inner-loop update. In the outer loop, the softmax output of the query data thus contains the inner products between the support features and the query feature. What is the role of support and query data, and how do they interact with each other? We show that the original loss in MAML can be reformulated as a loss term containing the inner products of the embedding of the support and query data. In FOMAML, the support features act as the reference, while the query features are updated to move towards the support features of the same class and against those of the different classes. 2.6 GENERALIZATION OF OUR ANALYSIS In Appendix C, we provide the analysis where Nbatch ≥ 1 and Nstep ≥ 1. 
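Before turning to the generalized analysis, a small numeric check makes the effect of the zeroing trick on the contrastive coefficients of Eqs. (7) and (9) concrete. The probability vectors below are arbitrary illustrations: both the query (label 0) and the support sample (label 1) are confidently assigned to a third class, so the vanilla coefficient takes the "wrong" sign while the zeroing-trick coefficient does not.

import numpy as np

q = np.array([0.02, 0.02, 0.90, 0.03, 0.03])   # softmax output of a query sample, true label u = 0
s = np.array([0.02, 0.02, 0.90, 0.03, 0.03])   # softmax output of a support sample, label t = 1
u, t = 0, 1                                     # different classes, so SCL should push apart

coef_vanilla = -(q @ s) + s[u] + q[t] - (t == u)   # Eq. (7): about -0.77, pulling the pair together
coef_zeroing = q[t] - (t == u)                      # Eq. (9): 0.02 > 0, pushing apart as SCL prescribes
print(coef_vanilla, coef_zeroing)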
For the EFIL assumption, it can hardly be dropped because the behavior of the updated encoder is intractable. Besides, Raghu et al. (2020) show that the representations of intermediate layers do not change notably during the inner loop of MAML, and thus it is understood that the main function of the inner loop is to change the final linear layer. Furthermore, the EFIL assumption is empirically reasonable, since previous works (Raghu et al., 2020; Lee et al., 2019; Bertinetto et al., 2019; Liu et al., 2020) yield comparable performance while leaving the encoder untouched during the inner loop. With our analysis, one may notice that MAML is approximately a metric-based few-shot learning algorithm. From a high-level perspective, under the EFIL assumption, second-order MAML is similar to metric-based few-shot learning algorithms, such as MatchingNet (Vinyals et al., 2016), Prototypical network (Snell et al., 2017), and Relation network (Sung et al., 2018). The main difference lies in the metric and the way prototypes are constructed. Our work follows the setting adopted by MAML, such as using negative LogSoftmax as objective function, but we can effortlessly generalize our analysis to a MSE loss as had been shown in Figure 1. As a result, our work points out a new research direction in improving MAML by changing the objective functions in the inner and the outer loops, e.g., using MSE for the inner loop but negative LogSoftmax for the outer loop. Besides, in MAML, we often obtain the logits by multiplying the features by the linear weight w. Our work implies future direction as to alternatively substitute this inner product operation with other metrics or other similarity measurements such as cosine similarity or negative Euclidean distance. 3 EXPERIMENTAL RESULTS In this section, we provide empirical evidence of the supervised contrastiveness of MAML and show that zero-initialization of w0, reduction in the initial norm of w0, or the application of zeroing trick can speed up the learning profile. This is applicable to both SOMAML and FOMAML. 3.1 SETUP We conduct our experiments on the mini-ImageNet dataset (Vinyals et al., 2016; Ravi & Larochelle, 2017) and the Omniglot dataset (Lake et al., 2015). For the results on the Omniglot dataset, please refer to Appendix E. For the mini-ImageNet, it contains 84 × 84 RGB images of 100 classes from the ImageNet dataset with 600 samples per class. We split the dataset into 64, 16 and 20 classes for training, validation, and testing as proposed in (Ravi & Larochelle, 2017). We do not perform hyperparameter search and thus are not using the validation data. For all our experiments of applying MAML into few-shot classification problem, where we adopt two experimental settings: 5-way 1- shot and 5-way 5-shot, with the batch size Nbatch being 4 and 2, respectively (Finn et al., 2017). The few-shot classification accuracy is calculated by averaging the results over 400 tasks in the test phase. For model architecture, optimizer and other experimental details, please refer to Appendix D.1. 3.2 COSINE SIMILARITY ANALYSIS VERIFIES THE IMPLICIT CONTRASTIVENESS IN MAML In Section 2, we show that the encoder is updated so that the query features are pushed towards the support features of the same class and pulled away from those of different classes. Here we verify this supervised contrastiveness experimentally. Consider a relatively overfitting scenario where there are five classes of images and for each class there are 20 support images and 20 query images. 
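The separation reported in Figure 2, and in the analysis continued below, is measured with a class-averaged cosine-similarity matrix between query and support features. A minimal sketch of this measurement, with feature extraction left as a placeholder, is given here as our own illustration.

import numpy as np

def class_similarity_matrix(support_feats, support_labels, query_feats, query_labels, n_way=5):
    # feats: (N, d) float arrays; labels: (N,) integer arrays in {0, ..., n_way - 1}
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    s, q = normalize(support_feats), normalize(query_feats)
    sim = np.zeros((n_way, n_way))
    for i in range(n_way):            # query class
        for j in range(n_way):        # support class
            sim[i, j] = (q[query_labels == i] @ s[support_labels == j].T).mean()
    return sim   # a strong diagonal relative to the off-diagonal indicates well-separated classes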
We fix the support and query set (i.e. the data is not resampled every iteration) to verify the concept that the support features work as positive and negative samples. Channel shuffling is used to avoid the undesirable channel memorization effect (Jamal & Qi, 2019; Rajendran et al., 2020). We train the model using FOMAML and examine how well the encoder can separate the data of different classes in the feature space by measuring the averaged cosine similarities between the features of each class. The results are averaged over 10 random seeds. As shown in the top row of Figure 2, the model trained with MAML learns to separate the features of different classes. Moreover, the contrast between the diagonal and the off-diagonal entries of the heatmap increases as we remove the initialization interference (by zero-initializing w0, shown in the middle row) and remove the cross-task interference (by applying the zeroing trick, shown in the bottom row). The result agrees with our analysis that MAML implicitly contains the interference term which can impede the encoder from learning a good feature representation. For experiments on semantically similar classes of images, the result is shown in Section D.3. 3.3 ZEROING LINEAR LAYER AT TESTING TIME INCREASES TESTING ACCURACY Before starting our analysis on benchmark datasets, we note that the cross-task interference can also occur during meta-testing. In the meta-testing stage, the base-model is updated in the inner loop using support data S and then the performance is evaluated using query data Q, where S and Q are drawn from a held-out, unseen meta-testing dataset. Recall that at the end of the outer loop (in meta-training stage), the query features are added weightedly to the linear layer w0. In other words, at the beginning of meta-testing, w0 is already added with the query features of previous training tasks, which can drastically influence the performance on the unseen tasks. To validate this idea, we apply the zeroing trick at meta-testing time (which we refer to zeroing w0 at the beginning of the meta-testing time) and show such trick increases the testing accuracy of the model trained with FOMAML. As illustrated in Figure 3, compared to directly entering meta-testing (i.e. the subplot at the left), additionally zeroing the linear layer at the beginning of each meta-testing time (i.e. the subplot at the right) increases the testing accuracy of the model whose linear layer is randomly initialized or zero-initialized (denoted by the red and orange curves, respectively). And the difference in testing performance sustains across the whole training session. In the following experiments, we evaluate the testing performance only with zeroing the linear layer at the beginning of the meta-testing stage. By zeroing the linear layer, the potential interference brought by the prior (of the linear layer) is ignored. Then, we can fully focus on the capacity of the encoder in learning a good feature representation. 3.4 SINGLE INNER LOOP UPDATE SUFFICES WHEN USING THE ZEROING TRICK In Eq. (4) and Eq. (21), we show that the features of the support data are added to the linear layer in the inner loop. Larger number of inner loop update steps can better offset the effect of interference brought by a non-zeroed linear layer. In other words, when the models are trained with the zeroing trick, a larger number of inner loop updates can not bring any benefit. We validate this intuition in Figure 4 under a 5-way 1-shot setting. 
In the original FOMAML, the models trained with a single inner loop update step (denoted as red curve) converge slower than those trained with update step of 7 (denoted as purple curve). On the contrary, when the models are trained with the zeroing trick, models with various inner loop update steps converge at the same speed. 3.5 EFFECT OF INITIALIZATION AND THE ZEROING TRICK In Eq. (7), we observe an interference derived from the historical task features or random initialization. We validate our formula by examining the effects of (1) reducing the norm of w0 at initialization and (2) applying the zeroing trick. From Figure 5, the performance is higher when the initial norm of w0 is lower. Compared to random initialization, reducing the norm via down-scaling w0 by 0.7 yields visible differences. Besides, the testing accuracy of MAML with zeroing trick (the purple curve) outperforms that of original MAML. 4 CONCLUSION This paper presents an extensive study to demystify how the seminal MAML algorithm guides the encoder to learn a general-purpose feature representation and how support and query data interact. Our analysis shows that MAML is implicitly a supervised contrastive learner using the support features as positive and negative samples to direct the update of the encoder. Moreover, we unveil an interference term hidden in MAML originated from the random initialization or cross-task interaction, which can impede the representation learning. Driven by our analysis, removing the interference term by a simple zeroing trick renders the model unbiased to seen or unseen tasks. Furthermore, we show constant improvements in the training and testing profiles with this zeroing trick, with experiments conducted on the mini-ImageNet and Omniglot datasets. APPENDIX A ORIGINAL MAML AND MAML WITH THE ZEROING TRICK Algorithm 1 Second-order MAML Require: Task distribution D Require: η, ρ: inner loop and outer loop learning rates Require: Randomly initialized base-model parameters θ 1: while not done do 2: Sample tasks {T1, . . . TNbatch} from D 3: for n = 1, 2, . . . , Nbatch do 4: {Sn, Qn} ← sample from Tn 5: θn = θ 6: for i = 1, 2, . . . , Nstep do 7: θn ← θn − η∇θnLθn,Sn 8: end for 9: end for 10: Update θ ← θ − ρ ∑Nbatch n=1 ∇θLθn,Qn 11: end while Algorithm 2 First-order MAML Require: Task distribution D Require: η, ρ: inner loop and outer loop learning rates Require: Randomly initialized base-model parameters θ 1: while not done do 2: Sample tasks {T1, . . . TNbatch} from D 3: for n = 1, 2, . . . , Nbatch do 4: {Sn, Qn} ← sample from Tn 5: θn = θ 6: for i = 1, 2, . . . , Nstep do 7: θn ← θn − η∇θnLθn,Sn 8: end for 9: end for 10: Update θ ← θ − ρ ∑Nbatch n=1 ∇θnLθn,Qn 11: end while Algorithm 3 Second-order MAML with the zeroing trick Require: Task distribution D Require: η, ρ: inner loop and outer loop learning rates Require: Randomly initialized base-model parameters θ 1: Set w← 0 (the zeroing trick) 2: while not done do 3: Sample tasks {T1, . . . TNbatch} from D 4: for n = 1, 2, . . . , Nbatch do 5: {Sn, Qn} ← sample from Tn 6: θn = θ 7: for i = 1, 2, . . . , Nstep do 8: θn ← θn − η∇θnLθn,Sn 9: end for 10: end for 11: Update θ ← θ − ρ ∑Nbatch n=1 ∇θLθn,Qn 12: Set w← 0 (the zeroing trick) 13: end while B SUPPLEMENTARY DERIVATION In this section, we provide the full generalization and further discussion that supplement the main paper. We consider the case of Nbatch = 1 and Nstep = 1 under the EFIL assumption. We provide the outer loop update of the linear layer under SOMAML in Section B.1. 
Next, we offer the full derivation of the outer loop update of the encoder in Section B.2. Then, we reformulate the outer loop loss for the encoder in both FOMAML and SOMAML in Section B.3 and Section B.4. Afterward, we discuss the main difference in FOMAML and SOMAML in detail in Section B.5. Finally, we show the performance of the models trained using the reformulated loss in Section B.6. B.1 THE DERIVATION OF OUTER LOOP UPDATE FOR THE LINEAR LAYER USING SOMAML Here, we provide the complete derivation of the outer loop update for the linear layer. Using SOMAML with support set S and query set Q, the update of the linear layer follows w′0k = wk 0 − ρ ∂L{φ,w1},Q ∂wk0 = wk 0 − ρ Nway∑ m=1 ∂wm 1 ∂wk0 · ∂L{φ,w1},Q ∂wm1 = wk 0 − ρ∂wk 1 ∂wk0 · ∂L{φ,w1},Q ∂wk1 − ρ Nway∑ m ̸=k ∂wm 1 ∂wk0 · ∂L{φ,w1},Q ∂wm1 = wk 0 + ρ[I − η E (s,t)∼S (sk − s2k)ϕ(s)ϕ(s)T ] E (q,u)∼Q (1k=u − qk)ϕ(q) + ρη ∑ m ̸=k [ E (s,t)∼S (smsk)ϕ(s)ϕ(s)T ][ E (q,u)∼Q (1m=u − qm)ϕ(q)] = wk 0 + ρ[I − η E (s,t)∼S skϕ(s)ϕ(s)T ] E (q,u)∼Q (1k=u − qk)ϕ(q) + ρη Nway∑ m=1 [ E (s,t)∼S (smsk)ϕ(s)ϕ(s)T ][ E (q,u)∼Q (1m=u − qm)ϕ(q)] (11) We can further simplify Eq. (11) to Eq. (12) with the help of the zeroing trick. w′0k = ρ[I − η E (s,t)∼S skϕ(s)ϕ(s)T ] E (q,u)∼Q (1k=u − qk)ϕ(q) (12) This is because the zeroing trick essentially turns the logits of all support samples to zero, and consequently the predicted probability (softmax) output sm becomes 1Nway for all channel m. Therefore, the third term in Eq. (11) turns out to be zero (c.f. Eq. (13)). The equality of Eq. (13) holds since the summation of the (softmax) outputs is one. ρη N2way Nway∑ m=1 [ E (s,t)∼S ϕ(s)ϕ(s)T ][ E (q,u)∼Q (1m=u − qm)ϕ(q)] = ρη N2way [ E (s,t)∼S ϕ(s)ϕ(s)T ] E (q,u)∼Q ϕ(q) Nway∑ m=1 (1m=u − qm) = 0 (13) B.2 THE FULL DERIVATION OF THE OUTER LOOP UPDATE OF THE ENCODER. As the encoder ϕ is parameterized by φ, the outer loop gradient with respect to φ is given by ∂L{φ,w1},Q ∂φ = E(q,u)∼Q ∂L{φ,w1},Q ∂ϕ(q) ∂ϕ(q) ∂φ +E(s,t)∼S ∂L{φ,w1},Q ∂ϕ(s) ∂ϕ(s) ∂φ . We take a deeper look at the backpropagated error ∂L{φ,w1},Q ∂ϕ(q) of the feature of one query data (q, u) ∼ Q, based on the following form: − ∂L{φ,w1},Q ∂ϕ(q) = wu 1 − Nway∑ j=1 (qjwj 1) = Nway∑ j=1 (1j=u − qj)wj 1 = Nway∑ j=1 (1j=u − qj)wj 0 + η Nway∑ j=1 [1j=u − qj ][ E (s,t)∼S (1j=t − sj)ϕ(s)] = Nway∑ j=1 (1j=u − qj)wj 0 + η E (s,t)∼S [( Nway∑ j=1 qjsj)− su − qt + 1t=u]ϕ(s) (14) B.3 REFORMULATION OF THE OUTER LOOP LOSS FOR THE ENCODER AS NOISY SCL LOSS. We can derive the actual loss (evaluated on a single query data (q, u) ∼ Q) that the encoder uses under FOMAML scheme as follows: L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj0⊤ stop gradient ϕ(q)− η E (s,t)∼S [( Nway∑ j=1 qjsj)− su − qt + 1t=u]ϕ(s) ⊤ stop gradient ϕ(q) (15) For SOMAML, we need to additionally plug Eq. (4) into Eq. (3). L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj0⊤ stop gradient ϕ(q)− η E (s,t)∼S [( Nway∑ j=1 qjsj)− su − qt + 1t=u] stop gradient ϕ(s)⊤ϕ(q) (16) B.4 INTRODUCTION OF THE ZEROING TRICK MAKES EQ. (7) AND EQ. (8) SCL LOSSES. Apply the zeroing trick to Eq. (7) and Eq. (8), we can derive the actual loss Eq. (17) and Eq. (18) that the encoder follows. L{φ,w1},q = η E (s,t)∼S (qt − 1t=u)ϕ(s) ⊤ stop gradient ϕ(q) (17) L{φ,w1},q = η E (s,t)∼S (qt − 1t=u) stop gradient ϕ(s)⊤ϕ(q) (18) With these two equations, we can observe the essential difference in FOMAML and SOMAML is the range of stopping gradients. We would further discuss the implication of different ranges of gradient stopping in Appendix B.5. 
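To make the difference in the range of stop gradient concrete, the following PyTorch-style sketch is our own illustration; the features are random stand-ins for encoder outputs and the coefficient value is arbitrary.

import torch

phi_s = torch.randn(8, requires_grad=True)   # stands in for the support feature phi(s)
phi_q = torch.randn(8, requires_grad=True)   # stands in for the query feature phi(q)
coef = 0.3                                   # an arbitrary stand-in for (q_t - 1_{t=u})

loss_fomaml = coef * (phi_s.detach() @ phi_q)   # Eq. (17): phi(s) sits inside the stop gradient
loss_somaml = coef * (phi_s @ phi_q)            # Eq. (18): only the coefficient is stopped

g_s_fo, g_q_fo = torch.autograd.grad(loss_fomaml, [phi_s, phi_q], allow_unused=True)
g_s_so, g_q_so = torch.autograd.grad(loss_somaml, [phi_s, phi_q])
# g_s_fo is None (Eq. (19): no gradient reaches phi(s) under FOMAML), whereas
# g_s_so equals coef * phi_q and g_q_so equals coef * phi_s (Eq. (20)): SOMAML moves both features.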
B.5 DISCUSSION ABOUT THE DIFFERENCE BETWEEN FOMAML AND SOMAML Central to the mystery of MAML is the difference between FOMAML and SOMAML. Plenty of work is dedicated to approximating or estimating the second-order derivatives in the MAML algorithm in a more computational-efficient or accurate manner (Song et al., 2020; Rothfuss et al., 2019; Liu et al., 2019). With the EFIL assumption and our analysis through connecting SCL to these algorithms, we found that we can better understand the distinction between FOMAML and SOMAML from a novel perspective. To better understand the difference, we can compare Eq. (7) with Eq. (8) or compare Eq. (9) with Eq. (10). To avoid being distracted by the interference terms, we provide the analysis of the latter. The main difference between Eq. (9) and Eq. (10) is the range of gradient stopping and we will show that this difference results in a significant distinction in the feature space. To begin with, by chain rule, we have ∂L∂φ = E(q,u)∼Q ∂L ∂ϕ(q) ∂ϕ(q) ∂φ + E(s,t)∼S ∂L ∂ϕ(s) ∂ϕ(s) ∂φ . As we specifically want to know how the encoded features are updated given different losses, we can look at the terms ∂L∂ϕ(q) and ∂L ∂ϕ(s) by differentiating Eq. (9) and Eq. (10) with respect to the features of query data q and support data s, respectively. FOMAML: ∂L ∂ϕ(q) = η E (s,t)∼S (qt − 1t=u)ϕ(s) ∂L ∂ϕ(s) = 0 (19) SOMAML: ∂L ∂ϕ(q) = η E (s,t)∼S (qt − 1t=u)ϕ(s) ∂L ∂ϕ(s) = η E (s,t)∼S (qt − 1t=u)ϕ(q) (20) Obviously, as the second equation in Eq (19) is zero, we know that in FOMAML, the update of the encoder does consider the change of the support features. The encoder is updated to move the query features closer to support features of the same class and further to support features of different classes in FOMAML. On the contrary, we can tell from the above equations that in SOMAML, the encoder is updated to make support features and query features closer if both come from the same class and make support features and query features further if they come from different classes. We illustrate the difference in Figure 6. For simplicity, we do not consider the scale of the coefficients but their signs. The subplot on the left indicates that this FOMAML loss guides the encoder to be updated so that the feature of the query data moves 1) towards the support feature of the same class, and 2) against the support features of the different classes. On the other hand, the SOMAML loss guides the encoder to be updated so that 1) when the support data and query data belong to the same class, their features move closer, and otherwise, their features move further. This generally explains why models trained using SOMAML generally converge faster than those trained using FOMAML. B.6 EXPLICITLY COMPUTING THE REFORMULATING LOSS USING EQ. (7) AND EQ. (8) Under the EFIL assumption, we show that MAML can be reformulated as a loss taking noisy SCL form. Below, we consider a setting of 5-way 1-shot mini-ImageNet few-shot classification task, under the condition of no inner loop update of the encoder. (This is the assumption that our derivation heavily depends on. It means that we now only update the encoder in the outer loop.) We empirically show that explicitly computing the reformulated losses of Eq. (7), Eq. (17) and Eq. (18) yield almost the same curves as MAML (with the EFIL assumption). Please note that the reformulated losses are used to update the encoders, for the linear layer w0, we explicitly update it using Eq. (5). 
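A minimal sketch of explicitly evaluating the reformulated FOMAML loss of Eq. (7) for a single query sample is shown below, with the stop-gradient factors treated as constants. This is a numpy stand-in for illustration only; the actual experiment requires an autodiff framework so that phi(q) receives the gradient.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reformulated_fomaml_loss(w0, w1, phi_q, u, support_feats, support_labels, eta):
    q = softmax(phi_q @ w1)                                   # q_j: computed with the adapted w1
    # interference term: inner products between phi(q) and the (stopped) columns of w0
    interference = sum((q[j] - (j == u)) * (w0[:, j] @ phi_q) for j in range(w0.shape[1]))
    # noisy contrastive term: inner products between phi(q) and the (stopped) support features
    contrastive = 0.0
    for phi_s, t in zip(support_feats, support_labels):
        s = softmax(phi_s @ w0)                               # s_j: computed with the initial w0
        coef = -(q @ s) + s[u] + q[t] - (t == u)
        contrastive += coef * (phi_s @ phi_q) / len(support_labels)
    return interference + eta * contrastive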
Note that although the performance models training using FOMAML, FOMAML with the zeroing trick, and SOMAML converge to similar testing accuracy, the overall testing performance during the training process is distinct. The results are averaged over three random seeds. B.7 THE EFFECT OF INTERFERENCE TERM AND NOISY CONTRASTIVE TERM Reformulating the loss of MAML into a noisy SCL form enables us to further investigate the effects brought by the interference term and the noisy contrastive term, which we presume both effects to be negative. To investigate the effect of the interference term, we simply consider the loss adopted by firstorder MAML as in Eq. (7) but with the interference term dropped (denoted as “n1 ×”). As for the noisy contrastive term, the noise comes from the fact that “when the query and support data are in different classes, the sign of the contrastive coefficient can sometimes be negative”, as being discussed in Section 2.4. To mitigate this noise, we consider the loss in Eq. (7) with the term −( ∑Nway j=1 qjsj) + su dropped from the contrastive coefficient, and denote it as “n2 ×”. On the other hand, we also implement a loss with “n1 ×, n2 ×”, which is actually Eq. (9). We adopt the same experimental setting as Section B.6. In Figure 8, we show the testing profiles of the original reformulated loss (i.e., the curve in red, labeled as “n1 ✓, n2 ✓”), dropping the interference term (i.e., the curve in orange, labeled as “n1 ×, n2 ✓”), dropping the noisy part of the contrastive term (i.e., the curve in green, labeled as “n1 ✓, n2 ×”) or dropping both (i.e., the curve in blue, labeled as “n1 ×, n2 ×”). We can see that either dropping the interference term or dropping dropping the noisy part of contrastive coefficients yield profound benefit. To better understand how noisy is the noisy contrastive term, i.e., how many times the sign of the contrastive coefficient is negative when the query and support data are in different classes, we explicitly record the ratio of the contrastive term being positive or negative. We adopt the same experimental setting as Section B.6. The result is shown in Figure 9. When the zeroing trick is applied, the ratio of contrastive term being negative (shown as the red curve on the right subplot) is 0.2, which is 1Nway where Nway = 5 in our setting. On the other hand, when the zeroing trick is not applied, the ratio of contrastive term being negative (shown as the orange color on the right subplot) is larger than 0.2. This additional experiment necessitates the application of the zeroing trick. C A GENERALIZATION OF OUR ANALYSIS In this section, we derive a more general case of the encoder update in the outer loop. We consider drawing Nbatch tasks from the task distribution D and having Nstep update steps in the inner loop while keeping the EFIL assumption. To derive a more general case, we use wki,n to denote the kth column of wi,n, where wi,n is updated from w0 using support data Sn for i inner-loop steps. For simplicity, the kth channel softmax predictive output exp(ϕ(s) ⊤wk i,n)∑Nway j=1 exp(ϕ(s) ⊤wji,n) of sample s (using wi−1,n) is denoted as si,nk . Inner loop update for the linear layer We yield the inner loop update for the final linear layer in Eq. (21) and Eq. (22). 
$$w^k_{i,n} = w^k_{i-1,n} - \eta \frac{\partial L_{\{\varphi, w_{i-1,n}\}, S_n}}{\partial w^k_{i-1,n}} = w^k_{i-1,n} + \eta \, \mathbb{E}_{(s,t)\sim S_n}\!\left(\mathbb{1}_{k=t} - s^{i-1,n}_k\right)\phi(s) \quad (21)$$
$$w^k_{N_{step},n} = w^k_0 + \eta \sum_{i=1}^{N_{step}} \mathbb{E}_{(s,t)\sim S_n}\!\left(\mathbb{1}_{k=t} - s^{i-1,n}_k\right)\phi(s) \quad (22)$$
Outer loop update for the linear layer. We derive the outer loop update for the linear layer in SOMAML, denoting I = {1, 2, ..., N_{way}}:
$$w'^k_0 = w^k_0 - \rho \sum_{n=1}^{N_{batch}} \frac{\partial L_{\{\varphi, w_{N_{step},n}\}, Q_n}}{\partial w^k_0} = w^k_0 - \rho \sum_{n=1}^{N_{batch}} \sum_{p_0=k,\, p_1\in I, \ldots,\, p_{N_{step}}\in I} \left[\left(\prod_{i=0}^{N_{step}-1} \frac{\partial w^{p_{i+1}}_{i+1,n}}{\partial w^{p_i}_{i,n}}\right) \frac{\partial L_{\{\varphi, w_{N_{step},n}\}, Q_n}}{\partial w^{p_{N_{step}}}_{N_{step},n}}\right] \quad (23)$$
When it comes to FOMAML, we have
$$w'^k_0 = w^k_0 - \rho \sum_{n=1}^{N_{batch}} \frac{\partial L_{\{\varphi, w_{N_{step},n}\}, Q_n}}{\partial w^k_{N_{step},n}} = w^k_0 + \rho \sum_{n=1}^{N_{batch}} \mathbb{E}_{(q,u)\sim Q_n}\!\left(\mathbb{1}_{k=u} - q^{N_{step},n}_k\right)\phi(q) \quad (24)$$
Outer loop update for the encoder. We derive the outer loop update of the encoder under FOMAML as below. We consider the back-propagated error of the feature of one query sample (q, u) ∼ Q_n. Note that the third equality below holds by leveraging Eq. (21).
$$-\frac{\partial L_{\{\varphi, w_{N_{step},n}\}, Q_n}}{\partial \phi(q)} = w^u_{N_{step},n} - \sum_{i=1}^{N_{way}} q^{N_{step},n}_i w^i_{N_{step},n} = \sum_{i=1}^{N_{way}} \left(\mathbb{1}_{i=u} - q^{N_{step},n}_i\right) w^i_{N_{step},n}$$
$$= \sum_{i=1}^{N_{way}} \left(\mathbb{1}_{i=u} - q^{N_{step},n}_i\right)\left[w^i_0 + \eta \sum_{p=1}^{N_{step}} \mathbb{E}_{(s,t)\sim S_n}\!\left(\mathbb{1}_{i=t} - s^{p-1,n}_i\right)\phi(s)\right]$$
$$= \sum_{i=1}^{N_{way}} \left(\mathbb{1}_{i=u} - q^{N_{step},n}_i\right) w^i_0 + \eta \, \mathbb{E}_{(s,t)\sim S_n} \sum_{p=1}^{N_{step}} \left[\left(\sum_{j=1}^{N_{way}} q^{N_{step},n}_j s^{p-1,n}_j\right) - s^{p-1,n}_u - q^{N_{step},n}_t + \mathbb{1}_{t=u}\right]\phi(s) \quad (25)$$
Reformulating the Outer Loop Loss for the Encoder as a Noisy SCL Loss. From Eq. (25), we can derive the generalized loss (of one query sample (q, u) ∼ Q_n) that the encoder uses under the FOMAML scheme.
$$L_{\{\varphi, w_{N_{step},n}\}, q} = \sum_{i=1}^{N_{way}} \left(q^{N_{step},n}_i - \mathbb{1}_{i=u}\right) \underbrace{w^{i\top}_0}_{\text{stop gradient}} \phi(q) - \eta \, \mathbb{E}_{(s,t)\sim S_n} \sum_{p=1}^{N_{step}} \left[\left(\sum_{j=1}^{N_{way}} q^{N_{step},n}_j s^{p-1,n}_j\right) - s^{p-1,n}_u - q^{N_{step},n}_t + \mathbb{1}_{t=u}\right] \underbrace{\phi(s)^\top}_{\text{stop gradient}} \phi(q) \quad (26)$$
D EXPERIMENTS ON THE MINI-IMAGENET DATASET
D.1 EXPERIMENTAL DETAILS ON THE MINI-IMAGENET DATASET
The model architecture contains four basic blocks and one fully connected linear layer, where each block comprises a convolution layer with a kernel size of 3 × 3 and a filter size of 64, batch normalization, a ReLU nonlinearity, and 2 × 2 max-pooling. The models are trained with the softmax cross-entropy loss using the Adam optimizer with an outer loop learning rate of 0.001 (Antoniou et al., 2019). The inner loop step size η is set to 0.01. The models are trained for 30000 iterations (Raghu et al., 2020). The results are averaged over four random seeds, and we use the shaded region to indicate the standard deviation. Each experiment is run on either a single NVIDIA 1080-Ti or a V100 GPU. The detailed implementation is based on Long (2018) (MIT License).
D.2 THE EXPERIMENTAL RESULT OF SOMAML
The results with SOMAML are shown in Figure 10. Note that, since longer training may eventually overcome the noise factor and reach performance similar to that of the zeroing trick, the benefit of the zeroing trick is best seen in its faster convergence compared to vanilla MAML.
D.3 COSINE SIMILARITY ANALYSIS ON SEMANTICALLY SIMILAR CLASSES VERIFIES THE IMPLICIT CONTRASTIVENESS IN MAML
In Figure 2, we randomly sample five classes of images under each random seed. Given the rich diversity of the classes in mini-ImageNet, we can consider the five selected classes to be semantically dissimilar, or independent, for each random seed. Here, we also provide the experimental outcomes using a dataset composed of five semantically similar classes selected from the mini-ImageNet dataset: French bulldog, Saluki, Walker hound, African hunting dog, and Golden retriever.
As in the original setting, we train the model using FOMAML and average the results over ten random seeds. As shown in Figure 11, the result is consistent with Figure 2. In conclusion, we show that the supervised contrastiveness is manifested with the application of the zeroing trick even when a semantically similar dataset is considered.
D.4 EXPERIMENTAL RESULTS WITH A LARGER NUMBER OF SHOTS
To empirically verify whether our theoretical derivation generalizes to settings with a larger number of shots, we conduct an experiment on a 5-way 25-shot classification task using FOMAML with four random seeds, again adopting mini-ImageNet as the example dataset. As shown in Figure 12, models trained with the zeroing trick again yield the best performance, consistent with our theoretical result that MAML with the zeroing trick is SCL without noise and interference.
D.5 THE ZEROING TRICK MITIGATES THE CHANNEL MEMORIZATION PROBLEM
The channel memorization problem (Jamal & Qi, 2019; Rajendran et al., 2020) is a known issue occurring in a non-mutually-exclusive task setting, i.e., the task-specific class-to-label assignment is not random, and thus the label can be inferred from the query data alone (Yin et al., 2020). Consider a 5-way K-shot experiment where the total number of training classes is 5 × L. Now we construct tasks by assigning the label t to a class sampled from class tL to (t + 1)L. It is conceivable that the model will learn to directly map the query data to the label without using the information of the support data and thus fail to generalize to unseen tasks. This phenomenon can be explained from the perspective that the tth column of the final linear layer already accumulates the query features of the tLth to (t + 1)Lth classes. Zeroing the final linear layer implicitly forces the model to use the information imprinted from the support features to infer the label and thus mitigates this problem. We use the mini-ImageNet dataset and consider the case of L = 12. As shown in Figure 13, the zeroing trick prevents the model from channel memorization, whereas zero-initialization of the linear layer only helps at the beginning of training. Besides, the performance of models trained with the zeroing trick under this non-mutually-exclusive task setting matches that under the conventional few-shot setting shown in Figure 5. As the zeroing trick clears the final linear layer and equalizes the values of the logits, our result essentially accords with Jamal & Qi (2019), who propose a regularizer that maximizes the entropy of the predictions of the meta-initialized model.
E EXPERIMENTS ON THE OMNIGLOT DATASET
Omniglot is a hand-written character dataset containing 1623 character classes, each with 20 samples drawn by different people (Lake et al., 2015). The dataset is split into training (1028 classes), validation (172 classes), and testing (423 classes) sets (Vinyals et al., 2016). Since we follow Finn et al. (2017) for setting hyperparameters, we do not use the validation data. The character images are resized to 28 × 28. For all our experiments, we adopt two experimental settings: 5-way 1-shot and 5-way 5-shot, where the batch size Nbatch is 32 and Nquery is 15 in both cases (Finn et al., 2017). The inner loop learning rate η is 0.4. The models are trained for 3000 iterations using FOMAML or SOMAML. The few-shot classification accuracy is calculated by averaging the results over 1000 tasks at the test stage.
The model architecture follows the one used for mini-ImageNet, except that we replace the convolution-plus-max-pooling blocks with strided convolutions, as in Finn et al. (2017). The loss function, optimizer, and outer loop learning rate are the same as those used in the experiments on mini-ImageNet. Each experiment is run on a single NVIDIA 1080-Ti GPU. The results are averaged over four random seeds, and the standard deviation is illustrated with the shaded region. The models are trained using FOMAML unless stated otherwise. The detailed implementation is based on Deleu (2020) (MIT License).
We revisit the application of the zeroing trick at the testing stage on Omniglot in Figure 14 and observe increased testing accuracy; these results are compatible with the ones on mini-ImageNet (cf. Figure 3 in the main manuscript). In the following experiments, we evaluate the testing performance only after applying the zeroing trick.
In Figure 15, the distinction between the performance of models trained with the zeroing trick and that of zero-initialized models is prominent, sharing remarkable similarity with the results on mini-ImageNet (cf. Figure 5 in the main manuscript) in both the 5-way 1-shot and 5-way 5-shot settings. We also show the testing performance of models trained using SOMAML in Figure 16 under a 5-way 5-shot setting, where there is little distinction in performance between the models trained with the zeroing trick and the ones trained with random initialization (in contrast to the results on mini-ImageNet, cf. Figure 10 in the main manuscript).
For the channel memorization task, we construct non-mutually-exclusive training tasks by assigning the label t (where 1 ≤ t ≤ 5 in a few-shot 5-way setting) to a class sampled from class tL to (t + 1)L, where L is 205 on Omniglot. The class-to-channel assignment is not applied to the testing tasks. The result is shown in Figure 17. For a detailed discussion, please refer to Section D.5.
1. What is the main contribution of the paper regarding MAML? 2. What are the strengths of the proposed approach, particularly in its relation to few-shot learning? 3. What are the weaknesses of the paper, especially regarding its assumptions and limitations? 4. How does the reviewer assess the significance of the SCL view introduced in the paper? 5. Do you have any concerns or suggestions regarding the paper's content or its relevance to the field?
Summary Of The Paper Review
Summary Of The Paper In this paper, a new view of MAML under few-shot learning is proposed. The main result is that, under the assumption that the inner-loop updates are applied only to the top linear layer, MAML actually performs supervised contrastive learning (SCL). This SCL view shows that MAML learns a feature transformation that makes intra-class feature distances small while keeping inter-class feature distances large. The zeroing trick is proposed based on this result and shows a performance gain in the experiments. Review Strengths: The SCL view proposed in the paper shows the link between MAML and metric-based approaches for few-shot learning, such as the matching network and the prototypical network. In my view, this is interesting and might inspire future improvements of MAML for few-shot learning. Weaknesses: The analysis in the paper is restricted. The SCL view seems to be valid only for few-shot learning. It is also over-simplified to assume that the inner-loop update does not affect the feature backbone when the task is not few-shot learning. I think a better title for the paper is "MAML is approximately a noisy contrastive learner for few-shot learning". There are no comparisons of MAML with the zeroing trick to metric-based few-shot learning methods in the experiments. In my view, the SCL view of MAML indeed says that MAML is similar to metric-based approaches, especially when the zeroing trick is applied. The difference only lies in the fact that metric-based approaches explicitly introduce metrics, while MAML with the zeroing trick does perceptron-like learning on the top layer. I am surprised to see that metric-based approaches are not even mentioned in the paper. It would make the results in the paper more insightful if this connection were explored in depth.
ICLR
Title MAML is a Noisy Contrastive Learner in Classification Abstract Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms, achieving remarkable success in various learning problems. Yet, with the unique design of nested inner-loop and outer-loop updates, which govern the task-specific and meta-model-centric learning, respectively, the underlying learning objective of MAML remains implicit, impeding a more straightforward understanding of it. In this paper, we provide a new perspective of the working mechanism of MAML. We discover that MAML is analogous to a meta-learner using a supervised contrastive objective in classification. The query features are pulled towards the support features of the same class and against those of different classes. Such contrastiveness is experimentally verified via an analysis based on the cosine similarity. Moreover, we reveal that vanilla MAML has an undesirable interference term originating from the random initialization and the cross-task interaction. We thus propose a simple but effective technique, the zeroing trick, to alleviate the interference. Extensive experiments are conducted on both mini-ImageNet and Omniglot datasets to validate the consistent improvement brought by our proposed method. 1 1 INTRODUCTION Humans can learn from very few samples. They can readily establish their cognition and understanding of novel tasks, environments, or domains even with very limited experience in the corresponding circumstances. Meta-learning, a subfield of machine learning, aims at equipping machines with such capacity to accommodate new scenarios effectively (Vilalta & Drissi, 2002; Grant et al., 2018). Machines learn to extract task-agnostic information so that their performance on unseen tasks can be improved (Hospedales et al., 2020). One highly influential meta-learning algorithm is Model Agnostic Meta-Learning (MAML) (Finn et al., 2017), which has inspired numerous follow-up extensions (Nichol et al., 2018; Rajeswaran et al., 2019; Liu et al., 2019; Finn et al., 2019; Jamal & Qi, 2019; Javed & White, 2019). MAML estimates a set of model parameters such that an adaptation of the model to a new task only requires some updates to those parameters. We take the few-shot classification task as an example to review the algorithmic procedure of MAML. A few-shot classification problem refers to classifying samples from some classes (i.e. query data) after seeing a few examples per class (i.e. support data). In a meta-learning scenario, we consider a distribution of tasks, where each task is a few-shot classification problem and different tasks have different target classes. MAML aims to meta-train the base-model based on training tasks (i.e., the meta-training dataset) and evaluate the performance of the base-model on the testing tasks sampled from a held-out unseen dataset (i.e. the meta-testing dataset). In meta-training, MAML follows a bi-level optimization scheme composed of the inner loop and the outer loop, as shown in Appendix A (please refer to Section 2 for detailed definition). In the inner loop (also known as fast adaptation), the base-model θ is updated to θ′ using the support set. In the outer loop, a loss is evaluated on θ′ using the query set, and its gradient is computed with respect to θ to update the base-model. Since the outer loop requires computing the gradient of gradient (as the update in the inner loop is included in the entire computation graph), it is called second-order MAML (SOMAML). 
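A minimal sketch of the episodic support/query layout described above, assuming a 5-way 1-shot task with 15 query samples per class; the toy dataset and helper names are illustrative, not the paper's data pipeline.

```python
import numpy as np

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=15, rng=None):
    """Draw one few-shot task: k_shot support and n_query query samples for
    each of n_way randomly chosen classes, relabeled with task labels 0..n_way-1."""
    rng = np.random.default_rng() if rng is None else rng
    classes = rng.choice(len(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(data_by_class[c]))
        support += [(data_by_class[c][i], label) for i in idx[:k_shot]]
        query += [(data_by_class[c][i], label) for i in idx[k_shot:k_shot + n_query]]
    return support, query

# toy "dataset": 64 classes with 20 samples each, 8-dimensional features
rng = np.random.default_rng(0)
data = [rng.normal(size=(20, 8)) for _ in range(64)]
S, Q = sample_episode(data, rng=rng)
print(len(S), len(Q))   # 5 support pairs and 75 query pairs
```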
To prevent computing the Hessian matrix, Finn et al. 1Code available at https://github.com/IandRover/MAML_noisy_contrasive_learner (2017) propose first-order MAML (FOMAML) that uses the gradient computed with respect to the inner-loop-updated parameters θ′ to update the base-model. The widely accepted intuition behind MAML is that the models are encouraged to learn generalpurpose representations which are broadly applicable not only to the seen tasks but also to novel tasks (Finn et al., 2017; Raghu et al., 2020; Goldblum et al., 2020). Raghu et al. (2020) confirm this perspective by showing that during fast adaptation, the majority of changes being made are in the final linear layers. In contrast, the convolution layers (as the feature encoder) remain almost static. This implies that the models trained with MAML learn a good feature representation and that they only have to change the linear mapping from features to outputs during the fast adaptation. Similar ideas of freezing feature extractors during the inner loop have also been explored (Lee et al., 2019; Bertinetto et al., 2019; Liu et al., 2020), and have been held as an assumption in theoretical works (Du et al., 2021; Tripuraneni et al., 2020; Chua et al., 2021). While this intuition sounds satisfactory, we step further and ask the following fundamental questions: (1) In what sense does MAML guide any model to learn general-purpose representations? (2) How do the inner and outer loops in the training mechanism of MAML collaboratively prompt to achieve so? (3) What is the role of support and query data, and how do they interact with each other? In this paper, we answer these questions and give new insights on the working mechanism of MAML, which turns out to be closely connected to supervised contrastive learning (SCL)2. Here, we provide a sketch of our analysis in Figure 1. We consider a setting of (a) a 5-way 1-shot paradigm of few-shot learning, (b) the mean square error (MSE) between the one-hot encoding of groundtruth label and the outputs as the objective function, and (c) MAML with a single inner-loop update. At the beginning of the inner loop, we set the linear layer w0 to zero. Then, the inner loop update of w0 is equivalent to adding the support features to w0. In the outer loop, the output of a query sample q1 is actually the inner product between the query feature ϕ(q1) and all support features (the learning rate is omitted for now). As the groundtruth is an one-hot vector, the encoder is trained to either minimize the inner product between the query features and the support features (when they are from different classes, as shown in the green box), or to pull the inner product between the query features and the support features to 1 (when they have the same label, as shown in the red box). Therefore, the inner loop and the outer loop together manifest a SCL objective. Particularly, as the vanilla implementation of MAML uses non-zero (random) initialization for the linear layer, we will show such initialization leads to a noisy SCL objective which would impede the training. In this paper, we firstly review a formal definition of SCL, present a more general case of MAML with cross entropy loss in classification, and show the underlying learning protocol of vanilla MAML as an interfered SCL in Section 2. We then experimentally verify the supervised contrastiveness of MAML and propose to mitigate the interference with our simple but effective technique of the zeroinitialization and zeroing trick (cf. Section 3). 
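A minimal sketch of one first-order MAML step on a single task. For simplicity (and in line with the EFIL assumption used later in the paper), only the linear head is adapted in the inner loop; the toy encoder, dimensions, and learning rates are placeholder assumptions, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_way, d_in, d_feat = 5, 32, 16
eta, rho = 0.01, 1e-3                                         # inner / outer learning rates

phi_W = (0.1 * torch.randn(d_in, d_feat)).requires_grad_()   # toy encoder parameters
head = torch.zeros(d_feat, n_way, requires_grad=True)        # final linear layer w

def encode(x):                                                # stand-in for the conv encoder phi
    return torch.relu(x @ phi_W)

def fomaml_step(xs, ys, xq, yq):
    # inner loop: adapt the head on the support set (encoder kept fixed here)
    inner_loss = F.cross_entropy(encode(xs) @ head, ys)
    g_head, = torch.autograd.grad(inner_loss, head)
    head_adapted = head - eta * g_head                        # theta -> theta'
    # outer loop: query loss evaluated with the adapted parameters
    outer_loss = F.cross_entropy(encode(xq) @ head_adapted, yq)
    # first-order MAML: gradients taken w.r.t. the adapted parameters are
    # applied directly to the initial parameters (no gradient through the inner step)
    g_phi, g_head_outer = torch.autograd.grad(outer_loss, [phi_W, head_adapted])
    with torch.no_grad():
        phi_W -= rho * g_phi
        head -= rho * g_head_outer
    return outer_loss.item()

xs, ys = torch.randn(n_way, d_in), torch.arange(n_way)        # 1-shot support set
xq, yq = torch.randn(3 * n_way, d_in), torch.arange(n_way).repeat(3)
print(fomaml_step(xs, ys, xq, yq))
```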
In summary, our main contributions are three-fold: • We show MAML is implicitly an SCL algorithm in classification and the noise comes from the randomly initialized linear layer and the cross-task interaction. • We verify the inherent contrastiveness of MAML based on the cosine similarity analysis. • Our experiments show that applying the zeroing trick induces a notable improvement in testing accuracy during training and that that during meta-testing, a pronounced increase in the accuracy occurs when the zeroing trick is applied. 2 WHY MAML IS IMPLICITLY A NOISY SUPERVISED CONTRASTIVE ALGORITHM? 2.1 PRELIMINARY: SUPERVISED CONTRASTIVE LEARNING In this work, we aim to bridge MAML and supervised contrastive learning (SCL) and attribute the success of MAML to SCL’s capacity in learning good representations. Thus, we would like to introduce SCL briefly. 2We use the term supervised contrastiveness to refer to the setting of using ground truth label information to differentiate positive samples and negative samples (Khosla et al., 2020). This setting is different from (unsupervised/self-supervised) contrastive learning. Supervised contrastive learning, proposed by Khosla et al. (2020), is a generalization of several metric learning algorithms, such as triplet loss and N-pair loss (Schroff et al., 2015; Sohn, 2016), and has shown the best performance in classification compared to SimCLR and CrossEntropy. In Khosla et al. (2020), SCL is described as “contrasts the set of all samples from the same class as positives against the negatives from the remainder of the batch” and “embeddings from the same class are pulled closer together than embeddings from different classes.” For a sample s, the label information is leveraged to indicate positive samples (i.e., samples having the same label as sample s) and negative samples (i.e., samples having different labels to sample s). The loss of SCL is designed to increase the similarity (or decrease the metric distance) of embeddings of positive samples and to reduce the similarity (or increase the metric distance) of embeddings of negative samples (Khosla et al., 2020). In essence, SCL combines supervised learning and contrastive learning and differs from supervised learning in that the loss contains a measurement of the similarity (or distance) between the embedding of a sample and embeddings of its positive/negative sample pairs. Now we give a formal definition of SCL. For a set of N samples drawn from a n-class dataset. Let i ∈ I = {1, ..., N} be the index of an arbitrary sample. Let A(i) = I \ {i}, P (i) be the set of indices of all positive samples of sample i, and N(i) = A(i) \ P (i) be the set of indices of all negative samples of sample i. Let zi indicates the embedding of sample i. Definition 1 Let Msim be a measurement of similarity (e.g., inner product, cosine similarity). Training algorithms that adopt loss of the following form belong to SCL: LSCL = ∑ i ∑ p∈P (i) c−p,iMsim(zi, zp) + ∑ i ∑ n∈N(i) c+n,iMsim(zi, zn) + c (1) where c−p,i < 0 and c + n,i > 0 for all n, p and i; and c is a constant independent of samples. We further define that a training algorithm that follows Eq.(1), but with either (a) c+n,i < 0 for some n, i or (b) c is a constant dependent of samples, belongs to noisy SCL. 2.2 PROBLEM SETUP We provide the detailed derivation to show that MAML is implicitly a noisy SCL, where we adopt the few-shot classification as the example application. In this section, we focus on the meta-training period. 
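A toy instance of a loss with the form of Definition 1, using the inner product as Msim and constant coefficients c⁻ < 0 and c⁺ > 0; the constant coefficients are a simplification for illustration, since Definition 1 allows them to vary per sample pair.

```python
import numpy as np

def scl_loss(z, labels, c_pos=-1.0, c_neg=1.0):
    """L = sum_i sum_{p in P(i)} c_pos * <z_i, z_p> + sum_i sum_{n in N(i)} c_neg * <z_i, z_n>."""
    loss = 0.0
    for i in range(len(z)):
        for j in range(len(z)):
            if j == i:
                continue
            sim = float(z[i] @ z[j])
            loss += c_pos * sim if labels[j] == labels[i] else c_neg * sim
    return loss

rng = np.random.default_rng(0)
z = rng.normal(size=(10, 8))              # embeddings
labels = rng.integers(0, 5, size=10)
print(scl_loss(z, labels))
# Decreasing this loss rewards large inner products between same-class
# embeddings and penalizes large inner products between different-class ones.
```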
Consider drawing a batch of tasks {T1, . . . , TNbatch} from a meta-training task distribution D. Each task Tn contains a support set Sn and a query set Qn, where Sn = {(sm, tm)} Nway×Nshot m=1 , Qn = {(qm, um)} Nway×Nquery m=1 , sm, qm ∈ RNin are data samples, and tm, um ∈ {1, ..., Nway} are labels. We denote Nway the number of classes in each task, and {Nshot, Nquery} respectively the number of support and query samples per class. The architecture of our base-model comprises of a convolutional encoder ϕ : RNin → RNf (parameterized by φ), a fully connected linear head w ∈ RNf×Nway , and a Softmax output layer, where Nf is the dimension of the feature space. We denote the kth column of w as wk. Note that the base-model parameters θ consist of φ and w. As shown in Appendix A, both FOMAML and SOMAML adopt a training strategy comprising the inner loop and the outer loop. At the beginning of a meta-training iteration, we sample Nbatch tasks. For each task Tn, we perform inner loop updates using the inner loop loss (c.f. Eq. (2)) evaluated on the support data, and then evaluate the outer loop loss (c.f. Eq. (3)) on the updated base-model using the query data. In the ith step of the inner loop, the parameters {φi−1,wi−1} are updated to {φi,wi} using the multi-class cross entropy loss evaluated on the support dataset Sn as L{φi,wi},Sn = E (s,t)∼Sn Nway∑ j=1 1j=t[− log exp(ϕi(s)⊤wj i)∑Nway k=1 exp(ϕ i(s)⊤wki) ] (2) After Nstep inner loop updates, we compute the outer loop loss using the query data Qn: L{φNstep ,wNstep},Qn = E(q,u)∼Qn [− log exp(ϕ Nstep(q)⊤wu Nstep)∑Nway k=1 exp(ϕ Nstep(q)⊤wkNstep) ] (3) Then, we sum up the outer loop losses of all tasks, and perform gradient descent to update the base-model’s initial parameters {φ0,w0}. To show the supervised contrastiveness entailed in MAML, we adopt an assumption that the Encoder ϕ is Frozen during the Inner Loop (the EFIL assumption) and we discuss the validity of the assumption in Section 2.6. Without loss of generality, we consider training models with MAML with Nbatch = 1 and Nstep = 1, and we discuss the generalized version in Section 2.6. For simplicity, the kth element of model output exp(ϕ(s) ⊤wk 0)∑Nway j=1 exp(ϕ(s) ⊤wj0) (respectively exp(ϕ(q) ⊤wk 1)∑Nway j=1 exp(ϕ(q) ⊤wj1) ) of sample s (respectively q) is denoted as sk (respectively qk). 2.3 INNER LOOP AND OUTER LOOP UPDATE OF LINEAR LAYER AND ENCODER In this section, we primarily focus on the update of parameters in the case of FOMAML. The full derivation and discussion of SOMAML are provided in Appendix B. Inner loop update of the linear layer. In the inner loop, the linear layer w0 is updated to w1 with a learning rate η as shown in Eq. (4) in both FOMAML and SOMAML. In contrast to the example in Figure 1, the columns of the linear layer are added with the weighted sum of the features extracted from support samples (i.e., support features). Compared to wk0, wk1 is pushed towards the support features of the same class (i.e., class k) with strength of 1 − sk, while being pulled away from the support features of different classes with strength of sk. wk 1 = wk 0 − η ∂L{φ,w0},S ∂wk0 = wk 0 + η E (s,t)∼S (1k=t − sk)ϕ(s) (4) Outer loop update of the linear layer. In the outer loop, w0 is updated using the query data with a learning rate ρ. For FOMAML, the final linear layer is updated as follows. w′k 0 = wk 0 − ρ ∂L{φ,w1},Q ∂wk1 = wk 0 + ρ E (q,u)∼Q (1k=u − qk)ϕ(q) (5) Note that the computation of qk requires the inner-loop updated w 1. Generally speaking, Eq. (5) resembles Eq. (4). 
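Eq. (4) can be checked numerically: with the encoder features held fixed, one gradient step on the support cross-entropy loss of Eq. (2) matches the closed-form update w_k^1 = w_k^0 + η E_(s,t)∼S[(1_{k=t} − s_k)φ(s)]. A small sketch with toy dimensions, assuming the expectation is the mean over the support set:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_way, n_support, d = 5, 5, 16
eta = 0.01

phi_s = torch.randn(n_support, d)                 # frozen support features phi(s)
t = torch.arange(n_way)                           # support labels (1-shot)
w0 = torch.randn(d, n_way, requires_grad=True)    # initial linear layer

# one gradient step on the inner-loop loss of Eq. (2)
loss = F.cross_entropy(phi_s @ w0, t)             # mean over the support set
g, = torch.autograd.grad(loss, w0)
w1_grad = w0 - eta * g

# closed form of Eq. (4): w1_k = w0_k + eta * E_s[(1{k=t} - s_k) phi(s)]
with torch.no_grad():
    s = torch.softmax(phi_s @ w0, dim=-1)                      # s_k for each support sample
    onehot = F.one_hot(t, n_way).float()
    w1_closed = w0 + eta * phi_s.T @ (onehot - s) / n_support

print(torch.allclose(w1_grad, w1_closed, atol=1e-6))           # True
```

Setting w0 to zero in the same script gives the uniform-softmax special case (s_k = 1/N_way) that the zeroing-trick analysis relies on later.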
It is obvious that, in the outer loop, the query features are added weightedly to the linear layer, and the strength of change relates to the output value. In other words, after the outer loop update, the linear layer memorizes the query features of current tasks. This can cause a crosstask interference because in the next inner loop there would be additional inner products between the support features of the next tasks and the query features of the current tasks. Outer loop update of the encoder. Using the chain rule, the gradient of the outer loop loss with respect to φ (i.e., the parameters of the encoder) is given by ∂L{φ,w1},Q ∂φ = E(q,u)∼Q ∂L{φ,w1},Q ∂ϕ(q) ∂ϕ(q) ∂φ + E(s,t)∼S ∂L{φ,w1},Q ∂ϕ(s) ∂ϕ(s) ∂φ , where the second term can be neglected when FOMAML is considered. Below, we take a deeper look at the backpropagated error of one query data (q, u) ∼ Q. The full derivation is provided in Appendix B.2. ∂L{φ,w1},q ∂ϕ(q) = Nway∑ j=1 (qj − 1j=u)wj 0 + η E (s,t)∼S [−( Nway∑ j=1 qjsj) + su + qt − 1t=u]ϕ(s) (6) 2.4 MAML IS A NOISY CONTRASTIVE LEARNER Reformulating the outer loop loss for the encoder as a noisy SCL loss. We can observe from Eq. (6) that the actual loss for the encoder (evaluated on a single query data (q, u) ∼ Q) is as the following. L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj 0⊤ stop gradient ϕ(q) + η E (s,t)∼S [− Nway∑ j=1 qjsj + su + qt − 1t=u]ϕ(s) ⊤ stop gradient ϕ(q) (7) For SOMAML, the range of “stop gradient” in the second term is different: L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj 0⊤ stop gradient ϕ(q) + η E (s,t)∼S [− Nway∑ j=1 qjsj + su + qt − 1t=u] stop gradient ϕ(s)⊤ϕ(q) (8) With these two reformulations, we observe the essential difference between FOMAML and SOMAML is the range of stop gradient. We provide detailed discussion and instinctive illustration in Appendix B.5 on how this explains the phenomenon that SOMAML often leads to faster convergence. To better deliberate the effect of each term in the reformulated outer loop loss, we define the first term in Eq. (7) or Eq. (8) as interference term, the second term as noisy contrastive term, and the coefficients − ∑Nway j=1 qjsj + su + qt − 1t=u as contrastive coefficients. Understanding the interference term. In the case of j = u, the outer loop loss forces the model to minimize (qj − 1)wj0⊤ϕ(q). This can be problematic because (a) at the beginning of training, w0 is assigned with random values and (b) w0 is added with query features of previous tasks as shown in Eq. (5). Consequently, ϕ(q) is pushed to a direction composed of previous query features or to a random direction, introducing an unnecessary cross-task interference or an initialization interference that slows down the training of the encoder. Noting that the cross-task interference also occurs at the testing period, since, at testing stage, w0 is already added with query features of training tasks, which can be an interference to testing tasks. Understanding the noisy contrastive term. When the query and support data have the same label (i.e., u = t), e.g., class 1, the contrastive coefficients becomes − ∑Nway j=2 qjsj − q1s1 + s1 + q1− 1, which is − ∑Nway j=2 qjsj − (1 − q1)(1 − s1) < 0. This indicates the encoder would be updated to maximize the inner product between ϕ(q) and the support features of the same class. However, when the query and support data are in different classes, the sign of the contrastive coefficient can sometimes be negative. 
The outer loop loss thus cannot well contrast the query features against the support features of different classes, making this loss term not an ordinary SCL loss. To better illustrate the influence of the interference term and the noisy contrastive term, we provide an ablation experiment in Appendix B.7. Theorem 1 below formally connects MAML to SCL. Theorem 1 With the EFIL assumption, FOMAML is a noisy SCL algorithm. With assumptions of (a) EFIL and (b) a single inner-loop update, SOMAML is a noisy SCL algorithm. Proof: For FOMAML, both Eq. (7) (one inner loop update step) and Eq. (26) (multiple inner loop update steps) follows Definition 1. For SOMAML, Eq. (8) follows Definition 1. Introduction of the zeroing trick makes Eq. (7) and Eq. (8) SCL losses. To tackle the interference term and make the contrastive coefficients more accurate, we introduce the zeroing trick: setting the w0 to be zero after each outer loop update, as shown in Appendix A. With the zeroing trick, the original outer loop loss (of FOMAML) becomes L{φ,w1},q = η E (s,t)∼S (qt − 1t=u)ϕ(s) ⊤ stop gradient ϕ(q) (9) For SOMAML, the original outer loop loss becomes L{φ,w1},q = η E (s,t)∼S (qt − 1t=u) stop gradient ϕ(s)⊤ϕ(q) (10) The zeroing trick brings two nontrivial effects: (a) eliminating the interference term in both Eq. (7) and Eq. (8); (b) making the contrastive coefficients follow SCL. For (b), since all the predictive values of support data become the same, i.e., sk = 1Nway , the contrastive coefficient becomes qt − 1t=u, which is negative when the support and query data have the same label, and positive otherwise. With the zeroing trick, the contrastive coefficient follows the SCL loss, as summarized below. Corollary 1 With mild assumptions of (a) EFIL, (b) a single inner-loop update and (c) training with the zeroing trick (i.e., the linear layer is zeroed at the end of each outer loop), both FOMAML and SOMAML are SCL algorithms. Proof: Both Eq. (9) and Eq. (10) follow Definition 1. The introduction of the zeroing trick makes the relationship between MAML and SCL more straightforward. Generally speaking, by connecting MAML and SCL, we can better understand other MAML-based meta-learning studies. 2.5 RESPONSES TO QUESTIONS IN SECTION 1 In what sense does MAML guide any model to learn general-purpose representations? Under the EFIL assumption, MAML is a noisy SCL algorithm in a classification paradigm. The effectiveness of MAML in enabling models to learn general-purpose representations can be attributed to the SCL characteristics of MAML. How do the inner and outer loops in the training mechanism of MAML collaboratively prompt to achieve so? MAML adopts the inner and outer loops to perform noisy SCL sequentially. In the inner loop, the features of support data are memorized by w via inner-loop update. In the outer loop, the softmax output of the query data thus contains the inner products between the support features and the query feature. What is the role of support and query data, and how do they interact with each other? We show that the original loss in MAML can be reformulated as a loss term containing the inner products of the embedding of the support and query data. In FOMAML, the support features act as the reference, while the query features are updated to move towards the support features of the same class and against those of the different classes. 2.6 GENERALIZATION OF OUR ANALYSIS In Appendix C, we provide the analysis where Nbatch ≥ 1 and Nstep ≥ 1. 
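A minimal sketch of where the zeroing trick sits in meta-training (the final linear layer is cleared after every outer-loop update, as in Appendix A); the tiny model, toy task sampler, and first-order treatment are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_in, d_feat, n_way = 32, 16, 5
eta, rho = 0.01, 1e-3

encoder = torch.nn.Sequential(torch.nn.Linear(d_in, d_feat), torch.nn.ReLU())
head = torch.zeros(d_feat, n_way, requires_grad=True)     # w, zeroed from the start
opt = torch.optim.Adam(encoder.parameters(), lr=rho)

def sample_task():                                         # toy 5-way 1-shot task
    xs, xq = torch.randn(n_way, d_in), torch.randn(n_way, d_in)
    return xs, torch.arange(n_way), xq, torch.arange(n_way)

for _ in range(3):
    xs, ys, xq, yq = sample_task()
    # inner loop: adapt only the head on the support set (EFIL assumption)
    g, = torch.autograd.grad(F.cross_entropy(encoder(xs) @ head, ys), head)
    head_adapted = head - eta * g
    # outer loop: update the encoder on the query loss
    opt.zero_grad()
    F.cross_entropy(encoder(xq) @ head_adapted, yq).backward()
    opt.step()
    with torch.no_grad():
        head.zero_()          # the zeroing trick: clear w after the outer-loop update
```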
For the EFIL assumption, it can hardly be dropped because the behavior of the updated encoder is intractable. Besides, Raghu et al. (2020) show that the representations of intermediate layers do not change notably during the inner loop of MAML, and thus it is understood that the main function of the inner loop is to change the final linear layer. Furthermore, the EFIL assumption is empirically reasonable, since previous works (Raghu et al., 2020; Lee et al., 2019; Bertinetto et al., 2019; Liu et al., 2020) yield comparable performance while leaving the encoder untouched during the inner loop. With our analysis, one may notice that MAML is approximately a metric-based few-shot learning algorithm. From a high-level perspective, under the EFIL assumption, second-order MAML is similar to metric-based few-shot learning algorithms, such as MatchingNet (Vinyals et al., 2016), Prototypical network (Snell et al., 2017), and Relation network (Sung et al., 2018). The main difference lies in the metric and the way prototypes are constructed. Our work follows the setting adopted by MAML, such as using negative LogSoftmax as objective function, but we can effortlessly generalize our analysis to a MSE loss as had been shown in Figure 1. As a result, our work points out a new research direction in improving MAML by changing the objective functions in the inner and the outer loops, e.g., using MSE for the inner loop but negative LogSoftmax for the outer loop. Besides, in MAML, we often obtain the logits by multiplying the features by the linear weight w. Our work implies future direction as to alternatively substitute this inner product operation with other metrics or other similarity measurements such as cosine similarity or negative Euclidean distance. 3 EXPERIMENTAL RESULTS In this section, we provide empirical evidence of the supervised contrastiveness of MAML and show that zero-initialization of w0, reduction in the initial norm of w0, or the application of zeroing trick can speed up the learning profile. This is applicable to both SOMAML and FOMAML. 3.1 SETUP We conduct our experiments on the mini-ImageNet dataset (Vinyals et al., 2016; Ravi & Larochelle, 2017) and the Omniglot dataset (Lake et al., 2015). For the results on the Omniglot dataset, please refer to Appendix E. For the mini-ImageNet, it contains 84 × 84 RGB images of 100 classes from the ImageNet dataset with 600 samples per class. We split the dataset into 64, 16 and 20 classes for training, validation, and testing as proposed in (Ravi & Larochelle, 2017). We do not perform hyperparameter search and thus are not using the validation data. For all our experiments of applying MAML into few-shot classification problem, where we adopt two experimental settings: 5-way 1- shot and 5-way 5-shot, with the batch size Nbatch being 4 and 2, respectively (Finn et al., 2017). The few-shot classification accuracy is calculated by averaging the results over 400 tasks in the test phase. For model architecture, optimizer and other experimental details, please refer to Appendix D.1. 3.2 COSINE SIMILARITY ANALYSIS VERIFIES THE IMPLICIT CONTRASTIVENESS IN MAML In Section 2, we show that the encoder is updated so that the query features are pushed towards the support features of the same class and pulled away from those of different classes. Here we verify this supervised contrastiveness experimentally. Consider a relatively overfitting scenario where there are five classes of images and for each class there are 20 support images and 20 query images. 
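The base-model referred to in Section 3.1 (a convolutional encoder of four blocks, each a 3 × 3 convolution with 64 filters, batch normalization, ReLU and 2 × 2 max-pooling, followed by a linear head; cf. Appendix D.1) can be sketched as follows; the feature dimension assumes 84 × 84 inputs, and the bias-free head mirrors w ∈ R^{Nf×Nway}.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
            nn.MaxPool2d(2))

    def forward(self, x):
        return self.net(x)

class BaseModel(nn.Module):
    """Convolutional encoder phi (four blocks, cf. Appendix D.1) plus linear head w."""
    def __init__(self, n_way=5, in_ch=3, feat_dim=64 * 5 * 5):
        super().__init__()
        self.encoder = nn.Sequential(
            ConvBlock(in_ch), ConvBlock(64), ConvBlock(64), ConvBlock(64), nn.Flatten())
        self.head = nn.Linear(feat_dim, n_way, bias=False)   # w in R^{Nf x Nway}

    def forward(self, x):
        return self.head(self.encoder(x))   # logits; the Softmax lives inside the loss

model = BaseModel()
print(model(torch.randn(2, 3, 84, 84)).shape)   # torch.Size([2, 5])
```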
We fix the support and query set (i.e. the data is not resampled every iteration) to verify the concept that the support features work as positive and negative samples. Channel shuffling is used to avoid the undesirable channel memorization effect (Jamal & Qi, 2019; Rajendran et al., 2020). We train the model using FOMAML and examine how well the encoder can separate the data of different classes in the feature space by measuring the averaged cosine similarities between the features of each class. The results are averaged over 10 random seeds. As shown in the top row of Figure 2, the model trained with MAML learns to separate the features of different classes. Moreover, the contrast between the diagonal and the off-diagonal entries of the heatmap increases as we remove the initialization interference (by zero-initializing w0, shown in the middle row) and remove the cross-task interference (by applying the zeroing trick, shown in the bottom row). The result agrees with our analysis that MAML implicitly contains the interference term which can impede the encoder from learning a good feature representation. For experiments on semantically similar classes of images, the result is shown in Section D.3. 3.3 ZEROING LINEAR LAYER AT TESTING TIME INCREASES TESTING ACCURACY Before starting our analysis on benchmark datasets, we note that the cross-task interference can also occur during meta-testing. In the meta-testing stage, the base-model is updated in the inner loop using support data S and then the performance is evaluated using query data Q, where S and Q are drawn from a held-out, unseen meta-testing dataset. Recall that at the end of the outer loop (in meta-training stage), the query features are added weightedly to the linear layer w0. In other words, at the beginning of meta-testing, w0 is already added with the query features of previous training tasks, which can drastically influence the performance on the unseen tasks. To validate this idea, we apply the zeroing trick at meta-testing time (which we refer to zeroing w0 at the beginning of the meta-testing time) and show such trick increases the testing accuracy of the model trained with FOMAML. As illustrated in Figure 3, compared to directly entering meta-testing (i.e. the subplot at the left), additionally zeroing the linear layer at the beginning of each meta-testing time (i.e. the subplot at the right) increases the testing accuracy of the model whose linear layer is randomly initialized or zero-initialized (denoted by the red and orange curves, respectively). And the difference in testing performance sustains across the whole training session. In the following experiments, we evaluate the testing performance only with zeroing the linear layer at the beginning of the meta-testing stage. By zeroing the linear layer, the potential interference brought by the prior (of the linear layer) is ignored. Then, we can fully focus on the capacity of the encoder in learning a good feature representation. 3.4 SINGLE INNER LOOP UPDATE SUFFICES WHEN USING THE ZEROING TRICK In Eq. (4) and Eq. (21), we show that the features of the support data are added to the linear layer in the inner loop. Larger number of inner loop update steps can better offset the effect of interference brought by a non-zeroed linear layer. In other words, when the models are trained with the zeroing trick, a larger number of inner loop updates can not bring any benefit. We validate this intuition in Figure 4 under a 5-way 1-shot setting. 
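The class-averaged cosine-similarity heatmap of Section 3.2 can be sketched as follows; random vectors stand in for encoder features, and the diagonal here includes each sample's similarity with itself, a small simplification.

```python
import numpy as np

def class_pair_cosine(features, labels, n_classes=5):
    """Entry (a, b): mean cosine similarity between features of classes a and b."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    heat = np.zeros((n_classes, n_classes))
    for a in range(n_classes):
        for b in range(n_classes):
            heat[a, b] = (f[labels == a] @ f[labels == b].T).mean()
    return heat

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))          # 20 samples for each of 5 classes
labels = np.repeat(np.arange(5), 20)
print(np.round(class_pair_cosine(feats, labels), 3))
# A well-separated encoder produces a bright diagonal (high within-class
# similarity) and dim off-diagonal entries.
```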
In the original FOMAML, the models trained with a single inner loop update step (denoted as red curve) converge slower than those trained with update step of 7 (denoted as purple curve). On the contrary, when the models are trained with the zeroing trick, models with various inner loop update steps converge at the same speed. 3.5 EFFECT OF INITIALIZATION AND THE ZEROING TRICK In Eq. (7), we observe an interference derived from the historical task features or random initialization. We validate our formula by examining the effects of (1) reducing the norm of w0 at initialization and (2) applying the zeroing trick. From Figure 5, the performance is higher when the initial norm of w0 is lower. Compared to random initialization, reducing the norm via down-scaling w0 by 0.7 yields visible differences. Besides, the testing accuracy of MAML with zeroing trick (the purple curve) outperforms that of original MAML. 4 CONCLUSION This paper presents an extensive study to demystify how the seminal MAML algorithm guides the encoder to learn a general-purpose feature representation and how support and query data interact. Our analysis shows that MAML is implicitly a supervised contrastive learner using the support features as positive and negative samples to direct the update of the encoder. Moreover, we unveil an interference term hidden in MAML originated from the random initialization or cross-task interaction, which can impede the representation learning. Driven by our analysis, removing the interference term by a simple zeroing trick renders the model unbiased to seen or unseen tasks. Furthermore, we show constant improvements in the training and testing profiles with this zeroing trick, with experiments conducted on the mini-ImageNet and Omniglot datasets. APPENDIX A ORIGINAL MAML AND MAML WITH THE ZEROING TRICK Algorithm 1 Second-order MAML Require: Task distribution D Require: η, ρ: inner loop and outer loop learning rates Require: Randomly initialized base-model parameters θ 1: while not done do 2: Sample tasks {T1, . . . TNbatch} from D 3: for n = 1, 2, . . . , Nbatch do 4: {Sn, Qn} ← sample from Tn 5: θn = θ 6: for i = 1, 2, . . . , Nstep do 7: θn ← θn − η∇θnLθn,Sn 8: end for 9: end for 10: Update θ ← θ − ρ ∑Nbatch n=1 ∇θLθn,Qn 11: end while Algorithm 2 First-order MAML Require: Task distribution D Require: η, ρ: inner loop and outer loop learning rates Require: Randomly initialized base-model parameters θ 1: while not done do 2: Sample tasks {T1, . . . TNbatch} from D 3: for n = 1, 2, . . . , Nbatch do 4: {Sn, Qn} ← sample from Tn 5: θn = θ 6: for i = 1, 2, . . . , Nstep do 7: θn ← θn − η∇θnLθn,Sn 8: end for 9: end for 10: Update θ ← θ − ρ ∑Nbatch n=1 ∇θnLθn,Qn 11: end while Algorithm 3 Second-order MAML with the zeroing trick Require: Task distribution D Require: η, ρ: inner loop and outer loop learning rates Require: Randomly initialized base-model parameters θ 1: Set w← 0 (the zeroing trick) 2: while not done do 3: Sample tasks {T1, . . . TNbatch} from D 4: for n = 1, 2, . . . , Nbatch do 5: {Sn, Qn} ← sample from Tn 6: θn = θ 7: for i = 1, 2, . . . , Nstep do 8: θn ← θn − η∇θnLθn,Sn 9: end for 10: end for 11: Update θ ← θ − ρ ∑Nbatch n=1 ∇θLθn,Qn 12: Set w← 0 (the zeroing trick) 13: end while B SUPPLEMENTARY DERIVATION In this section, we provide the full generalization and further discussion that supplement the main paper. We consider the case of Nbatch = 1 and Nstep = 1 under the EFIL assumption. We provide the outer loop update of the linear layer under SOMAML in Section B.1. 
Next, we offer the full derivation of the outer loop update of the encoder in Section B.2. Then, we reformulate the outer loop loss for the encoder in both FOMAML and SOMAML in Section B.3 and Section B.4. Afterward, we discuss the main difference in FOMAML and SOMAML in detail in Section B.5. Finally, we show the performance of the models trained using the reformulated loss in Section B.6. B.1 THE DERIVATION OF OUTER LOOP UPDATE FOR THE LINEAR LAYER USING SOMAML Here, we provide the complete derivation of the outer loop update for the linear layer. Using SOMAML with support set S and query set Q, the update of the linear layer follows w′0k = wk 0 − ρ ∂L{φ,w1},Q ∂wk0 = wk 0 − ρ Nway∑ m=1 ∂wm 1 ∂wk0 · ∂L{φ,w1},Q ∂wm1 = wk 0 − ρ∂wk 1 ∂wk0 · ∂L{φ,w1},Q ∂wk1 − ρ Nway∑ m ̸=k ∂wm 1 ∂wk0 · ∂L{φ,w1},Q ∂wm1 = wk 0 + ρ[I − η E (s,t)∼S (sk − s2k)ϕ(s)ϕ(s)T ] E (q,u)∼Q (1k=u − qk)ϕ(q) + ρη ∑ m ̸=k [ E (s,t)∼S (smsk)ϕ(s)ϕ(s)T ][ E (q,u)∼Q (1m=u − qm)ϕ(q)] = wk 0 + ρ[I − η E (s,t)∼S skϕ(s)ϕ(s)T ] E (q,u)∼Q (1k=u − qk)ϕ(q) + ρη Nway∑ m=1 [ E (s,t)∼S (smsk)ϕ(s)ϕ(s)T ][ E (q,u)∼Q (1m=u − qm)ϕ(q)] (11) We can further simplify Eq. (11) to Eq. (12) with the help of the zeroing trick. w′0k = ρ[I − η E (s,t)∼S skϕ(s)ϕ(s)T ] E (q,u)∼Q (1k=u − qk)ϕ(q) (12) This is because the zeroing trick essentially turns the logits of all support samples to zero, and consequently the predicted probability (softmax) output sm becomes 1Nway for all channel m. Therefore, the third term in Eq. (11) turns out to be zero (c.f. Eq. (13)). The equality of Eq. (13) holds since the summation of the (softmax) outputs is one. ρη N2way Nway∑ m=1 [ E (s,t)∼S ϕ(s)ϕ(s)T ][ E (q,u)∼Q (1m=u − qm)ϕ(q)] = ρη N2way [ E (s,t)∼S ϕ(s)ϕ(s)T ] E (q,u)∼Q ϕ(q) Nway∑ m=1 (1m=u − qm) = 0 (13) B.2 THE FULL DERIVATION OF THE OUTER LOOP UPDATE OF THE ENCODER. As the encoder ϕ is parameterized by φ, the outer loop gradient with respect to φ is given by ∂L{φ,w1},Q ∂φ = E(q,u)∼Q ∂L{φ,w1},Q ∂ϕ(q) ∂ϕ(q) ∂φ +E(s,t)∼S ∂L{φ,w1},Q ∂ϕ(s) ∂ϕ(s) ∂φ . We take a deeper look at the backpropagated error ∂L{φ,w1},Q ∂ϕ(q) of the feature of one query data (q, u) ∼ Q, based on the following form: − ∂L{φ,w1},Q ∂ϕ(q) = wu 1 − Nway∑ j=1 (qjwj 1) = Nway∑ j=1 (1j=u − qj)wj 1 = Nway∑ j=1 (1j=u − qj)wj 0 + η Nway∑ j=1 [1j=u − qj ][ E (s,t)∼S (1j=t − sj)ϕ(s)] = Nway∑ j=1 (1j=u − qj)wj 0 + η E (s,t)∼S [( Nway∑ j=1 qjsj)− su − qt + 1t=u]ϕ(s) (14) B.3 REFORMULATION OF THE OUTER LOOP LOSS FOR THE ENCODER AS NOISY SCL LOSS. We can derive the actual loss (evaluated on a single query data (q, u) ∼ Q) that the encoder uses under FOMAML scheme as follows: L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj0⊤ stop gradient ϕ(q)− η E (s,t)∼S [( Nway∑ j=1 qjsj)− su − qt + 1t=u]ϕ(s) ⊤ stop gradient ϕ(q) (15) For SOMAML, we need to additionally plug Eq. (4) into Eq. (3). L{φ,w1},q = Nway∑ j=1 (qj − 1j=u)wj0⊤ stop gradient ϕ(q)− η E (s,t)∼S [( Nway∑ j=1 qjsj)− su − qt + 1t=u] stop gradient ϕ(s)⊤ϕ(q) (16) B.4 INTRODUCTION OF THE ZEROING TRICK MAKES EQ. (7) AND EQ. (8) SCL LOSSES. Apply the zeroing trick to Eq. (7) and Eq. (8), we can derive the actual loss Eq. (17) and Eq. (18) that the encoder follows. L{φ,w1},q = η E (s,t)∼S (qt − 1t=u)ϕ(s) ⊤ stop gradient ϕ(q) (17) L{φ,w1},q = η E (s,t)∼S (qt − 1t=u) stop gradient ϕ(s)⊤ϕ(q) (18) With these two equations, we can observe the essential difference in FOMAML and SOMAML is the range of stopping gradients. We would further discuss the implication of different ranges of gradient stopping in Appendix B.5. 
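The different stop-gradient ranges of Eq. (17) and Eq. (18) can be made concrete with detach(): under the first-order loss no gradient reaches the support features, whereas under the second-order loss both support and query features are moved. The random features, labels, and learning rate below are toy stand-ins for encoder outputs, not the paper's setup.

```python
import torch

torch.manual_seed(0)
n_way, d, eta = 5, 16, 0.01

phi_s = torch.randn(n_way, d, requires_grad=True)         # support features (1-shot)
t = torch.arange(n_way)
phi_q = torch.randn(3 * n_way, d, requires_grad=True)     # query features
u = torch.arange(n_way).repeat(3)

# With the zeroing trick, one inner step builds the head from support features
# (Eq. (4) with w0 = 0 and uniform softmax outputs s_k = 1/N_way).
onehot = torch.nn.functional.one_hot(t, n_way).float()
w1 = eta * phi_s.T @ (onehot - 1.0 / n_way) / len(t)
q_soft = torch.softmax(phi_q @ w1, dim=-1)                 # query outputs q

coeff = q_soft[:, t] - (u[:, None] == t[None, :]).float()  # (q_t - 1{t=u}) per (query, support) pair

loss_fo = eta * (coeff.detach() * (phi_q @ phi_s.detach().T)).mean()   # Eq. (17): stop-grad on phi(s)
loss_fo.backward()
print(phi_s.grad is None, phi_q.grad is not None)          # True True: no gradient into phi(s)

phi_q.grad = None
loss_so = eta * (coeff.detach() * (phi_q @ phi_s.T)).mean()            # Eq. (18): coefficient only
loss_so.backward()
print(phi_s.grad is not None, phi_q.grad is not None)      # True True: both feature sets receive gradients
```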
B.5 DISCUSSION ABOUT THE DIFFERENCE BETWEEN FOMAML AND SOMAML Central to the mystery of MAML is the difference between FOMAML and SOMAML. Plenty of work is dedicated to approximating or estimating the second-order derivatives in the MAML algorithm in a more computational-efficient or accurate manner (Song et al., 2020; Rothfuss et al., 2019; Liu et al., 2019). With the EFIL assumption and our analysis through connecting SCL to these algorithms, we found that we can better understand the distinction between FOMAML and SOMAML from a novel perspective. To better understand the difference, we can compare Eq. (7) with Eq. (8) or compare Eq. (9) with Eq. (10). To avoid being distracted by the interference terms, we provide the analysis of the latter. The main difference between Eq. (9) and Eq. (10) is the range of gradient stopping and we will show that this difference results in a significant distinction in the feature space. To begin with, by chain rule, we have ∂L∂φ = E(q,u)∼Q ∂L ∂ϕ(q) ∂ϕ(q) ∂φ + E(s,t)∼S ∂L ∂ϕ(s) ∂ϕ(s) ∂φ . As we specifically want to know how the encoded features are updated given different losses, we can look at the terms ∂L∂ϕ(q) and ∂L ∂ϕ(s) by differentiating Eq. (9) and Eq. (10) with respect to the features of query data q and support data s, respectively. FOMAML: ∂L ∂ϕ(q) = η E (s,t)∼S (qt − 1t=u)ϕ(s) ∂L ∂ϕ(s) = 0 (19) SOMAML: ∂L ∂ϕ(q) = η E (s,t)∼S (qt − 1t=u)ϕ(s) ∂L ∂ϕ(s) = η E (s,t)∼S (qt − 1t=u)ϕ(q) (20) Obviously, as the second equation in Eq (19) is zero, we know that in FOMAML, the update of the encoder does consider the change of the support features. The encoder is updated to move the query features closer to support features of the same class and further to support features of different classes in FOMAML. On the contrary, we can tell from the above equations that in SOMAML, the encoder is updated to make support features and query features closer if both come from the same class and make support features and query features further if they come from different classes. We illustrate the difference in Figure 6. For simplicity, we do not consider the scale of the coefficients but their signs. The subplot on the left indicates that this FOMAML loss guides the encoder to be updated so that the feature of the query data moves 1) towards the support feature of the same class, and 2) against the support features of the different classes. On the other hand, the SOMAML loss guides the encoder to be updated so that 1) when the support data and query data belong to the same class, their features move closer, and otherwise, their features move further. This generally explains why models trained using SOMAML generally converge faster than those trained using FOMAML. B.6 EXPLICITLY COMPUTING THE REFORMULATING LOSS USING EQ. (7) AND EQ. (8) Under the EFIL assumption, we show that MAML can be reformulated as a loss taking noisy SCL form. Below, we consider a setting of 5-way 1-shot mini-ImageNet few-shot classification task, under the condition of no inner loop update of the encoder. (This is the assumption that our derivation heavily depends on. It means that we now only update the encoder in the outer loop.) We empirically show that explicitly computing the reformulated losses of Eq. (7), Eq. (17) and Eq. (18) yield almost the same curves as MAML (with the EFIL assumption). Please note that the reformulated losses are used to update the encoders, for the linear layer w0, we explicitly update it using Eq. (5). 
Note that although the performance models training using FOMAML, FOMAML with the zeroing trick, and SOMAML converge to similar testing accuracy, the overall testing performance during the training process is distinct. The results are averaged over three random seeds. B.7 THE EFFECT OF INTERFERENCE TERM AND NOISY CONTRASTIVE TERM Reformulating the loss of MAML into a noisy SCL form enables us to further investigate the effects brought by the interference term and the noisy contrastive term, which we presume both effects to be negative. To investigate the effect of the interference term, we simply consider the loss adopted by firstorder MAML as in Eq. (7) but with the interference term dropped (denoted as “n1 ×”). As for the noisy contrastive term, the noise comes from the fact that “when the query and support data are in different classes, the sign of the contrastive coefficient can sometimes be negative”, as being discussed in Section 2.4. To mitigate this noise, we consider the loss in Eq. (7) with the term −( ∑Nway j=1 qjsj) + su dropped from the contrastive coefficient, and denote it as “n2 ×”. On the other hand, we also implement a loss with “n1 ×, n2 ×”, which is actually Eq. (9). We adopt the same experimental setting as Section B.6. In Figure 8, we show the testing profiles of the original reformulated loss (i.e., the curve in red, labeled as “n1 ✓, n2 ✓”), dropping the interference term (i.e., the curve in orange, labeled as “n1 ×, n2 ✓”), dropping the noisy part of the contrastive term (i.e., the curve in green, labeled as “n1 ✓, n2 ×”) or dropping both (i.e., the curve in blue, labeled as “n1 ×, n2 ×”). We can see that either dropping the interference term or dropping dropping the noisy part of contrastive coefficients yield profound benefit. To better understand how noisy is the noisy contrastive term, i.e., how many times the sign of the contrastive coefficient is negative when the query and support data are in different classes, we explicitly record the ratio of the contrastive term being positive or negative. We adopt the same experimental setting as Section B.6. The result is shown in Figure 9. When the zeroing trick is applied, the ratio of contrastive term being negative (shown as the red curve on the right subplot) is 0.2, which is 1Nway where Nway = 5 in our setting. On the other hand, when the zeroing trick is not applied, the ratio of contrastive term being negative (shown as the orange color on the right subplot) is larger than 0.2. This additional experiment necessitates the application of the zeroing trick. C A GENERALIZATION OF OUR ANALYSIS In this section, we derive a more general case of the encoder update in the outer loop. We consider drawing Nbatch tasks from the task distribution D and having Nstep update steps in the inner loop while keeping the EFIL assumption. To derive a more general case, we use wki,n to denote the kth column of wi,n, where wi,n is updated from w0 using support data Sn for i inner-loop steps. For simplicity, the kth channel softmax predictive output exp(ϕ(s) ⊤wk i,n)∑Nway j=1 exp(ϕ(s) ⊤wji,n) of sample s (using wi−1,n) is denoted as si,nk . Inner loop update for the linear layer We yield the inner loop update for the final linear layer in Eq. (21) and Eq. (22). 
wk i,n = wk i−1,n − η ∂L{φ,wi−1,n},Sn ∂wki−1,n = wk i−1,n + η E (s,t)∼Sn (1k=t − si−1,nk )ϕ(s) (21) wk Nstep,n = wk 0 − η Nstep∑ i=1 E (s,t)∼Sn (1k=t − si−1,nk )ϕ(s) (22) Outer loop update for the linear layer We derive the outer loop update for the linear layer in SOMAML, with denoting I = {1, 2, ..., Nway}: w′k 0 = wk 0 − ρ Nbatch∑ n=1 ∂L{φ,wkNstep,n},Qn ∂wk0 = wk 0 − ρ Nbatch∑ n=1 ∑ p0=k,p1∈I,...,pNway∈I [( Nstep−1∏ i=0 ∂wpi+1 i+1,n ∂wpi i,n ) ∂L{φ,wNstep,n},Qn ∂wpNstep Nstep,n ] (23) When it comes to FOMAML, we have w′k 0 = wk 0 − ρ Nbatch∑ n=1 ∂L{φ,wkNstep,n},Qn ∂wkNstep,n = w0k + ρ Nbatch∑ n=1 E (q,u)∼Qn (1k=u − qNstep,nk )ϕ(q) (24) Outer loop update for the encoder We derive the outer loop update of the encoder under FOMAML as below. We consider the back-propagated error of the feature of one query data (q, u) ∼ Qn. Note that the third equality below holds by leveraging Eq. (21). − ∂L{φ,wNstep,n},Qn ∂ϕ(q) = wu Nstep,n − Nway∑ i=1 (qNstep,ni wi Nstep,n) = Nway∑ i=1 (1i=u − qNstep,ni )wi Nstep,n = Nway∑ i=1 (1i=u − qNstep,ni )[w 0 i + η Nstep∑ p=1 E (s,t)∼Sn (1i=t − sp−1,ni )ϕ(s)] = Nway∑ i=1 (1i=u − qNstep,ni )w 0 i + η Nway∑ i=1 (1i=u − qNstep,ni ) Nstep∑ p=1 E (s,t)∼Sn (1i=u − sp−1,ni )ϕ(s) = Nway∑ i=1 (1i=u − qNstep,ni )w 0 i + η E (s,t)∼Sn Nstep∑ p=1 [( Nway∑ j=1 qNstep,nj s p−1,n j )− s p−1,n u − q Nstep,n t + 1t=u]ϕ(s) (25) Reformulating the Outer Loop Loss for the Encoder as Noisy SCL Loss. From Eq. (25), we can derive the generalized loss (of one query sample (q, u) ∼ Qn) that the encoder uses under FOMAML scheme. L{φ,wNstep,n},q = Nway∑ i=1 (1i=u − qNstep,ni )w 0 i ⊤ stop gradient ϕ(q) + η E (s,t)∼Sn Nstep∑ p=1 [( Nway∑ j=1 qNstep,nj s p−1,n j )− s p−1,n u − q Nstep,n t + 1t=u]ϕ(s) ⊤ stop gradient ϕ(q) (26) D EXPERIMENTS ON MINI-IMAGENET DATASET D.1 EXPERIMENTAL DETAILS IN MINI-IMAGENET DATASET The model architecture contains four basic blocks and one fully connected linear layer, where each block comprises a convolution layer with a kernel size of 3 × 3 and filter size of 64, batch normalization, ReLU nonlineartity and 2 × 2 max-poling. The models are trained with the softmax cross entropy loss function using the Adam optimizer with an outer loop learning rate of 0.001 (Antoniou et al., 2019). The inner loop step size η is set to 0.01. The models are trained for 30000 iterations (Raghu et al., 2020). The results are averaged over four random seeds, and we use the shaded region to indicate the standard deviation. Each experiment is run on either a single NVIDIA 1080-Ti or V100 GPU. The detailed implementation is based on Long (2018) (MIT License). D.2 THE EXPERIMENTAL RESULT OF SOMAML The results with SOMAML are shown in Figure 10. Note that as it is possible that longer training can eventually overcome the noise factor and reach similar performance as the zeroing trick, the benefit of the zeroing trick is best seen at the observed faster convergence results when compared to vanilla MAML. D.3 COSINE SIMILARITY ANALYSIS ON SEMANTICALLY SIMILAR CLASSES VERIFIES THE IMPLICIT CONTRASTIVENESS IN MAML In Figure 2, we randomly sample five classes of images under each random seed. Given the rich diversity of the classes in mini-ImageNet, we can consider that the five selected classes as semantically dissimilar or independent for each random seed. Here, we also provide the experimental outcomes using a dataset composed of five semantically similar classes selected from the miniImageNet dataset: French bulldog, Saluki, Walker hound, African hunting dog, and Golden retriever. 
As in the original setting, we train the model using FOMAML and average the results over ten random seeds. As shown in Figure 11, the result is consistent with Figure 2. In conclusion, the supervised contrastiveness is manifested with the application of the zeroing trick even when a dataset of semantically similar classes is considered. D.4 EXPERIMENTAL RESULTS ON LARGER NUMBER OF SHOTS To empirically verify that our theoretical derivation generalizes to settings with a larger number of shots, we conduct an experiment on a 5-way 25-shot classification task using FOMAML with four random seeds, adopting mini-ImageNet as the example dataset. As shown in Figure 12, models trained with the zeroing trick again yield the best performance, consistent with our theoretical result that MAML with the zeroing trick is SCL without noise and interference. D.5 THE ZEROING TRICK MITIGATES THE CHANNEL MEMORIZATION PROBLEM The channel memorization problem (Jamal & Qi, 2019; Rajendran et al., 2020) is a known issue occurring in a non-mutually-exclusive task setting, e.g., when the task-specific class-to-label assignment is not random, so that the label can be inferred from the query data alone (Yin et al., 2020). Consider a 5-way K-shot experiment where the total number of training classes is 5 × L. We construct tasks by assigning the label t to a class sampled from class tL to (t + 1)L. It is conceivable that the model will learn to map the query data directly to the label without using the information in the support data and will thus fail to generalize to unseen tasks. This phenomenon can be explained from the perspective that the tth column of the final linear layer already accumulates the query features of the tLth to (t + 1)Lth classes. Zeroing the final linear layer implicitly forces the model to use the information imprinted from the support features to infer the label and thus mitigates this problem. We use the mini-ImageNet dataset and consider the case of L = 12. As shown in Figure 13, the zeroing trick protects the model from the channel memorization problem, whereas zero-initialization of the linear layer only helps at the beginning of training. Moreover, the performance of models trained with the zeroing trick under this non-mutually-exclusive task setting matches the performance under the conventional few-shot setting shown in Figure 5. As the zeroing trick clears out the final linear layer and equalizes the logits, our result essentially accords with Jamal & Qi (2019), which proposes a regularizer that maximizes the entropy of the predictions of the meta-initialized model. E EXPERIMENTS ON OMNIGLOT DATASET Omniglot is a hand-written character dataset containing 1623 character classes, each with 20 samples drawn by different people (Lake et al., 2015). The dataset is split into training (1028 classes), validation (172 classes) and testing (423 classes) sets (Vinyals et al., 2016). Since we follow Finn et al. (2017) for setting hyperparameters, we do not use the validation data. The character images are resized to 28 × 28. For all our experiments, we adopt two experimental settings: 5-way 1-shot and 5-way 5-shot, where the batch size Nbatch is 32 and Nquery is 15 in both cases (Finn et al., 2017). The inner loop learning rate η is 0.4. The models are trained for 3000 iterations using FOMAML or SOMAML. The few-shot classification accuracy is calculated by averaging the results over 1000 tasks in the test stage.
The model architecture follows the one used for mini-ImageNet, except that we replace each convolution followed by max-pooling with a strided convolution, as in Finn et al. (2017). The loss function, optimizer, and outer loop learning rate are the same as those used in the mini-ImageNet experiments. Each experiment is run on a single NVIDIA 1080-Ti GPU. The results are averaged over four random seeds, and the standard deviation is illustrated with the shaded region. The models are trained using FOMAML unless stated otherwise. The detailed implementation is based on Deleu (2020) (MIT License). We revisit the application of the zeroing trick at the testing stage on Omniglot in Figure 14 and observe increased testing accuracy; these results are consistent with the ones on mini-ImageNet (cf. Figure 3 in the main manuscript). In the following experiments, we evaluate the testing performance only after applying the zeroing trick. In Figure 15, the gap between models trained with the zeroing trick and zero-initialized models is prominent, closely resembling the mini-ImageNet results (cf. Figure 5 in the main manuscript) in both the 5-way 1-shot and 5-way 5-shot settings. We also show the testing performance of models trained using SOMAML in Figure 16 under a 5-way 5-shot setting, where there is little difference in performance (compare with the mini-ImageNet results in Figure 10 of the main manuscript) between the models trained with the zeroing trick and the ones trained with random initialization. For the channel memorization task, we construct non-mutually-exclusive training tasks by assigning the label t (where 1 ≤ t ≤ 5 in a 5-way setting) to a class sampled from class tL to (t + 1)L, where L is 205 on Omniglot. The class-to-channel assignment is not applied to the testing tasks. The result is shown in Figure 17; for a detailed discussion, please refer to Section D.5.
1. What is the main contribution of the paper regarding MAML? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity and motivation? 3. What are the weaknesses of the paper, especially regarding its scope and limitations? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the paper's experimental results and their implications?
Summary Of The Paper Review
Summary Of The Paper This work shows that, in the setting of few-shot classification, MAML is a (noisy) contrastive learner. Complementary theoretical and experimental results are provided. The theoretical results also lead to a (to my knowledge) new proposal, namely to zero out the linear layer after each MAML outer loop update. In their experiments, the authors show that making this small change to the MAML algorithm can lead to meaningful improvements in performance. Review I like the simplicity of the zeroing trick, and I found it useful to see it motivated via your theoretical arguments / perform well in your numerical experiments. In my view, Section 2.1 could have done a better job of motivating why being a supervised contrastive learner is so appealing. This seems critical given that the rest of the paper provides conditions and theoretical/numerical results supporting that MAML is a (noisy) contrastive learner under certain conditions. Your theoretical analysis focuses on a specific setting: few-shot classification tasks, softmax output, and a frozen convolutional encoder during the inner loop. To what extent do your findings generalize beyond these settings? I had been hoping that the discussion in Section 2.6 ("Generalization of our Analysis") would shed some light on this, but it only seemed to address one specific aspect of your analysis, namely the fact that you took Nbatch = Nstep = 1. Along the lines of the above, I think that the title and abstract are overly general (neither explicitly references your focus on few-shot classification problems, for example) given what you actually showed in the paper. Sorry if I missed this, but did you freeze the encoder in the inner loop for your simulations? If so, did you also evaluate what happens when you didn't freeze it? This would help to give some experimental indication of how general your theoretical findings might be.
ICLR
Title Scalable Sampling for Nonsymmetric Determinantal Point Processes Abstract A determinantal point process (DPP) on a collection of M items is a model, parameterized by a symmetric kernel matrix, that assigns a probability to every subset of those items. Recent work shows that removing the kernel symmetry constraint, yielding nonsymmetric DPPs (NDPPs), can lead to significant predictive performance gains for machine learning applications. However, existing work leaves open the question of scalable NDPP sampling. There is only one known DPP sampling algorithm, based on Cholesky decomposition, that can directly apply to NDPPs as well. Unfortunately, its runtime is cubic in M , and thus does not scale to large item collections. In this work, we first note that this algorithm can be transformed into a linear-time one for kernels with low-rank structure. Furthermore, we develop a scalable sublinear-time rejection sampling algorithm by constructing a novel proposal distribution. Additionally, we show that imposing certain structural constraints on the NDPP kernel enables us to bound the rejection rate in a way that depends only on the kernel rank. In our experiments we compare the speed of all of these samplers for a variety of real-world tasks. 1 INTRODUCTION A determinantal point process (DPP) on M items is a model, parameterized by a symmetric kernel matrix, that assigns a probability to every subset of those items. DPPs have been applied to a wide range of machine learning tasks, including stochastic gradient descent (SGD) (Zhang et al., 2017), reinforcement learning (Osogami & Raymond, 2019; Yang et al., 2020), text summarization (Dupuy & Bach, 2018), coresets (Tremblay et al., 2019), and more. However, a symmetric kernel can only capture negative correlations between items. Recent works (Brunel, 2018; Gartrell et al., 2019) have shown that using a nonsymmetric DPP (NDPP) allows modeling of positive correlations as well, which can lead to significant predictive performance gains. Gartrell et al. (2021) provides scalable NDPP kernel learning and MAP inference algorithms, but leaves open the question of scalable sampling. The only known sampling algorithm for NDPPs is the Cholesky-based approach described in Poulson (2019), which has a runtime of O(M3) and thus does not scale to large item collections. There is a rich body of work on efficient sampling algorithms for (symmetric) DPPs, including recent works such as Derezinski et al. (2019); Poulson (2019); Calandriello et al. (2020). Key distinctions between existing sampling algorithms include whether they are for exact or approximate sampling, whether they assume the DPP kernel has some low-rank K ≪ M , and whether they sample from the space of all 2M subsets or from the restricted space of size-k subsets, so-called k-DPPs. In the context of MAP inference, influential work, including Summa et al. (2014); Chen et al. (2018); Hassani et al. (2019); Ebrahimi et al. (2017); Indyk et al. (2020), proposed efficient algorithms that the approximate (sub)determinant maximization problem and provide rigorous guarantees. In this work we focus on exact sampling for low-rank kernels, and provide scalable algorithms for NDPPs. Our contributions are as follows, with runtime and memory details summarized in Table 1: • Linear-time sampling (Section 3): We show how to transform the O(M3) Choleskydecomposition-based sampler from Poulson (2019) into an O(MK2) sampler for rank-K kernels. 
• Sublinear-time sampling (Section 4): Using rejection sampling, we show how to leverage existing sublinear-time samplers for symmetric DPPs to implement a sublinear-time sampler for a subclass of NDPPs that we call orthogonal NDPPs (ONDPPs). • Learning with orthogonality constraints (Section 5): We show that the scalable NDPP kernel learning of Gartrell et al. (2021) can be slightly modified to impose an orthogonality constraint, yielding the ONDPP subclass. The constraint allows us to control the rejection sampling algorithm’s rejection rate, ensuring its scalability. Experiments suggest that the predictive performance of the kernels is not degraded by this change. For a common large-scale setting where M is 1 million, our sublinear-time sampler results in runtime that is hundreds of times faster than the linear-time sampler. In the same setting, our linear-time sampler provides runtime that is millions of times faster than the only previously known NDPP sampling algorithm, which has cubic time complexity and is thus impractical in this scenario. 2 BACKGROUND Notation. We use [M ] := {1, . . . ,M} to denote the set of items 1 through M . We use IK to denote the K-by-K identity matrix, and often write I := IM when the dimensionality should be clear from context. Given L ∈ RM×M , we use Li,j to denote the entry in the i-th row and j-th column, and LA,B ∈ R|A|×|B| for the submatrix formed by taking rows A and columns B. We also slightly abuse notation to denote principal submatrices with a single subscript, LA := LA,A. Kernels. As discussed earlier, both (symmetric) DPPs and NDPPs define a probability distribution over all 2M subsets of a ground set [M ]. The distribution is parameterized by a kernel matrix L ∈ RM×M and the probability of a subset Y ⊆ [M ] is defined to be Pr(Y ) ∝ det(LY ). For this to define a valid distribution, it must be the case that det(LY ) ≥ 0 for all Y . For symmetric DPPs, the non-negativity requirement is identical to a requirement that L be positive semi-definite (PSD). For nonsymmetric DPPs, there is no such simple correspondence, but prior work such as Gartrell et al. (2019; 2021) has focused on PSD matrices for simplicity. Normalizing and marginalizing. The normalizer of a DPP or NDPP distribution can also be written as a single determinant: ∑ Y⊆[M ] det(LY ) = det(L+ I) (Kulesza & Taskar, 2012, Theorem 2.1). Additionally, the marginal probability of a subset can be written as a determinant: Pr(A ⊆ Y ) = det(KA), for K := I − (L+ I)−1 (Kulesza & Taskar, 2012, Theorem 2.2)*, where K is typically called the marginal kernel. Intuition. The diagonal element Ki,i is the probability that item i is included in a set sampled from the model. The 2-by-2 determinant det(K{i,j}) = Ki,iKj,j −Ki,jKj,j is the probability that both i and j are included in the sample. A symmetric DPP has a symmetric marginal kernel, meaning Ki,j = Kj,i, and hence Ki,iKj,j −Ki,jKj,i ≤ Ki,iKj,j . This implies that the probability of including both i and j in the sampled set cannot be greater than the product of their individual inclusion probabilities. Hence, symmetric DPPs can only encode negative correlations. In contrast, NDPPs can have Ki,j and Kj,i with differing signs, allowing them to also capture positive correlations. 2.1 RELATED WORK Learning. Gartrell et al. (2021) proposes a low-rank kernel decomposition for NDPPs that admits linear-time learning. 
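Before turning to the low-rank decomposition discussed next, the intuition above can be checked numerically with two tiny, made-up 2-by-2 kernels (these specific matrices are illustrative and do not appear in the paper): the symmetric kernel can only make joint inclusion less likely than independent inclusion, while the nonsymmetric one can make it more likely.

```python
import numpy as np

def marginal_kernel(L):
    """K = I - (L + I)^{-1}: K[i, i] = Pr(i in Y), det(K_{i,j}) = Pr({i, j} in Y)."""
    return np.eye(L.shape[0]) - np.linalg.inv(L + np.eye(L.shape[0]))

L_sym = np.array([[1.0, 0.5], [0.5, 1.0]])    # symmetric DPP kernel
L_nsym = np.array([[1.0, 0.5], [-0.5, 1.0]])  # nonsymmetric kernel; all principal minors still >= 0

for name, L in [("symmetric", L_sym), ("nonsymmetric", L_nsym)]:
    K = marginal_kernel(L)
    p_both = np.linalg.det(K)       # Pr(both items in the sample)
    p_indep = K[0, 0] * K[1, 1]     # product of individual inclusion probabilities
    print(f"{name:13s} Pr(both) = {p_both:.3f}   Pr(1)*Pr(2) = {p_indep:.3f}")

# The symmetric kernel gives Pr(both) < Pr(1)*Pr(2) (negative correlation only),
# while the nonsymmetric kernel gives Pr(both) > Pr(1)*Pr(2) (positive correlation).
```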
The decomposition takes the form L := V V ⊤ + B(D − D⊤)B⊤ for *The proofs in Kulesza & Taskar (2012) typically assume a symmetric kernel, but this particular one does not rely on the symmetry. Algorithm 1 Cholesky-based NDPP sampling (Poulson, 2019, Algorithm 1) 1: procedure SAMPLECHOLESKY(K) ▷ marginal kernel factorization Z,W 2: Y ← ∅ Q←W 3: for i = 1 to M do 4: pi ←Ki,i pi ← z⊤i Qzi 5: u← uniform(0, 1) 6: if u ≤ pi then Y ← Y ∪ {i} 7: else pi ← pi − 1 8: KA ←KA − KA,iKi,Api for A := {i+ 1, . . . ,M} Q← Q− Qziz ⊤ i Q pi 9: return Y V ,B ∈ RM×K , and D ∈ RK×K . The V V ⊤ component is a rank-K symmetric matrix, which can model negative correlations between items. The B(D −D⊤)B⊤ component is a rank-K skewsymmetric matrix, which can model positive correlations between items. For compactness of notation, we will write L = ZXZ⊤, where Z = [ V B ] ∈ RM×2K , and X = [ IK 0 0 D−D⊤ ] ∈ R2K×2K . The marginal kernel in this case also has a rank-2K decomposition, as can be shown via application of the Woodbury matrix identity: K := I − (I +L)−1 = ZX ( I2K +Z ⊤ZX )−1 Z⊤. (1) Note that the matrix to be inverted can be computed from Z and X in O(MK2) time, and the inverse itself takes O(K3) time. Thus, K can be computed from L in time O(MK2). We will develop sampling algorithms for this decomposition, as well as an orthogonality-constrained version of it. We use W := X ( I2K +Z ⊤ZX )−1 in what follows so that we can compactly write K = ZWZ⊤. Sampling. While there are a number of exact sampling algorithms for DPPs with symmetric kernels, the only published algorithm that clearly can directly apply to NDPPs is from Poulson (2019) (see Theorem 2 therein). This algorithm begins with an empty set Y = ∅ and iterates through the M items, deciding for each whether or not to include it in Y based on all of the previous inclusion/exclusion decisions. Poulson (2019) shows, via the Cholesky decomposition, that the necessary conditional probabilities can be computed as follows: Pr (j ∈ Y | i ∈ Y ) = Pr({i, j} ⊆ Y ) Pr(i ∈ Y ) = Kj,j − (Kj,iKi,j) /Ki,i, (2) Pr (j ∈ Y | i /∈ Y ) = Pr(j ∈ Y )− Pr({i, j} ⊆ Y ) Pr(i /∈ Y ) = Kj,j − (Kj,iKi,j) / (Ki,i − 1) . (3) Algorithm 1 (left-hand side) gives pseudocode for this Cholesky-based sampling algorithm†. There has also been some recent work on approximate sampling for fixed-size k-NDPPs: Alimohammadi et al. (2021) provide a Markov chain Monte Carlo (MCMC) algorithm and prove that the overall runtime to approximate ε-close total variation distance is bounded by O(M2k3 log(1/(εPr(Y0))), where Pr(Y0) is probability of an initial state Y0. Improving this runtime is an interesting avenue for future work, but for this paper we focus on exact sampling. 3 LINEAR-TIME CHOLESKY-BASED SAMPLING In this section, we show that the O(M3) runtime of the Cholesky-based sampler from Poulson (2019) can be significantly improved when using the low-rank kernel decomposition of Gartrell et al. (2021). First, note that Line 8 of Algorithm 1, where all marginal probabilities are updated via an (M − i)-by-(M − i) matrix subtraction, is the most costly part of the algorithm, making overall time and memory complexities O(M3) and O(M2), respectively. However, when the DPP kernel is given by a low-rank decomposition, we observe that marginal probabilities can be updated by matrix-vector †Cholesky decomposition is defined only for a symmetric positive definite matrix. 
However, we use the term “Cholesky” from Poulson (2019) to maintain consistency with this work, although Algorithm 1 is valid for nonsymmetric matrices. Algorithm 2 Rejection NDPP sampling (Tree-based sampling) 1: procedure PREPROCESS(V ,B,D) 2: {(σj ,y2j−1,y2j)}K/2j=1 ← YOULADECOMPOSE(B,D)‡ 3: X̂ ← diag ( IK , σ1, σ1, . . . , σK/2, σK/2 ) 4: Z ← [V ,y1, . . . ,yK ] {(λi, zi)}2Ki=1 ← EIGENDECOMPOSE(ZX̂1/2) T ← CONSTRUCTTREE(M, [z1, . . . ,z2K ]⊤) 5: return Z, X̂ return T , {(λi, zi)}2Ki=1 6: procedure SAMPLEREJECT(V ,B,D,Z, X̂) ▷ tree T , eigen pair {(λi, zi)}2Ki=1 of ZX̂Z 7: while true do 8: Y ← SAMPLEDPP(ZX̂Z⊤) Y ← SAMPLEDPP(T , {(λi, zi)}2Ki=1) 9: u← uniform(0, 1) 10: p← det([V V ⊤+B(D−D⊤)B⊤]Y ) det([ZX̂Z⊤]Y ) 11: if u ≤ p then break 12: return Y multiplications of dimension 2K, regardless of M . In more detail, suppose we have the marginal kernel K = ZWZ⊤ as in Eq. (1) and let zj be the j-th row vector in Z. Then, for i ̸= j: Pr (j ∈ Y | i ∈ Y ) = Kj,j − (Kj,iKi,j)/Ki,i = z⊤j ( W − (Wzi)(z ⊤ i W ) z⊤i Wzi ) zj , (4) Pr (j ∈ Y | i /∈ Y ) = z⊤j ( W − (Wzi)(z ⊤ i W ) z⊤i Wzi − 1 ) zj . (5) The conditional probabilities in Eqs. (4) and (5) are of bilinear form, and the zj do not change during sampling. Hence, it is enough to update the 2K-by-2K inner matrix at each iteration, and obtain the marginal probability by multiplying this matrix by zi. The details are shown on the right-hand side of Algorithm 1. The overall time and memory complexities are O(MK2) and O(MK), respectively. 4 SUBLINEAR-TIME REJECTION SAMPLING Although the Cholesky-based sampler runs in time linear in M , even this is too expensive for the large M that are often encountered in real-world datasets. To improve runtime, we consider rejection sampling (Von Neumann, 1963). Let p be the target distribution that we aim to sample, and let q be any distribution whose support corresponds to that of p; we call q the proposal distribution. Assume that there is a universal constant U such that p(x) ≤ Uq(x) for all x. In this setting, rejection sampling draws a sample x from q and accepts it with probability p(x)/(Uq(x)), repeating until an acceptance occurs. The distribution of the resulting samples is p. It is important to choose a good proposal distribution q so that sampling is efficient and the number of rejections is small. 4.1 PROPOSAL DPP CONSTRUCTION Our first goal is to find a proposal DPP with symmetric kernel L̂ that can upper-bound all probabilities of samples from the NDPP with kernel L within a constant factor. To this end, we expand the determinant of a principal submatrix, det(LY ), using the spectral decomposition of the NDPP kernel. Such a decomposition essentially amounts to combining the eigendecomposition of the symmetric part of L with the Youla decomposition (Youla, 1961) of the skew-symmetric part. Specifically, suppose {(σj ,y2j−1,y2j)}K/2j=1 is the Youla decomposition of B(D −D⊤)B⊤ (see Appendix D for more details), that is, B(D −D⊤)B⊤ = K/2∑ j=1 σj ( y2j−1y ⊤ 2j − y2jy⊤2j−1 ) . (6) ‡Pseudo-code of YOULADECOMPOSE is provided in Algorithm 4. See Appendix D. Then we can simply write L = ZXZ⊤, for Z := [V ,y1, . . . ,yK ] ∈ RM×2K , and X := diag ( IK , [ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]) . (7) Now, consider defining a related but symmetric PSD kernel L̂ := ZX̂Z⊤ with X̂ := diag ( IK , σ1, σ1, . . . , σK/2, σK/2 ) . All determinants of the principal submatrices of L̂ = ZX̂Z⊤ upper-bound those of L, as stated below. Theorem 1. For every subset Y ⊆ [M ], it holds that det(LY ) ≤ det(L̂Y ). 
Moreover, equality holds when the size of Y is equal to the rank of L. Proof sketch: From the Cauchy-Binet formula, the determinants of LY and L̂Y for all Y ⊆ [M ], |Y | ≤ 2K can be represented as det(LY ) = ∑ I⊆[K],|I|=|Y | ∑ J⊆[K],|J|=|Y | det(XI,J) det(ZY,I) det(ZY,J), (8) det(L̂Y ) = ∑ I⊆[2K],|I|=|Y | det(X̂I) det(ZY,I) 2. (9) Many of the terms in Eq. (8) are actually zero due to the block-diagonal structure of X . For example, note that if 1 ∈ I but 1 /∈ J , then there is an all-zeros row in XI,J , making det(XI,J) = 0. We show that each XI,J with nonzero determinant is a block-diagonal matrix with diagonal entries among ±σj , or [ 0 σj −σj 0 ] . With this observation, we can prove that det(XI,J) is upper-bounded by det(X̂I) or det(X̂J). Then, through application of the rearrangement inequality, we can upper-bound the sum of the det(XI,J) det(ZY,I) det(ZY,J) in Eq. (8) with a sum over det(X̂I) det(ZY,I)2. Finally, we show that the number of non-zero terms in Eq. (8) is identical to the number of non-zero terms in Eq. (9). Combining these gives us the desired inequality det(LY ) ≤ det(L̂Y ). The full proof of Theorem 1 is in Appendix E.1. Now, recall that the normalizer of a DPP (or NDPP) with kernel L is det(L + I). The ratio of probability of the NDPP with kernel L to that of a DPP with kernel L̂ is thus: PrL(Y ) PrL̂(Y ) = det(LY )/det(L+ I) det(L̂Y )/det(L̂+ I) ≤ det(L̂+ I) det(L+ I) , where the inequality follows from Theorem 1. This gives us the necessary universal constant U upper-bounding the ratio of the target distribution to the proposal distribution. Hence, given a sample Y drawn from the DPP with kernel L̂, we can use acceptance probability PrL(Y )/(U PrL̂(Y )) = det(LY )/ det(L̂Y ). Pseudo-codes for proposal construction and rejection sampling are given in Algorithm 2. Note that to derive L̂ from L it suffices to run the Youla decomposition of B(D − D⊤)B⊤, because the difference is only in the skew-symmetric part. This decomposition can run in O(MK2) time; more details are provided in Appendix D. Since L̂ is a symmetric PSD matrix, we can apply existing fast DPP sampling algorithms to sample from it. In particular, in the next section we combine a fast tree-based method with rejection sampling. 4.2 SUBLINEAR-TIME TREE-BASED SAMPLING There are several DPP sampling algorithms that run in sublinear time, such as tree-based (Gillenwater et al., 2019) and intermediate (Derezinski et al., 2019) sampling algorithms. Here, we consider applying the former, a tree-based approach, to sample from the proposal distribution defined by L̂. We give some details of the sampling procedure, as in the course of applying it we discovered an optimization that slightly improves on the runtime of prior work. Formally, let {(λi, zi)}2Ki=1 be the eigendecomposition of L̂ and Z := [z1, . . . ,z2K ] ∈ RM×2K . As shown in Kulesza & Taskar (2012, Lemma 2.6), for every Y ⊆ [M ], |Y | ≤ 2K, the probability of Y under DPP with L̂ can be written: PrL̂(Y ) = det(L̂Y ) det(L̂+ I) = ∑ E⊆[2K],|E|=|Y | det(ZY,EZ ⊤ Y,E) ∏ i∈E λi λi + 1 ∏ i/∈E 1 λi + 1 . (10) Algorithm 3 Tree-based DPP sampling (Gillenwater et al., 2019) 1: procedure BRANCH(A,Z) 2: if A = {j} then 3: T .A← {j}, T .Σ← Z⊤j,:Zj,: 4: return T 5: Aℓ, Ar ← Split A in half 6: T .left← BRANCH(Aℓ,Z) 7: T .right← BRANCH(Ar,Z) 8: T .Σ← T .left.Σ+ T .right.Σ 9: return T 10: procedure CONSTRUCTTREE(M , Z) 11: return BRANCH([M ], Z) 12: procedure SAMPLEDPP(T ,Z, {λi}Ki=1) 13: E ← ∅, Y ← ∅, QY ← 0 14: for i = 1, . . . ,K do 15: E ← E ∪ {i} w.p. 
λi/(λi + 1) 16: for k = 1, . . . , |E| do 17: j ← SAMPLEITEM(T ,QY , E) 18: Y ← Y ∪ {j} 19: QY← I|E|−Z⊤Y,E ( ZY,EZ ⊤ Y,E )−1 ZY,E 20: return Y 21: procedure SAMPLEITEM(T ,QY , E) 22: if T is a leaf then return T .A 23: pℓ ← 〈 T .left.ΣE ,QY 〉 24: pr ← 〈 T .right.ΣE ,QY 〉 25: u← uniform(0, 1) 26: if u ≤ pℓpℓ+pr then 27: return SAMPLEITEM(T .left,QY , E) 28: else 29: return SAMPLEITEM(T .right,QY , E) A matrix of the form Z:,EZ⊤:,E can be a valid marginal kernel for a special type of DPP, called an elementary DPP. Hence, Eq. (10) can be thought of as DPP probabilities expressed as a mixture of elementary DPPs. Based on this mixture view, DPP sampling can be done in two steps: (1) choose an elementary DPP according to its mixture weight, and then (2) sample a subset from the selected elementary DPP. Step (1) can be performed by 2K independent random coin tossings, while step (2) involves computational overhead. The key idea of tree-based sampling is that step (2) can be accelerated by traversing a binary tree structure, which can be done in time logarithmic in M . More specifically, given the marginal kernel K = Z:,EZ⊤:,E , where E is obtained from step (1), we start from the empty set Y = ∅ and repeatedly add an item j to Y with probability: Pr(j ∈ S | Y ⊆ S) = Kj,j −Kj,Y (KY )−1KY,j = Zj,EQY Z⊤j,E = 〈 QY , (Z⊤j,:Zj,:)E 〉 , (11) where S is some final selected subset, and QY := I|E| − Z⊤Y,E ( ZY,EZ ⊤ Y,E )−1 ZY,E . Consider a binary tree whose root includes a ground set [M ]. Every non-leaf node contains a subset A ⊆ [M ] and stores a 2K-by-2K matrix ∑ j∈A Z ⊤ j,:Zj,:. A partition Aℓ and Ar, such that Aℓ∪Ar = A,Aℓ∩Ar = ∅, are passed to its left and right subtree, respectively. The resulting tree has M leaves and each has exactly a single item. Then, one can sample a single item by recursively moving down to the left node with probability: pℓ = ⟨QY ,∑j∈Aℓ(Z⊤j,:Zj,:)E⟩ ⟨QY ,∑j∈A(Zj,:Z⊤j,:)E⟩ , (12) or to the right node with probability 1− pℓ, until reaching a leaf node. An item in the leaf node is chosen with probability according to Eq. (11). Since every subset in the support of an elementary DPP with a rank-k kernel has exactly k items, this process is repeated for |E| iterations. Full descriptions of tree construction and sampling are provided in Algorithm 3. The proposed tree-based rejection sampling for an NDPP is outlined on the right-side of Algorithm 2. The one-time pre-processing step of constructing the tree (CONSTRUCTTREE) requires O(MK2) time. After pre-processing, the procedure SAMPLEDPP involves |E| traversals of a tree of depth O(logM), where in each node a O(|E|2) operation is required. The overall runtime is summarized in Proposition 1 and the proof can be found in Appendix E.2. Proposition 1. The tree-based sampling procedure SAMPLEDPP in Algorithm 3 runs in time O(K + k3 logM + k4), where k is the size of the sampled set§. §Computing pℓ via Eq. (12) improves on Gillenwater et al. (2019)’s O(k4 logM) runtime for this step. 4.3 AVERAGE NUMBER OF REJECTIONS We now return to rejection sampling and focus on the expected number of rejections. The number of rejections of Algorithm 2 is known to be a geometric random variable with mean equal to the constant U used to upper-bound the ratio of the target distribution to the proposal distribution: det(L̂+ I)/ det(L+ I). If all columns in V and B are orthogonal, which we denote V ⊥ B, then the expected number of rejections depends only on the eigenvalues of the skew-symmetric part of the NDPP kernel. Theorem 2. 
Given an NDPP kernel L = V V ⊤ + B(D −D⊤)B⊤ for V ,B ∈ RM×K ,D ∈ RK×K , consider the proposal kernel L̂ as proposed in Section 4.1. Let {σj}K/2j=1 be the positive eigenvalues obtained from the Youla decomposition of B(D−D⊤)B⊤. If V ⊥ B, then det(L̂+I)det(L+I) =∏K/2 j=1 ( 1 + 2σj σ2j+1 ) ≤ (1 + ω)K/2, where ω = 2K ∑K/2 j=1 2σj σ2j+1 ∈ (0, 1]. Proof sketch: Orthogonality between V and B allows det(L+ I) to be expressed just in terms of the eigenvalues of V V ⊤ and B(D −D⊤)B⊤. Since both L and L̂ share the symmetric part V V ⊤, the ratio of determinants only depends on the skew-symmetric part. A more formal proof appears in Appendix E.3. Assuming we have a kernel where V ⊥ B, we can combine Theorem 2 with the tree-based rejection sampling algorithm (right-side in Algorithm 2) to sample in time O((K+k3 logM+k4)(1+ω)K/2). Hence, we have a sampling algorithm that is sublinear in M , and can be much faster than the Choleskybased algorithm when (1 + ω)K/2 ≪M . In the next section, we introduce a learning scheme with the V ⊥ B constraint, as well as regularization to ensure that ω is small. 5 LEARNING WITH ORTHOGONALITY CONSTRAINTS We aim to learn a NDPP that provides both good predictive performance and a low rejection rate. We parameterize our NDPP kernel matrix L = V V ⊤ +B(D −D⊤)B⊤ by D = diag ([ 0 σ1 0 0 ] , . . . , [ 0 σK/2 0 0 ]) (13) for σj ≥ 0, B⊤B = I , and, motivated by Theorem 2, require V ⊤B = 0¶. We call such orthogonality-constrained NDPPs “ONDPPs”. Notice that if V ⊥ B, then L has the full rank of 2K, since the intersection of the column spaces spanned by V and by B is empty, and thus the full rank available for modeling can be used. Thus, this constraint can also be thought of as simply ensuring that ONDPPs use the full rank available to them. Given example subsets {Y1, . . . , Yn} as training data, learning is done by minimizing the regularized negative log-likelihood: min V ,B,{σj}K/2j=1 − 1 n n∑ i=1 log ( det(LYi) det(L+ I) ) + α M∑ i=1 ∥vi∥22 µi + β M∑ i=1 ∥bi∥22 µi + γ K/2∑ j=1 log ( 1 + 2σj σ2j + 1 ) (14) where α, β, γ > 0 are hyperparameters, µi is the frequency of item i in the training data, and vi and bi represent the rows of V and B, respectively. This objective is very similar to that of Gartrell et al. (2021), except for the orthogonality constraint and the final regularization term. Note that this regularization term corresponds exactly to the logarithm of the average rejection rate, and therefore should help to control the number of rejections. 6 EXPERIMENTS We first show that the orthogonality constraint from Section 5 does not degrade the predictive performance of learned kernels. We then compare the speed of our proposed sampling algorithms. ¶Technical details: To learn NDPP models with the constraint V ⊤B = 0, we project V according to: V ← V − B(B⊤B)−1(B⊤V ). For the B⊤B = I constraint, we apply QR decomposition on B. Note that both operations require O(MK2) time. (Constrained learning and sampling code is provided at https://github.com/insuhan/nonsymmetric-dpp-sampling. We use Pytorch’s linalg.solve to avoid the expense of explicitly computing the (B⊤B)−1 inverse.) Hence, our learning time complexity is identical to that of Gartrell et al. (2021). 6.1 PREDICTIVE PERFORMANCE RESULTS FOR NDPP LEARNING We benchmark various DPP models, including symmetric (Gartrell et al., 2017), nonsymmetric for scalable learning (Gartrell et al., 2021), as well as our ONDPP kernels with and without rejection rate regularization. 
We use the scalable NDPP models (Gartrell et al., 2021) as a baseline||. The kernel components of each model are learned using five real-world recommendation datasets, which have ground set sizes that range from 3,941 to 1,059,437 items (see Appendix A for more details). Our experimental setup and metrics mirror those of Gartrell et al. (2021). We report the mean percentile rank (MPR) metric for a next-item prediction task, the AUC metric for subset discrimination, and the log-likelihood of the test set; see Appendix B for more details on the experiments and metrics. For all metrics, higher numbers are better. For NDPP models, we additionally report the average rejection rates where rejection sampling applies. In Table 2, we observe that the predictive performance of our ONDPP models generally matches, and sometimes exceeds, the baseline. This is likely because the orthogonality constraint enables more effective use of the full rank-2K feature space. Moreover, imposing the regularization on the rejection rate, as shown in Eq. (14), often leads to dramatically smaller rejection rates, while the impact on predictive performance is generally marginal. These results justify the ONDPP model and its regularization for fast sampling. Finally, we observe that the learning time of our ONDPP models is typically a bit longer than that of the NDPP models, but still quite reasonable (e.g., the time per iteration on the Book dataset is 27 seconds for the NDPP and 49.7 seconds for our ONDPP). Fig. 1 shows how the regularizer γ affects the test log-likelihood and the average number of rejections. We see that γ degrades predictive performance and reduces the rejection rate when set above a certain threshold; this behavior is seen for many datasets. However, for the Recipe dataset we observed that the test log-likelihood is not very sensitive to γ, likely because all models in our experiments achieve very high performance on this dataset. In general, we observe that γ can be set to a value that results in a small rejection rate while having minimal impact on predictive performance. 6.2 SAMPLING TIME COMPARISON We benchmark the Cholesky-based sampling algorithm (Algorithm 1) and the tree-based rejection sampling algorithm (Algorithm 2) on ONDPPs with both synthetic and real-world data. ||We use the code from https://github.com/cgartrel/scalable-nonsymmetric-DPPs for the NDPP baseline, which is made available under the MIT license. To simplify learning and MAP inference, Gartrell et al. (2021) set B = V in their experiments. However, since we have the V ⊥ B constraint in our ONDPP approach, we cannot set B = V. Hence, for a fair comparison, we do not set B = V for the NDPP baseline in our experiments, and thus the results in Table 2 differ slightly from those published in Gartrell et al. (2021). Figure 1: Average number of rejections (a) and test log-likelihood (b) for different values of the regularizer γ (log-scale x-axes), for ONDPPs trained on the UK Retail dataset. Shaded regions are 95% confidence intervals over 10 independent trials.
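To make the orthogonality constraint and the rejection-rate regularizer used above concrete, here is a minimal PyTorch-style sketch of the projection step from Section 5 and the last term of Eq. (14). The function names, dimensions, and the direct use of the σj parameters from Eq. (13) are illustrative simplifications, not the authors' released implementation.

```python
import torch

def orthonormalize_B(B: torch.Tensor) -> torch.Tensor:
    """Enforce B^T B = I via a QR decomposition."""
    Q, _ = torch.linalg.qr(B)
    return Q

def project_V_orthogonal_to_B(V: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Enforce V ⊥ B by removing V's component in B's column space:
    V <- V - B (B^T B)^{-1} (B^T V), an O(MK^2) operation."""
    return V - B @ torch.linalg.solve(B.T @ B, B.T @ V)

def rejection_rate_regularizer(sigma: torch.Tensor) -> torch.Tensor:
    """Last term of Eq. (14): sum_j log(1 + 2*sigma_j / (sigma_j^2 + 1)),
    i.e. the log of the expected number of rejections under Theorem 2."""
    return torch.log1p(2 * sigma / (sigma ** 2 + 1)).sum()

# Hypothetical parameters: M = 1000 items, rank K = 4.
M, K = 1000, 4
B = orthonormalize_B(torch.randn(M, K))
V = project_V_orthogonal_to_B(torch.randn(M, K), B)
sigma = torch.rand(K // 2)  # the sigma_j >= 0 that parameterize D in Eq. (13)

print((V.T @ B).abs().max().item())              # ~0, so the V ⊥ B constraint holds
print(rejection_rate_regularizer(sigma).item())  # value that is weighted by gamma in Eq. (14)
```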
Figure 2: Wall-clock time (sec) for synthetic data for (a) NDPP sampling algorithms (Cholesky-based vs. rejection) and (b) preprocessing steps for the rejection sampling (tree construction and spectral decomposition), as a function of the ground set size M from $2^{12}$ to $2^{20}$ (log-scale axes). Shaded regions are 95% confidence intervals from 100 independent trials. Synthetic datasets. We generate non-uniform random features for V, B as done by Han & Gillenwater (2020). In particular, we first sample $x_1, \ldots, x_{100}$ from $\mathcal{N}(0, I_{2K}/(2K))$, and integers $t_1, \ldots, t_{100}$ from a Poisson distribution with mean 5, rescaling the integers such that $\sum_i t_i = M$. Next, we draw $t_i$ random vectors from $\mathcal{N}(x_i, I_{2K})$, and assign the first K dimensions as the row vectors of V and the latter K dimensions as those of B. Each entry of D is sampled from $\mathcal{N}(0, 1)$. We choose K = 100 and vary M from $2^{12}$ to $2^{20}$. Fig. 2(a) illustrates the runtimes of Algorithms 1 and 2. We verify that the rejection sampling time tends to increase sub-linearly with the ground set size M, while the Cholesky-based sampler runs in linear time. In Fig. 2(b), the runtimes of the preprocessing steps for Algorithm 2 (i.e., spectral decomposition and tree construction) are reported. Although the rejection sampler requires these additional steps, they are one-time costs and run much faster than a single run of the Cholesky-based method for $M = 2^{20}$. Real-world datasets. In Table 3, we report the runtimes and speedups of NDPP sampling algorithms for real-world datasets. All NDPP kernels are obtained using learning with orthogonality constraints, with rejection rate regularization as reported in Section 6.1. We observe that the tree-based rejection sampling runs up to 246 times faster than the Cholesky-based algorithm. For larger datasets, we expect this gap to increase significantly. As with the synthetic experiments, we see that the tree construction pre-processing time is comparable to the time required to draw a single sample via the other methods, and thus the tree-based method is often the best choice for repeated sampling**. 7 CONCLUSION In this work we developed scalable sampling methods for NDPPs. One limitation of our rejection sampler is its practical restriction to the ONDPP subclass. Other opportunities for future work include the extension of our rejection sampling approach to the generation of fixed-size samples (from k-NDPPs), the development of approximate sampling techniques, and the extension of DPP samplers along the lines of Derezinski et al. (2019); Calandriello et al. (2020) to NDPPs. Scalable sampling also opens the door to using NDPPs as building blocks in probabilistic models. **We note that the tree can consume substantial memory, e.g., 169.5 GB for the Book dataset with K = 100. For settings where this scale of memory use is unacceptable, we suggest use of the intermediate sampling algorithm (Calandriello et al., 2020) in place of tree-based sampling. The resulting sampling algorithm may be slower, but the O(M + K) memory cost is substantially lower. 8 ETHICS STATEMENT In general, our work moves in a positive direction by substantially decreasing the computational costs of NDPP sampling.
When using our constrained learning method to learn kernels from user data, we recommend employing a technique such as differentially-private SGD (Abadi et al., 2016) to help prevent user data leaks, and adjusting the weights on training examples to balance the impact of sub-groups of users so as to make the final kernel as fair as possible. As far as we are aware, the datasets used in this work do not contain personally identifiable information or offensive content. We were not able to determine if user consent was explicitly obtained by the organizations that constructed these datasets. 9 REPRODUCIBILITY STATEMENT We have made extensive effort to ensure that all algorithmic, theoretical, and experimental contributions described in this work are reproducible. All of the code implementing our constrained learning and sampling algorithms is publicly available ††. The proofs for our theoretical contributions are available in Appendix E. For our experiments, all dataset processing steps, experimental procedures, and hyperparameter settings are described in Appendices A, B, and C, respectively. 10 ACKNOWLEDGEMENTS Amin Karbasi acknowledges funding in direct support of this work from NSF (IIS-1845032) and ONR (N00014-19-1-2406). A FULL DETAILS ON DATASETS We perform experiments on several real-world public datasets composed of subsets: • UK Retail: This dataset (Chen et al., 2012) contains baskets representing transactions from an online retail company that sells all-occasion gifts. We omit baskets with more than 100 items, leaving us with a dataset containing 19,762 baskets drawn from a catalog of M = 3,941 products. Baskets containing more than 100 items are in the long tail of the basket-size distribution, so omitting these is reasonable, and allows us to use a low-rank factorization of the NDPP with K = 100. • Recipe: This dataset (Majumder et al., 2019) contains recipes and food reviews from Food.com (formerly Genius Kitchen)‡‡. Each recipe (“basket”) is composed of a collection of ingredients, resulting in 178,265 recipes and a catalog of 7,993 ingredients. • Instacart: This dataset (Instacart, 2017) contains baskets purchased by Instacart users§§. We omit baskets with more than 100 items, resulting in 3.2 million baskets and a catalog of 49,677 products. • Million Song: This dataset (McFee et al., 2012) contains playlists (“baskets”) of songs from Echo Nest users¶¶. We trim playlists with more than 100 items, leaving 968,674 playlists and a catalog of 371,410 songs. • Book: This dataset (Wan & McAuley, 2018) contains reviews from the Goodreads book review website, including a variety of attributes describing the items***. For each user we build a subset (“basket”) containing the books reviewed by that user. We trim subsets with more than 100 books, resulting in 430,563 subsets and a catalog of 1,059,437 books. As far as we are aware, these datasets do not contain personally identifiable information or offensive content. While the UK Retail dataset is publicly available, we were unable to find a license for it. Also, we were not able to determine if user consent was explicitly obtained by the organizations that constructed these datasets. B FULL DETAILS ON EXPERIMENTAL SETUP AND METRICS We use 300 randomly-selected baskets as a held-out validation set, for tracking convergence during training and for tuning hyperparameters. Another 2000 random baskets are used for testing, and the rest are used for training. 
Convergence is reached during training when the relative change in validation log-likelihood is below a predetermined threshold. We use PyTorch with Adam (Kingma & Ba, 2015) for optimization. We initialize D from the standard Gaussian distribution N (0, 1), while V and B are initialized from the uniform(0, 1) distribution. Subset expansion task. We use greedy conditioning to do next-item prediction (Gartrell et al., 2021, Section 4.2). We compare methods using a standard recommender system metric: mean percentile rank (MPR) (Hu et al., 2008; Li et al., 2010). MPR of 50 is equivalent to random selection; MPR of 100 means that the model perfectly predicts the next item. See Appendix B.1 for a complete description of the MPR metric. Subset discrimination task. We also test the ability of a model to discriminate observed subsets from randomly generated ones. For each subset in the test set, we generate a subset of the same length by drawing items uniformly at random (and we ensure that the same item is not drawn more than once for a subset). We compute the AUC for the model on these observed and random subsets, where the score for each subset is the log-likelihood that the model assigns to the subset. ‡‡See https://www.kaggle.com/shuyangli94/food-com-recipes-and-user-interactions for the license for this public dataset. §§This public dataset is available for non-commercial use; see https://www.instacart.com/datasets/ grocery-shopping-2017 for the license. ¶¶See http://millionsongdataset.com/faq/ for the license for this public dataset. ***This public dataset is available for academic use only; see https://sites.google.com/eng.ucsd.edu/ ucsdbookgraph/home for the license. B.1 MEAN PERCENTILE RANK We begin our definition of MPR by defining percentile rank (PR). First, given a set J , let pi,J = Pr(J ∪ {i} | J). The percentile rank of an item i given a set J is defined as PRi,J = ∑ i′ ̸∈J 1(pi,J ≥ pi′,J) |Y\J | × 100% where Y\J indicates those elements in the ground set Y that are not found in J . For our evaluation, given a test set Y , we select a random element i ∈ Y and compute PRi,Y \{i}. We then average over the set of all test instances T to compute the mean percentile rank (MPR): MPR = 1 |T | ∑ Y ∈T PRi,Y \{i}. C HYPERPARAMETERS FOR EXPERIMENTS Preventing numerical instabilities: The det(LYi) in Eq. (14) will be zero whenever |Yi| > K, where Yi is an observed subset. To address this in practice we set K to the size of the largest subset observed in the data, K ′, as in Gartrell et al. (2017). However, this does not entirely fix the issue, as there is still a chance that the term will be zero even when |Yi| ≤ K. In this case though, we know that we are not at a maximum, since the value of the objective function is −∞. Numerically, to prevent such singularities, in our implementation we add a small ϵI correction to each LYi when optimizing Eq. (14) (ϵ = 10−5 in our experiments). We perform a grid search using a held-out validation set to select the best-performing hyperparameters for each model and dataset. The hyperparameter settings used for each model and dataset are described below. Symmetric low-rank DPP (Gartrell et al., 2017). For this model, we use K for the number of item feature dimensions for the symmetric component V , and α for the regularization hyperparameter for V . We use the following hyperparameter settings: • UK Retail dataset: K = 100, α = 1. • Recipe dataset: K = 100, α = 0.01 • Instacart dataset: K = 100, α = 0.001. • Million Song dataset: K = 100, α = 0.0001. 
• Book dataset: K = 100, α = 0.001 Scalable NDPP (Gartrell et al., 2021). As described in Section 2.1, we use K to denote the number of item feature dimensions for the symmetric component V and the dimensionality of the nonsymmetric component D. α and β are the regularization hyperparameters. We use the following hyperparameter settings: • UK dataset: K = 100, α = 0.01. • Recipe dataset: K = 100, α = β = 0.01. • Instacart dataset: K = 100, α = 0.001. • Million Song dataset: K = 100, α = 0.01. • Book dataset: K = 100, α = β = 0.1 ONDPP. As described in Section 5, we use K to denote the number of item feature dimensions for the symmetric component V and the dimensionality of the nonsymmetric component C. α, β, and γ are the regularization hyperparameters. We use the following hyperparameter settings: • UK dataset: K = 100, α = β = 0.01, γ = 0.5. • Recipe dataset: K = 100, α = β = 0.01, γ = 0.1. • Instacart dataset: K = 100, α = β = 0.001, γ = 0.001. • Million Song dataset: K = 100, α = β = 0.01, γ = 0.2. • Book dataset: K = 100, α = β = 0.01, γ = 0.1. For all of the above model configurations and datasets, we use a batch size of 800 during training. D YOULA DECOMPOSITION: SPECTRAL DECOMPOSITION FOR SKEW-SYMMETRIC MATRIX We provide some basic facts on the spectral decomposition of a skew-symmetric matrix, and introduce an efficient algorithm for this decomposition when it is given by a low-rank factorization. We write i := √ −1 and vH as the conjugate transpose of v ∈ CM , and denote Re(z) and Im(z) by the real and imaginary parts of a complex number z, respectively. Given B ∈ RM×K and D ∈ RK×K , consider a rank-K skew-symmetric matrix B(D −D⊤)B⊤. Note that all nonzero eigenvalues of a real-valued skew-symmetric matrix are purely imaginary. Denote iσ1,−iσ1, . . . , iσK/2,−iσK/2 by its nonzero eigenvalues where each of σj is real, and a1 + ib1,a1 − ib1, . . .aK/2 + ibK/2,aK/2 − ibK/2 by the corresponding eigenvectors for aj , bj ∈ RM , which come in conjugate pairs. Then, we can write B(D −D⊤)B⊤ = K/2∑ j=1 iσj(aj + ibj)(aj + ibj) H − iσj(aj − ibj)(aj − ibj)H (15) = K/2∑ j=1 2σj(ajb ⊤ j − bja⊤j ) (16) = K/2∑ j=1 [ aj − bj aj + bj ] [ 0 σj −σj 0 ] [ a⊤j − b⊤j a⊤j + b ⊤ j ] . (17) Note that a1 ± b1, . . . ,aK/2 ± bK/2 are real-valued orthonormal vectors, because a1, b1, . . . ,aK/2, bK/2 are orthogonal to each other and ∥aj ± bj∥22 = ∥aj∥ 2 2 + ∥bj∥ 2 2 = 1 for all j. The pair {(σj ,aj − bj ,aj + bj)}K/2j=1 is often called the Youla decomposition (Youla, 1961) of B(D −D⊤)B⊤. To efficiently compute the Youla decomposition of a rank-K matrix, we use the following result. Proposition 2 (Proposition 1, Nakatsukasa (2019)). Given A,B ∈ CM×K , the nonzero eigenvalues of AB⊤ ∈ CM×M and B⊤A ∈ CK×K are identical. In addition, if (λ,v) is an eigenpair of B⊤A with λ ̸= 0, then (λ,Av/ ∥Av∥2) is an eigenpair of AB⊤. From the above proposition, one can first compute (D −D⊤)B⊤B and then apply the eigendecomposition to that K-by-K matrix. Taking the imaginary part of the obtained eigenvalues gives us the σj’s, and multiplying B by the eigenvectors gives us the eigenvectors of B(D −D⊤)B⊤. In addition, this can be done in O(MK2 +K3) time; when M > K it runs much faster than the eigendecomposition of B(D −D⊤)B⊤, which requires O(M3) time. The pseudo-code of the Youla decomposition is provided in Algorithm 4. Algorithm 4 Youla decomposition of low-rank skew-symmetric matrix 1: procedure YOULADECOMPOSITION(B,D) 2: {(ηj , zj), (ηj , zj)}K/2j=1 ← eigendecomposition of (D −D⊤)B⊤B 3: for j = 1, . . . 
,K/2 do 4: σj ← Im(ηj) for j = 1, . . . ,K/2 5: y2j−1 ← B (Re(zj)− Im(zj)) 6: y2j ← B (Re(zj) + Im(zj)) 7: yj ← yj/ ∥yj∥ for j = 1, . . . ,K 8: return {(σj ,y2j−1,y2j)}K/2j=1 E PROOFS E.1 PROOF OF THEOREM 1 Theorem 1. For every subset Y ⊆ [M ], it holds that det(LY ) ≤ det(L̂Y ). Moreover, equality holds when the size of Y is equal to the rank of L. Proof of Theorem 1. It is enough to fix Y ⊆ [M ] such that 1 ≤ |Y | ≤ 2K, because the rank of both L and L̂ is up to 2K. Denote k := |Y | and ( [2K] k ) := {I ⊆ [2K]; |I| = k} for k ≤ 2K. We recall the definition of L̂: given V ,B,D such that L = V V ⊤ + B(D −D⊤)B⊤, let {(ρi,vi)}Ki=1 be the eigendecomposition of V V ⊤ and {(σj ,y2j−1,y2j)}K/2j=1 be the Youla decomposition of B(D −D⊤)B⊤. Denote Z := [v1, . . . ,vK ,y1, . . . ,yK ] ∈ RM×2K and X := diag ( ρ, . . . , ρK , [ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]) , X̂ := diag ( ρ1, . . . , ρK , [ σ1 0 0 σ1 ] , . . . , [ σK/2 0 0 σK/2 ]) , so that L = ZXZ⊤ and L̂ = ZX̂Z⊤. Applying the Cauchy-Binet formula twice, we can write the determinant of the principal submatrices of both L and L̂: det(LY ) = ∑ I∈([2K]k ) ∑ J∈([2K]k ) det(XI,J) det(ZY,I) det(ZY,J), (18) det(L̂Y ) = ∑ I∈([2K]k ) ∑ J∈([2K]k ) det(X̂I,J) det(ZY,I) det(ZY,J) = ∑ I∈([2K]k ) det(X̂I) det(ZY,I) 2, (19) where Eq. (19) follows from the fact that X̂ is diagonal, which means that det(X̂I,J) = 0 for I ̸= J . When the size of Y is equal to the rank of L (i.e., k = 2K), the summations in Eqs. (18) and (19) simplify to single terms: det(LY ) = det(X) det(ZY,:)2 and det(L̂Y ) = det(X̂) det(ZY,:)2. Now, observe that the determinants of the full X and X̂ matrices are identical: det(X) = det(X̂) =∏K i=1 ρi ∏K/2 j=1 σ 2 j . Hence, it holds that det(LY ) = det(L̂Y ). This proves the second statement of the theorem. To prove that det(LY ) ≤ det(L̂Y ) for smaller subsets Y , we will use the following: Claim 1. For every I, J ∈ ( [2K] k ) such that det(XI,J) ̸= 0, there exists a (nonempty) collection of subset pairs S(I, J) ⊆ ( [2K] k ) × ( [2K] k ) such that∑ (I′,J′)∈S(I,J) det(XI,J) det(ZY,I) det(ZY,J) ≤ ∑ (I′,J′)∈S(I,J) det(X̂I,I) det(ZY,I) 2. (20) Claim 2. The number of nonzero terms in Eq. (18) is identical to that in Eq. (19). Combining Claim 1 with Claim 2 yields det(LY ) = ∑ I,J∈([2K]k ) det(XI,J) det(ZY,I) det(ZY,J) ≤ ∑ I∈([2K]k ) det(X̂I,I) det(ZY,I) 2 = det(L̂Y ). We conclude the proof of Theorem 1. Below we provide proofs for Claim 1 and Claim 2. Proof of Claim 1. Recall that X is a block-diagonal matrix, where each block is of size either 1-by-1, containing ρi, or 2-by-2, containing both σj and −σj in the form [ 0 σj −σj 0 ] . A submatrix XI,J ∈ Rk×k with rows I and columns J will only have a nonzero determinant if it contains no all-zero row or column. Hence, any XI,J with nonzero determinant will have the following form (or some permutation of this block-diagonal): XI,J = ρp1 · · · 0 ... . . . ... 0 0 . . . ρp|PI,J | ±σq1 · · · 0 ... . . . ... 0 · · · ±σq|QI,J | 0 σr1 −σr1 0 . . . 0 0 σr|RI,J | −σr|RI,J | 0 (21) and we denote P I,J := {p1, . . . , p|P I,J |}, QI,J := {q1, . . . , q|QI,J |}, and RI,J := {r1, . . . , r|RI,J |}. Indices p ∈ P I,J yield a diagonal matrix with entries ρp. For such p, both I and J must contain index p. Indices r ∈ RI,J yield a block-diagonal matrix of the form [ 0 σr −σr 0 ] . For such r, both I and J must contain a pair of indices, (K + 2r − 1,K + 2r). Finally, indices q ∈ QI,J yield a diagonal matrix with entries of ±σq (the sign can be + or −). 
For such q, I contains K + 2q − 1 or K + 2q, and J must contain the other. Note that there is no intersection between QI,J and RI,J . If QI,J is an empty set (i.e., I = J), then det(XI,J) = det(X̂I,J) and det(XI,J) det(ZY,I) det(ZY,J) = det(X̂I) det(ZY,I) 2. (22) Thus, the terms in Eq. (18) in this case appear in Eq. (19). Now assume that QI,J ̸= ∅ and consider the following set of pairs: S(I, J) := {(I ′, J ′) : P I,J = P I′,J′ , QI,J = QI′,J′ , RI,J = RI′,J′}. In other words, for (I ′, J ′) ∈ S(I, J), the diagonal XI′,J′ contains ρp, [ 0 σr −σr 0 ] exactly as in XI,J . However, the signs of the σr’s may differ from XI,J . Combining this observation with the definition of X̂ , |det(XI′,J′)| = |det(XI,J)| = det(X̂I) = det(X̂I′) = det(X̂J) = det(X̂J′). (23) Therefore, ∑ (I′,J′)∈S(I,J) det(XI′,J′) det(ZY,I′) det(ZY,J′) (24) ≤ ∑ (I′,J′)∈S(I,J) |det(XI′,J′)|det(ZY,I′) det(ZY,J′) (25) = det(X̂I) ∑ (I′,J′)∈S(I,J) det(ZY,I′) det(ZY,J′) (26) ≤ det(X̂I) ∑ (I′,∗)∈S(I,J) det(ZY,I′) 2 (27) = ∑ (I′,∗)∈S(I,J) det(X̂I′) det(ZY,I′) 2 (28) where the third line comes from Eq. (23) and the fourth line follows from the rearrangement inequality. Note that application of this inequality does not change the number of terms in the sum. This completes the proof of Claim 1. Proof of Claim 2. In Eq. (19), observe that det(X̂I) det(ZY,I)2 ̸= 0 if and only if det(X̂I) ̸= 0. Since all ρi’s and σj’s are positive, the number of I ⊆ [2K], |I| = k such that det(X̂I) ̸= 0 is equal to ( 2K k ) . Similarly, the number of nonzero terms in Eq. (18) equals the number of possible choices of I, J ∈ ( [2K] k ) such that det(XI,J) ̸= 0. This can be counted as follows: first choose i items in {ρ1, . . . , ρK} for i = 0, . . . , k; then, choose j items in {[ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]} for j = 0, . . . , ⌊k−i2 ⌋; lastly, choose k − i − 2j of {±σq; q /∈ RI,J}, then choose the sign for each of these (σq or −σq). Combining all of these choices, the total number of nonzero terms is: k∑ i=0 ( K i ) ︸ ︷︷ ︸ choice of ρp ⌊ k−i2 ⌋∑ j=0 ( K/2 j ) ︸ ︷︷ ︸ choice of [ 0 σr −σr 0 ] ( K/2− j k − i− 2j ) 2k−i−2j︸ ︷︷ ︸ choice of ±σq (29) = k∑ i=0 ( K i ) ( K k − i ) (30) = ( 2K k ) (31) where the second line comes from the fact that ( 2n m ) = ∑⌊m2 ⌋ j=0 ( n j )( n−j m−2j ) 2m−2j for any integers n,m ∈ N such that m ≤ 2n (see (1.69) in Quaintance (2010)), and the third line follows from the fact that ∑r i=0 ( m i )( n r−i ) = ( n+m r ) for n,m, r ∈ N (Vandermonde’s identity). Hence, both the number of nonzero terms in Eqs. (18) and (19) is equal to ( 2K k ) . This completes the proof of Claim 2. E.2 PROOF OF PROPOSITION 1 Proposition 1. The tree-based sampling procedure SAMPLEDPP in Algorithm 3 runs in time O(K + k3 logM + k4), where k is the size of the sampled set†††. Proof of Proposition 1. Since computing pℓ takes O(k2) from Eq. (12), and since the binary tree has depth O(logM), SAMPLEITEM in Algorithm 3 runs in O(k2 logM) time. Moreover, the query matrix QY can be updated in O(k3) time as it only requires a k-by-k matrix inversion. Therefore, the overall runtime of the tree-based elementary DPP sampling algorithm (after pre-processing) is O(k3 logM + k4). This improves the previous O(k4 logM) runtime studied in Gillenwater et al. (2019). Combining this with elementary DPP selection (Line 15 in Algorithm 3), we can sample a set in O(K + k3 logM + k4) time. This completes the proof of Proposition 1. E.3 PROOF OF THEOREM 2 Theorem 2. 
Given an NDPP kernel $L = VV^\top + B(D - D^\top)B^\top$ for $V, B \in \mathbb{R}^{M \times K}$, $D \in \mathbb{R}^{K \times K}$, consider the proposal kernel $\hat{L}$ as proposed in Section 4.1. Let $\{\sigma_j\}_{j=1}^{K/2}$ be the positive eigenvalues obtained from the Youla decomposition of $B(D - D^\top)B^\top$. If $V \perp B$, then $\frac{\det(\hat{L} + I)}{\det(L + I)} = \prod_{j=1}^{K/2} \left(1 + \frac{2\sigma_j}{\sigma_j^2 + 1}\right) \leq (1 + \omega)^{K/2}$, where $\omega = \frac{2}{K} \sum_{j=1}^{K/2} \frac{2\sigma_j}{\sigma_j^2 + 1} \in (0, 1]$. Proof of Theorem 2. Since the column spaces of $V$ and $B$ are orthogonal, the corresponding eigenvectors are also orthogonal, i.e., $Z^\top Z = I_{2K}$. Then,
$$\det(L + I) = \det(ZXZ^\top + I) = \det(XZ^\top Z + I_{2K}) = \det(X + I_{2K}) \quad (32)$$
$$= \prod_{i=1}^{K} (\rho_i + 1) \prod_{j=1}^{K/2} \det\!\begin{bmatrix} 1 & \sigma_j \\ -\sigma_j & 1 \end{bmatrix} \quad (33)$$
$$= \prod_{i=1}^{K} (\rho_i + 1) \prod_{j=1}^{K/2} (\sigma_j^2 + 1), \quad (34)$$
and similarly
$$\det(\hat{L} + I) = \prod_{i=1}^{K} (\rho_i + 1) \prod_{j=1}^{K/2} (\sigma_j + 1)^2. \quad (35)$$
Combining Eqs. (34) and (35), we have that
$$\frac{\det(\hat{L} + I)}{\det(L + I)} = \prod_{j=1}^{K/2} \frac{(\sigma_j + 1)^2}{\sigma_j^2 + 1} = \prod_{j=1}^{K/2} \left(1 + \frac{2\sigma_j}{\sigma_j^2 + 1}\right) \leq \left(1 + \frac{2}{K} \sum_{j=1}^{K/2} \frac{2\sigma_j}{\sigma_j^2 + 1}\right)^{K/2}, \quad (36)$$
where the inequality follows from Jensen's inequality. This completes the proof of Theorem 2. †††Computing $p_\ell$ via Eq. (12) improves on Gillenwater et al. (2019)'s $O(k^4 \log M)$ runtime for this step.
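As a quick numerical sanity check of Theorem 2 (not part of the original paper), the sketch below builds a small random ONDPP with $V \perp B$, extracts the $\sigma_j$ from the eigenvalues of $(D - D^\top)B^\top B$ as in Proposition 2 / Algorithm 4, forms the proposal kernel $\hat{L}$ of Section 4.1, and checks that $\det(\hat{L} + I)/\det(L + I)$ matches the product formula. Dimensions and seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 50, 4

# Random low-rank factors with B^T B = I and V ⊥ B.
B, _ = np.linalg.qr(rng.standard_normal((M, K)))
V = rng.standard_normal((M, K))
V = V - B @ (B.T @ V)                       # project out B's column space
D = rng.standard_normal((K, K))

S = B @ (D - D.T) @ B.T                     # skew-symmetric part
L = V @ V.T + S                             # NDPP kernel L = VV^T + B(D - D^T)B^T

# Proposition 2 / Algorithm 4: eigenpairs of the K-by-K matrix (D - D^T)B^T B give the
# eigenpairs of S after mapping eigenvectors through B; the eigenvalues are ±i*sigma_j.
eigvals, eigvecs = np.linalg.eig((D - D.T) @ (B.T @ B))
sigma = np.sort(np.abs(eigvals.imag))[::2]  # one sigma_j per conjugate pair

# Proposal kernel L_hat: same symmetric part, skew part replaced by sum_i |Im(lambda_i)| y_i y_i^H,
# which coincides with the Z diag(sigma_j, sigma_j, ...) Z^T construction of Section 4.1.
Y = B @ eigvecs                             # unit-norm (complex) eigenvectors of S
L_hat = (V @ V.T + (Y * np.abs(eigvals.imag)) @ Y.conj().T).real

ratio = np.linalg.det(L_hat + np.eye(M)) / np.linalg.det(L + np.eye(M))
formula = np.prod(1 + 2 * sigma / (sigma ** 2 + 1))
print(ratio, formula)                       # the two values agree, as Theorem 2 states
```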
1. What is the focus of the paper regarding non-symmetrical Determinantal Point Processes (NDPPs)? 2. What are the two main contributions of the paper? 3. How does the rejection-sampling-based method compare to the linear-time Cholesky-decomposition-based sampler in terms of efficiency and scalability? 4. What are some potential applications of NDPPs that the paper's results could enable? 5. Are there any limitations or areas for improvement in the proposed methods or experimental design?
Summary Of The Paper Review
Summary Of The Paper This paper studies the problem of sampling from nonsymmetric Determinantal Point Processes (NDPPs); in particular, it focuses on exact sampling for low-rank NDPPs. The main contributions of this paper are: (1) it adapts the Cholesky-decomposition-based sampler for DPPs into a linear-time sampler for low-rank NDPPs (rank of the NDPP << size of the ground set); (2) it uses rejection sampling to implement a "sub-linear-time" sampler (assuming the number of rejections is bounded by a small constant) for a subclass of NDPPs called orthogonal NDPPs (ONDPPs), by leveraging an existing sub-linear-time sampler for DPPs; and (3) it shows empirically that, in terms of modeling real-world datasets, ONDPPs are as effective as general NDPPs and, more importantly, that ONDPPs can be learned efficiently in a way that keeps the number of rejections small during rejection sampling. Review This paper is overall well-written and easy to understand, and both the theoretical and empirical results presented in this paper are very exciting. Compared to the linear-time Cholesky-decomposition-based sampler, the rejection-sampling-based method is definitely the main focus of the paper. Though the rejection-sampling method is not strictly sub-linear-time in general, the theoretical result presented in this paper (Theorem 2) directly implies that in practice, we can easily enable efficient sampling by bounding the expected number of rejections when learning NDPPs (ONDPPs); more importantly, the authors conducted comprehensive experiments to show that imposing the constraints needed for efficient sampling (orthogonality and the extra regularization term) during learning does not sacrifice the expressive power of NDPPs. Experiments also show that, compared to the linear-time sampling algorithm, the rejection-sampling method scales much better on both synthetic and real-world datasets. NDPPs are a strictly more expressive class than DPPs, but the absence of efficient sampling algorithms has been a major barrier to replacing DPPs with NDPPs in large-scale applications. By proposing an efficient "sub-linear-time" sampling algorithm for NDPPs, this paper makes substantial progress in scaling up NDPPs and opens up new avenues for applying NDPPs to various real-world scenarios. Cons: Compared to the rejection-sampling method, the linear-time algorithm is not as interesting or novel, and it deviates somewhat from the main story. The authors might want to consider presenting the linear-time algorithm without getting into the technical details. Of course, the experiments that compare the rejection-sampling method against the linear-time algorithm are still important. Section 4.2 is a little difficult to follow for those who are not familiar with the tree-based sampling algorithm for DPPs. The authors might want to expand this section a little bit.
ICLR
Title Scalable Sampling for Nonsymmetric Determinantal Point Processes Abstract A determinantal point process (DPP) on a collection of M items is a model, parameterized by a symmetric kernel matrix, that assigns a probability to every subset of those items. Recent work shows that removing the kernel symmetry constraint, yielding nonsymmetric DPPs (NDPPs), can lead to significant predictive performance gains for machine learning applications. However, existing work leaves open the question of scalable NDPP sampling. There is only one known DPP sampling algorithm, based on Cholesky decomposition, that can directly apply to NDPPs as well. Unfortunately, its runtime is cubic in M , and thus does not scale to large item collections. In this work, we first note that this algorithm can be transformed into a linear-time one for kernels with low-rank structure. Furthermore, we develop a scalable sublinear-time rejection sampling algorithm by constructing a novel proposal distribution. Additionally, we show that imposing certain structural constraints on the NDPP kernel enables us to bound the rejection rate in a way that depends only on the kernel rank. In our experiments we compare the speed of all of these samplers for a variety of real-world tasks. 1 INTRODUCTION A determinantal point process (DPP) on M items is a model, parameterized by a symmetric kernel matrix, that assigns a probability to every subset of those items. DPPs have been applied to a wide range of machine learning tasks, including stochastic gradient descent (SGD) (Zhang et al., 2017), reinforcement learning (Osogami & Raymond, 2019; Yang et al., 2020), text summarization (Dupuy & Bach, 2018), coresets (Tremblay et al., 2019), and more. However, a symmetric kernel can only capture negative correlations between items. Recent works (Brunel, 2018; Gartrell et al., 2019) have shown that using a nonsymmetric DPP (NDPP) allows modeling of positive correlations as well, which can lead to significant predictive performance gains. Gartrell et al. (2021) provides scalable NDPP kernel learning and MAP inference algorithms, but leaves open the question of scalable sampling. The only known sampling algorithm for NDPPs is the Cholesky-based approach described in Poulson (2019), which has a runtime of O(M3) and thus does not scale to large item collections. There is a rich body of work on efficient sampling algorithms for (symmetric) DPPs, including recent works such as Derezinski et al. (2019); Poulson (2019); Calandriello et al. (2020). Key distinctions between existing sampling algorithms include whether they are for exact or approximate sampling, whether they assume the DPP kernel has some low-rank K ≪ M , and whether they sample from the space of all 2M subsets or from the restricted space of size-k subsets, so-called k-DPPs. In the context of MAP inference, influential work, including Summa et al. (2014); Chen et al. (2018); Hassani et al. (2019); Ebrahimi et al. (2017); Indyk et al. (2020), proposed efficient algorithms that the approximate (sub)determinant maximization problem and provide rigorous guarantees. In this work we focus on exact sampling for low-rank kernels, and provide scalable algorithms for NDPPs. Our contributions are as follows, with runtime and memory details summarized in Table 1: • Linear-time sampling (Section 3): We show how to transform the O(M3) Choleskydecomposition-based sampler from Poulson (2019) into an O(MK2) sampler for rank-K kernels. 
• Sublinear-time sampling (Section 4): Using rejection sampling, we show how to leverage existing sublinear-time samplers for symmetric DPPs to implement a sublinear-time sampler for a subclass of NDPPs that we call orthogonal NDPPs (ONDPPs). • Learning with orthogonality constraints (Section 5): We show that the scalable NDPP kernel learning of Gartrell et al. (2021) can be slightly modified to impose an orthogonality constraint, yielding the ONDPP subclass. The constraint allows us to control the rejection sampling algorithm’s rejection rate, ensuring its scalability. Experiments suggest that the predictive performance of the kernels is not degraded by this change. For a common large-scale setting where M is 1 million, our sublinear-time sampler results in runtime that is hundreds of times faster than the linear-time sampler. In the same setting, our linear-time sampler provides runtime that is millions of times faster than the only previously known NDPP sampling algorithm, which has cubic time complexity and is thus impractical in this scenario. 2 BACKGROUND Notation. We use [M ] := {1, . . . ,M} to denote the set of items 1 through M . We use IK to denote the K-by-K identity matrix, and often write I := IM when the dimensionality should be clear from context. Given L ∈ RM×M , we use Li,j to denote the entry in the i-th row and j-th column, and LA,B ∈ R|A|×|B| for the submatrix formed by taking rows A and columns B. We also slightly abuse notation to denote principal submatrices with a single subscript, LA := LA,A. Kernels. As discussed earlier, both (symmetric) DPPs and NDPPs define a probability distribution over all 2M subsets of a ground set [M ]. The distribution is parameterized by a kernel matrix L ∈ RM×M and the probability of a subset Y ⊆ [M ] is defined to be Pr(Y ) ∝ det(LY ). For this to define a valid distribution, it must be the case that det(LY ) ≥ 0 for all Y . For symmetric DPPs, the non-negativity requirement is identical to a requirement that L be positive semi-definite (PSD). For nonsymmetric DPPs, there is no such simple correspondence, but prior work such as Gartrell et al. (2019; 2021) has focused on PSD matrices for simplicity. Normalizing and marginalizing. The normalizer of a DPP or NDPP distribution can also be written as a single determinant: ∑ Y⊆[M ] det(LY ) = det(L+ I) (Kulesza & Taskar, 2012, Theorem 2.1). Additionally, the marginal probability of a subset can be written as a determinant: Pr(A ⊆ Y ) = det(KA), for K := I − (L+ I)−1 (Kulesza & Taskar, 2012, Theorem 2.2)*, where K is typically called the marginal kernel. Intuition. The diagonal element Ki,i is the probability that item i is included in a set sampled from the model. The 2-by-2 determinant det(K{i,j}) = Ki,iKj,j −Ki,jKj,j is the probability that both i and j are included in the sample. A symmetric DPP has a symmetric marginal kernel, meaning Ki,j = Kj,i, and hence Ki,iKj,j −Ki,jKj,i ≤ Ki,iKj,j . This implies that the probability of including both i and j in the sampled set cannot be greater than the product of their individual inclusion probabilities. Hence, symmetric DPPs can only encode negative correlations. In contrast, NDPPs can have Ki,j and Kj,i with differing signs, allowing them to also capture positive correlations. 2.1 RELATED WORK Learning. Gartrell et al. (2021) proposes a low-rank kernel decomposition for NDPPs that admits linear-time learning. 
The decomposition takes the form L := V V ⊤ + B(D − D⊤)B⊤ for *The proofs in Kulesza & Taskar (2012) typically assume a symmetric kernel, but this particular one does not rely on the symmetry. Algorithm 1 Cholesky-based NDPP sampling (Poulson, 2019, Algorithm 1) 1: procedure SAMPLECHOLESKY(K) ▷ marginal kernel factorization Z,W 2: Y ← ∅ Q←W 3: for i = 1 to M do 4: pi ←Ki,i pi ← z⊤i Qzi 5: u← uniform(0, 1) 6: if u ≤ pi then Y ← Y ∪ {i} 7: else pi ← pi − 1 8: KA ←KA − KA,iKi,Api for A := {i+ 1, . . . ,M} Q← Q− Qziz ⊤ i Q pi 9: return Y V ,B ∈ RM×K , and D ∈ RK×K . The V V ⊤ component is a rank-K symmetric matrix, which can model negative correlations between items. The B(D −D⊤)B⊤ component is a rank-K skewsymmetric matrix, which can model positive correlations between items. For compactness of notation, we will write L = ZXZ⊤, where Z = [ V B ] ∈ RM×2K , and X = [ IK 0 0 D−D⊤ ] ∈ R2K×2K . The marginal kernel in this case also has a rank-2K decomposition, as can be shown via application of the Woodbury matrix identity: K := I − (I +L)−1 = ZX ( I2K +Z ⊤ZX )−1 Z⊤. (1) Note that the matrix to be inverted can be computed from Z and X in O(MK2) time, and the inverse itself takes O(K3) time. Thus, K can be computed from L in time O(MK2). We will develop sampling algorithms for this decomposition, as well as an orthogonality-constrained version of it. We use W := X ( I2K +Z ⊤ZX )−1 in what follows so that we can compactly write K = ZWZ⊤. Sampling. While there are a number of exact sampling algorithms for DPPs with symmetric kernels, the only published algorithm that clearly can directly apply to NDPPs is from Poulson (2019) (see Theorem 2 therein). This algorithm begins with an empty set Y = ∅ and iterates through the M items, deciding for each whether or not to include it in Y based on all of the previous inclusion/exclusion decisions. Poulson (2019) shows, via the Cholesky decomposition, that the necessary conditional probabilities can be computed as follows: Pr (j ∈ Y | i ∈ Y ) = Pr({i, j} ⊆ Y ) Pr(i ∈ Y ) = Kj,j − (Kj,iKi,j) /Ki,i, (2) Pr (j ∈ Y | i /∈ Y ) = Pr(j ∈ Y )− Pr({i, j} ⊆ Y ) Pr(i /∈ Y ) = Kj,j − (Kj,iKi,j) / (Ki,i − 1) . (3) Algorithm 1 (left-hand side) gives pseudocode for this Cholesky-based sampling algorithm†. There has also been some recent work on approximate sampling for fixed-size k-NDPPs: Alimohammadi et al. (2021) provide a Markov chain Monte Carlo (MCMC) algorithm and prove that the overall runtime to approximate ε-close total variation distance is bounded by O(M2k3 log(1/(εPr(Y0))), where Pr(Y0) is probability of an initial state Y0. Improving this runtime is an interesting avenue for future work, but for this paper we focus on exact sampling. 3 LINEAR-TIME CHOLESKY-BASED SAMPLING In this section, we show that the O(M3) runtime of the Cholesky-based sampler from Poulson (2019) can be significantly improved when using the low-rank kernel decomposition of Gartrell et al. (2021). First, note that Line 8 of Algorithm 1, where all marginal probabilities are updated via an (M − i)-by-(M − i) matrix subtraction, is the most costly part of the algorithm, making overall time and memory complexities O(M3) and O(M2), respectively. However, when the DPP kernel is given by a low-rank decomposition, we observe that marginal probabilities can be updated by matrix-vector †Cholesky decomposition is defined only for a symmetric positive definite matrix. 
However, we use the term “Cholesky” from Poulson (2019) to maintain consistency with this work, although Algorithm 1 is valid for nonsymmetric matrices. Algorithm 2 Rejection NDPP sampling (Tree-based sampling) 1: procedure PREPROCESS(V ,B,D) 2: {(σj ,y2j−1,y2j)}K/2j=1 ← YOULADECOMPOSE(B,D)‡ 3: X̂ ← diag ( IK , σ1, σ1, . . . , σK/2, σK/2 ) 4: Z ← [V ,y1, . . . ,yK ] {(λi, zi)}2Ki=1 ← EIGENDECOMPOSE(ZX̂1/2) T ← CONSTRUCTTREE(M, [z1, . . . ,z2K ]⊤) 5: return Z, X̂ return T , {(λi, zi)}2Ki=1 6: procedure SAMPLEREJECT(V ,B,D,Z, X̂) ▷ tree T , eigen pair {(λi, zi)}2Ki=1 of ZX̂Z 7: while true do 8: Y ← SAMPLEDPP(ZX̂Z⊤) Y ← SAMPLEDPP(T , {(λi, zi)}2Ki=1) 9: u← uniform(0, 1) 10: p← det([V V ⊤+B(D−D⊤)B⊤]Y ) det([ZX̂Z⊤]Y ) 11: if u ≤ p then break 12: return Y multiplications of dimension 2K, regardless of M . In more detail, suppose we have the marginal kernel K = ZWZ⊤ as in Eq. (1) and let zj be the j-th row vector in Z. Then, for i ̸= j: Pr (j ∈ Y | i ∈ Y ) = Kj,j − (Kj,iKi,j)/Ki,i = z⊤j ( W − (Wzi)(z ⊤ i W ) z⊤i Wzi ) zj , (4) Pr (j ∈ Y | i /∈ Y ) = z⊤j ( W − (Wzi)(z ⊤ i W ) z⊤i Wzi − 1 ) zj . (5) The conditional probabilities in Eqs. (4) and (5) are of bilinear form, and the zj do not change during sampling. Hence, it is enough to update the 2K-by-2K inner matrix at each iteration, and obtain the marginal probability by multiplying this matrix by zi. The details are shown on the right-hand side of Algorithm 1. The overall time and memory complexities are O(MK2) and O(MK), respectively. 4 SUBLINEAR-TIME REJECTION SAMPLING Although the Cholesky-based sampler runs in time linear in M , even this is too expensive for the large M that are often encountered in real-world datasets. To improve runtime, we consider rejection sampling (Von Neumann, 1963). Let p be the target distribution that we aim to sample, and let q be any distribution whose support corresponds to that of p; we call q the proposal distribution. Assume that there is a universal constant U such that p(x) ≤ Uq(x) for all x. In this setting, rejection sampling draws a sample x from q and accepts it with probability p(x)/(Uq(x)), repeating until an acceptance occurs. The distribution of the resulting samples is p. It is important to choose a good proposal distribution q so that sampling is efficient and the number of rejections is small. 4.1 PROPOSAL DPP CONSTRUCTION Our first goal is to find a proposal DPP with symmetric kernel L̂ that can upper-bound all probabilities of samples from the NDPP with kernel L within a constant factor. To this end, we expand the determinant of a principal submatrix, det(LY ), using the spectral decomposition of the NDPP kernel. Such a decomposition essentially amounts to combining the eigendecomposition of the symmetric part of L with the Youla decomposition (Youla, 1961) of the skew-symmetric part. Specifically, suppose {(σj ,y2j−1,y2j)}K/2j=1 is the Youla decomposition of B(D −D⊤)B⊤ (see Appendix D for more details), that is, B(D −D⊤)B⊤ = K/2∑ j=1 σj ( y2j−1y ⊤ 2j − y2jy⊤2j−1 ) . (6) ‡Pseudo-code of YOULADECOMPOSE is provided in Algorithm 4. See Appendix D. Then we can simply write L = ZXZ⊤, for Z := [V ,y1, . . . ,yK ] ∈ RM×2K , and X := diag ( IK , [ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]) . (7) Now, consider defining a related but symmetric PSD kernel L̂ := ZX̂Z⊤ with X̂ := diag ( IK , σ1, σ1, . . . , σK/2, σK/2 ) . All determinants of the principal submatrices of L̂ = ZX̂Z⊤ upper-bound those of L, as stated below. Theorem 1. For every subset Y ⊆ [M ], it holds that det(LY ) ≤ det(L̂Y ). 
Moreover, equality holds when the size of Y is equal to the rank of L. Proof sketch: From the Cauchy-Binet formula, the determinants of LY and L̂Y for all Y ⊆ [M ], |Y | ≤ 2K can be represented as det(LY ) = ∑ I⊆[K],|I|=|Y | ∑ J⊆[K],|J|=|Y | det(XI,J) det(ZY,I) det(ZY,J), (8) det(L̂Y ) = ∑ I⊆[2K],|I|=|Y | det(X̂I) det(ZY,I) 2. (9) Many of the terms in Eq. (8) are actually zero due to the block-diagonal structure of X . For example, note that if 1 ∈ I but 1 /∈ J , then there is an all-zeros row in XI,J , making det(XI,J) = 0. We show that each XI,J with nonzero determinant is a block-diagonal matrix with diagonal entries among ±σj , or [ 0 σj −σj 0 ] . With this observation, we can prove that det(XI,J) is upper-bounded by det(X̂I) or det(X̂J). Then, through application of the rearrangement inequality, we can upper-bound the sum of the det(XI,J) det(ZY,I) det(ZY,J) in Eq. (8) with a sum over det(X̂I) det(ZY,I)2. Finally, we show that the number of non-zero terms in Eq. (8) is identical to the number of non-zero terms in Eq. (9). Combining these gives us the desired inequality det(LY ) ≤ det(L̂Y ). The full proof of Theorem 1 is in Appendix E.1. Now, recall that the normalizer of a DPP (or NDPP) with kernel L is det(L + I). The ratio of probability of the NDPP with kernel L to that of a DPP with kernel L̂ is thus: PrL(Y ) PrL̂(Y ) = det(LY )/det(L+ I) det(L̂Y )/det(L̂+ I) ≤ det(L̂+ I) det(L+ I) , where the inequality follows from Theorem 1. This gives us the necessary universal constant U upper-bounding the ratio of the target distribution to the proposal distribution. Hence, given a sample Y drawn from the DPP with kernel L̂, we can use acceptance probability PrL(Y )/(U PrL̂(Y )) = det(LY )/ det(L̂Y ). Pseudo-codes for proposal construction and rejection sampling are given in Algorithm 2. Note that to derive L̂ from L it suffices to run the Youla decomposition of B(D − D⊤)B⊤, because the difference is only in the skew-symmetric part. This decomposition can run in O(MK2) time; more details are provided in Appendix D. Since L̂ is a symmetric PSD matrix, we can apply existing fast DPP sampling algorithms to sample from it. In particular, in the next section we combine a fast tree-based method with rejection sampling. 4.2 SUBLINEAR-TIME TREE-BASED SAMPLING There are several DPP sampling algorithms that run in sublinear time, such as tree-based (Gillenwater et al., 2019) and intermediate (Derezinski et al., 2019) sampling algorithms. Here, we consider applying the former, a tree-based approach, to sample from the proposal distribution defined by L̂. We give some details of the sampling procedure, as in the course of applying it we discovered an optimization that slightly improves on the runtime of prior work. Formally, let {(λi, zi)}2Ki=1 be the eigendecomposition of L̂ and Z := [z1, . . . ,z2K ] ∈ RM×2K . As shown in Kulesza & Taskar (2012, Lemma 2.6), for every Y ⊆ [M ], |Y | ≤ 2K, the probability of Y under DPP with L̂ can be written: PrL̂(Y ) = det(L̂Y ) det(L̂+ I) = ∑ E⊆[2K],|E|=|Y | det(ZY,EZ ⊤ Y,E) ∏ i∈E λi λi + 1 ∏ i/∈E 1 λi + 1 . (10) Algorithm 3 Tree-based DPP sampling (Gillenwater et al., 2019) 1: procedure BRANCH(A,Z) 2: if A = {j} then 3: T .A← {j}, T .Σ← Z⊤j,:Zj,: 4: return T 5: Aℓ, Ar ← Split A in half 6: T .left← BRANCH(Aℓ,Z) 7: T .right← BRANCH(Ar,Z) 8: T .Σ← T .left.Σ+ T .right.Σ 9: return T 10: procedure CONSTRUCTTREE(M , Z) 11: return BRANCH([M ], Z) 12: procedure SAMPLEDPP(T ,Z, {λi}Ki=1) 13: E ← ∅, Y ← ∅, QY ← 0 14: for i = 1, . . . ,K do 15: E ← E ∪ {i} w.p. 
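Before the tree-based proposal sampler (Algorithm 3, shown next), the proposal construction and the accept/reject step of Section 4.1 can be spot-checked numerically. The sketch below assumes NumPy and is illustrative only: it forms L̂ densely as V V^T plus the PSD square root of S^T S, where S = B(D − D^T)B^T is the skew-symmetric part (this coincides with Z X̂ Z^T because the y_j are orthonormal), which is feasible only for small M; the scalable route goes through the Youla decomposition of Appendix D. The proposal sampler itself is left abstract, since any exact sampler for the symmetric DPP with kernel L̂, such as Algorithm 3 below, can be plugged in.

import numpy as np

rng = np.random.default_rng(1)
M, Kr = 60, 4                                   # Kr plays the role of K in the text
V = rng.normal(size=(M, Kr))
B = rng.normal(size=(M, Kr))
D = rng.normal(size=(Kr, Kr))

S = B @ (D - D.T) @ B.T                         # skew-symmetric part
L = V @ V.T + S                                 # NDPP kernel

# Proposal kernel: replace the +/- sigma_j blocks of X by sigma_j, i.e. add the
# PSD "absolute value" (S^T S)^{1/2} of the skew part instead of S itself.
w, U = np.linalg.eigh(S.T @ S)
L_hat = V @ V.T + (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

# Theorem 1: det(L_Y) <= det(L_hat_Y); spot-check on random subsets.
worst = -np.inf
for _ in range(500):
    k = int(rng.integers(1, 2 * Kr + 1))
    Y = rng.choice(M, size=k, replace=False)
    idx = np.ix_(Y, Y)
    worst = max(worst, np.linalg.det(L[idx]) - np.linalg.det(L_hat[idx]))
print(worst)   # <= 0 up to rounding; equality can occur when |Y| = 2K

def rejection_sample(L, L_hat, sample_proposal, rng):
    """Draw Y ~ DPP(L_hat) and accept with probability det(L_Y) / det(L_hat_Y)."""
    while True:
        Y = list(sample_proposal())             # any exact DPP(L_hat) sampler
        if len(Y) == 0:
            return Y                            # the ratio is 1 for the empty set
        idx = np.ix_(Y, Y)
        if rng.uniform() <= np.linalg.det(L[idx]) / np.linalg.det(L_hat[idx]):
            return Y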
λi/(λi + 1) 16: for k = 1, . . . , |E| do 17: j ← SAMPLEITEM(T ,QY , E) 18: Y ← Y ∪ {j} 19: QY← I|E|−Z⊤Y,E ( ZY,EZ ⊤ Y,E )−1 ZY,E 20: return Y 21: procedure SAMPLEITEM(T ,QY , E) 22: if T is a leaf then return T .A 23: pℓ ← 〈 T .left.ΣE ,QY 〉 24: pr ← 〈 T .right.ΣE ,QY 〉 25: u← uniform(0, 1) 26: if u ≤ pℓpℓ+pr then 27: return SAMPLEITEM(T .left,QY , E) 28: else 29: return SAMPLEITEM(T .right,QY , E) A matrix of the form Z:,EZ⊤:,E can be a valid marginal kernel for a special type of DPP, called an elementary DPP. Hence, Eq. (10) can be thought of as DPP probabilities expressed as a mixture of elementary DPPs. Based on this mixture view, DPP sampling can be done in two steps: (1) choose an elementary DPP according to its mixture weight, and then (2) sample a subset from the selected elementary DPP. Step (1) can be performed by 2K independent random coin tossings, while step (2) involves computational overhead. The key idea of tree-based sampling is that step (2) can be accelerated by traversing a binary tree structure, which can be done in time logarithmic in M . More specifically, given the marginal kernel K = Z:,EZ⊤:,E , where E is obtained from step (1), we start from the empty set Y = ∅ and repeatedly add an item j to Y with probability: Pr(j ∈ S | Y ⊆ S) = Kj,j −Kj,Y (KY )−1KY,j = Zj,EQY Z⊤j,E = 〈 QY , (Z⊤j,:Zj,:)E 〉 , (11) where S is some final selected subset, and QY := I|E| − Z⊤Y,E ( ZY,EZ ⊤ Y,E )−1 ZY,E . Consider a binary tree whose root includes a ground set [M ]. Every non-leaf node contains a subset A ⊆ [M ] and stores a 2K-by-2K matrix ∑ j∈A Z ⊤ j,:Zj,:. A partition Aℓ and Ar, such that Aℓ∪Ar = A,Aℓ∩Ar = ∅, are passed to its left and right subtree, respectively. The resulting tree has M leaves and each has exactly a single item. Then, one can sample a single item by recursively moving down to the left node with probability: pℓ = ⟨QY ,∑j∈Aℓ(Z⊤j,:Zj,:)E⟩ ⟨QY ,∑j∈A(Zj,:Z⊤j,:)E⟩ , (12) or to the right node with probability 1− pℓ, until reaching a leaf node. An item in the leaf node is chosen with probability according to Eq. (11). Since every subset in the support of an elementary DPP with a rank-k kernel has exactly k items, this process is repeated for |E| iterations. Full descriptions of tree construction and sampling are provided in Algorithm 3. The proposed tree-based rejection sampling for an NDPP is outlined on the right-side of Algorithm 2. The one-time pre-processing step of constructing the tree (CONSTRUCTTREE) requires O(MK2) time. After pre-processing, the procedure SAMPLEDPP involves |E| traversals of a tree of depth O(logM), where in each node a O(|E|2) operation is required. The overall runtime is summarized in Proposition 1 and the proof can be found in Appendix E.2. Proposition 1. The tree-based sampling procedure SAMPLEDPP in Algorithm 3 runs in time O(K + k3 logM + k4), where k is the size of the sampled set§. §Computing pℓ via Eq. (12) improves on Gillenwater et al. (2019)’s O(k4 logM) runtime for this step. 4.3 AVERAGE NUMBER OF REJECTIONS We now return to rejection sampling and focus on the expected number of rejections. The number of rejections of Algorithm 2 is known to be a geometric random variable with mean equal to the constant U used to upper-bound the ratio of the target distribution to the proposal distribution: det(L̂+ I)/ det(L+ I). If all columns in V and B are orthogonal, which we denote V ⊥ B, then the expected number of rejections depends only on the eigenvalues of the skew-symmetric part of the NDPP kernel. Theorem 2. 
Given an NDPP kernel L = V V ⊤ + B(D −D⊤)B⊤ for V ,B ∈ RM×K ,D ∈ RK×K , consider the proposal kernel L̂ as proposed in Section 4.1. Let {σj}K/2j=1 be the positive eigenvalues obtained from the Youla decomposition of B(D−D⊤)B⊤. If V ⊥ B, then det(L̂+I)det(L+I) =∏K/2 j=1 ( 1 + 2σj σ2j+1 ) ≤ (1 + ω)K/2, where ω = 2K ∑K/2 j=1 2σj σ2j+1 ∈ (0, 1]. Proof sketch: Orthogonality between V and B allows det(L+ I) to be expressed just in terms of the eigenvalues of V V ⊤ and B(D −D⊤)B⊤. Since both L and L̂ share the symmetric part V V ⊤, the ratio of determinants only depends on the skew-symmetric part. A more formal proof appears in Appendix E.3. Assuming we have a kernel where V ⊥ B, we can combine Theorem 2 with the tree-based rejection sampling algorithm (right-side in Algorithm 2) to sample in time O((K+k3 logM+k4)(1+ω)K/2). Hence, we have a sampling algorithm that is sublinear in M , and can be much faster than the Choleskybased algorithm when (1 + ω)K/2 ≪M . In the next section, we introduce a learning scheme with the V ⊥ B constraint, as well as regularization to ensure that ω is small. 5 LEARNING WITH ORTHOGONALITY CONSTRAINTS We aim to learn a NDPP that provides both good predictive performance and a low rejection rate. We parameterize our NDPP kernel matrix L = V V ⊤ +B(D −D⊤)B⊤ by D = diag ([ 0 σ1 0 0 ] , . . . , [ 0 σK/2 0 0 ]) (13) for σj ≥ 0, B⊤B = I , and, motivated by Theorem 2, require V ⊤B = 0¶. We call such orthogonality-constrained NDPPs “ONDPPs”. Notice that if V ⊥ B, then L has the full rank of 2K, since the intersection of the column spaces spanned by V and by B is empty, and thus the full rank available for modeling can be used. Thus, this constraint can also be thought of as simply ensuring that ONDPPs use the full rank available to them. Given example subsets {Y1, . . . , Yn} as training data, learning is done by minimizing the regularized negative log-likelihood: min V ,B,{σj}K/2j=1 − 1 n n∑ i=1 log ( det(LYi) det(L+ I) ) + α M∑ i=1 ∥vi∥22 µi + β M∑ i=1 ∥bi∥22 µi + γ K/2∑ j=1 log ( 1 + 2σj σ2j + 1 ) (14) where α, β, γ > 0 are hyperparameters, µi is the frequency of item i in the training data, and vi and bi represent the rows of V and B, respectively. This objective is very similar to that of Gartrell et al. (2021), except for the orthogonality constraint and the final regularization term. Note that this regularization term corresponds exactly to the logarithm of the average rejection rate, and therefore should help to control the number of rejections. 6 EXPERIMENTS We first show that the orthogonality constraint from Section 5 does not degrade the predictive performance of learned kernels. We then compare the speed of our proposed sampling algorithms. ¶Technical details: To learn NDPP models with the constraint V ⊤B = 0, we project V according to: V ← V − B(B⊤B)−1(B⊤V ). For the B⊤B = I constraint, we apply QR decomposition on B. Note that both operations require O(MK2) time. (Constrained learning and sampling code is provided at https://github.com/insuhan/nonsymmetric-dpp-sampling. We use Pytorch’s linalg.solve to avoid the expense of explicitly computing the (B⊤B)−1 inverse.) Hence, our learning time complexity is identical to that of Gartrell et al. (2021). 6.1 PREDICTIVE PERFORMANCE RESULTS FOR NDPP LEARNING We benchmark various DPP models, including symmetric (Gartrell et al., 2017), nonsymmetric for scalable learning (Gartrell et al., 2021), as well as our ONDPP kernels with and without rejection rate regularization. 
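The two ONDPP ingredients that the comparisons below rely on, the orthogonality constraints and the rejection-rate regularizer in Eq. (14), reduce to a few lines of array code. The following NumPy sketch mirrors only the footnoted projection and QR steps and the final regularization term; it is not the released training loop (which uses PyTorch with Adam), and the function names are our own.

import numpy as np

def enforce_ondpp_constraints(V, B):
    """Impose B^T B = I via QR and V^T B = 0 via projection (Section 5).
    With B^T B = I, the general projection V - B (B^T B)^{-1} B^T V
    reduces to V - B B^T V."""
    B, _ = np.linalg.qr(B)
    V = V - B @ (B.T @ V)
    return V, B

def rejection_rate_regularizer(sigma, gamma):
    """gamma * sum_j log(1 + 2 sigma_j / (sigma_j^2 + 1)): by Theorem 2 this is
    gamma times the log of the expected number of rejections when V is orthogonal to B."""
    sigma = np.asarray(sigma, dtype=float)
    return gamma * np.sum(np.log1p(2.0 * sigma / (sigma ** 2 + 1.0)))

# Example: after the constraint step, the column spaces of V and B are orthogonal.
rng = np.random.default_rng(0)
V, B = rng.normal(size=(1000, 10)), rng.normal(size=(1000, 10))
V, B = enforce_ondpp_constraints(V, B)
print(np.abs(V.T @ B).max(), np.abs(B.T @ B - np.eye(10)).max())   # both near 0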
We use the scalable NDPP models (Gartrell et al., 2021) as a baseline||. The kernel components of each model are learned using five real-world recommendation datasets, which have ground set sizes that range from 3,941 to 1,059,437 items (see Appendix A for more details). Our experimental setup and metrics mirror those of Gartrell et al. (2021). We report the mean percentile rank (MPR) metric for a next-item prediction task, the AUC metric for subset discrimination, and the log-likelihood of the test set; see Appendix B for more details on the experiments and metrics. For all metrics, higher numbers are better. For NDPP models, we additionally report the average rejection rates when they apply to rejection sampling.

In Table 2, we observe that the predictive performance of our ONDPP models generally matches or sometimes exceeds the baseline. This is likely because the orthogonality constraint enables more effective use of the full rank-2K feature space. Moreover, imposing the regularization on the rejection rate, as shown in Eq. (14), often leads to dramatically smaller rejection rates, while the impact on predictive performance is generally marginal. These results justify the ONDPP and regularization for fast sampling. Finally, we observe that the learning time of our ONDPP models is typically a bit longer than that of the NDPP models, but still quite reasonable (e.g., the time per iteration for the NDPP takes 27 seconds for the Book dataset, while our ONDPP takes 49.7 seconds).

Fig. 1 shows how the regularizer γ affects the test log-likelihood and the average number of rejections. We see that γ degrades predictive performance and reduces the rejection rate when set above a certain threshold; this behavior is seen for many datasets. However, for the Recipe dataset we observed that the test log-likelihood is not very sensitive to γ, likely because all models in our experiments achieve very high performance on this dataset. In general, we observe that γ can be set to a value that results in a small rejection rate, while having minimal impact on predictive performance.

6.2 SAMPLING TIME COMPARISON

We benchmark the Cholesky-based sampling algorithm (Algorithm 1) and the tree-based rejection sampling algorithm (Algorithm 2) on ONDPPs with both synthetic and real-world data.

||We use the code from https://github.com/cgartrel/scalable-nonsymmetric-DPPs for the NDPP baseline, which is made available under the MIT license. To simplify learning and MAP inference, Gartrell et al. (2021) set B = V in their experiments. However, since we have the V ⊥ B constraint in our ONDPP approach, we cannot set B = V. Hence, for a fair comparison, we do not set B = V for the NDPP baseline in our experiments, and thus the results in Table 2 differ slightly from those published in Gartrell et al. (2021).

Figure 1: Average number of rejections (a) and test log-likelihood (b) as functions of the regularizer γ for ONDPPs trained on the UK Retail dataset. Shaded regions are 95% confidence intervals of 10 independent trials.
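For reference, the linear-time sampler benchmarked in this section (the right-hand column of Algorithm 1, applied to the marginal-kernel factorization of Eq. (1)) fits in a few lines. This is an illustrative NumPy version written for clarity rather than speed; the helper names are ours rather than those of the released code.

import numpy as np

def marginal_kernel_factors(V, B, D):
    """Return Z and W with K = Z W Z^T as in Eq. (1); O(M K^2) time."""
    Kr = V.shape[1]
    Z = np.hstack([V, B])
    X = np.block([[np.eye(Kr), np.zeros((Kr, Kr))],
                  [np.zeros((Kr, Kr)), D - D.T]])
    W = X @ np.linalg.inv(np.eye(2 * Kr) + Z.T @ Z @ X)
    return Z, W

def sample_cholesky_lowrank(Z, W, rng=None):
    """Right-hand column of Algorithm 1: only a 2K-by-2K core matrix is updated,
    so one pass over the M items costs O(M K^2) time."""
    rng = np.random.default_rng() if rng is None else rng
    Q = W.copy()
    Y = []
    for i in range(Z.shape[0]):
        z = Z[i]
        p = z @ Q @ z                    # Pr(i in Y | earlier inclusion/exclusion decisions)
        if rng.uniform() <= p:
            Y.append(i)
        else:
            p = p - 1.0                  # condition on excluding item i instead
        Qz = Q @ z
        Q = Q - np.outer(Qz, z @ Q) / p  # rank-one update behind Eqs. (4) and (5)
    return Y

For example, given learned factors V, B, D, one sample is drawn by Z, W = marginal_kernel_factors(V, B, D) followed by sample_cholesky_lowrank(Z, W).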
Figure 2: Wall-clock time (sec) for synthetic data, plotted against the ground set size M, for (a) NDPP sampling algorithms (Cholesky-based vs. rejection) and (b) preprocessing steps for the rejection sampling (tree construction and spectral decomposition). Shaded regions are 95% confidence intervals from 100 independent trials.

Synthetic datasets. We generate non-uniform random features for V, B as done by Han & Gillenwater (2020). In particular, we first sample x1, . . . , x100 from N(0, I2K/(2K)), and integers t1, . . . , t100 from a Poisson distribution with mean 5, rescaling the integers such that ∑i ti = M. Next, we draw ti random vectors from N(xi, I2K), and assign the first K-dimensional vectors as the row vectors of V and the latter vectors as those of B. Each entry of D is sampled from N(0, 1). We choose K = 100 and vary M from 2^12 to 2^20. Fig. 2(a) illustrates the runtimes of Algorithms 1 and 2. We verify that the rejection sampling time tends to increase sub-linearly with the ground set size M, while the Cholesky-based sampler runs in linear time. In Fig. 2(b), the runtimes of the preprocessing steps for Algorithm 2 (i.e., spectral decomposition and tree construction) are reported. Although the rejection sampler requires these additional processes, they are one-time steps and run much faster than a single run of the Cholesky-based method for M = 2^20.

Real-world datasets. In Table 3, we report the runtimes and speedup of NDPP sampling algorithms for real-world datasets. All NDPP kernels are obtained using learning with orthogonality constraints, with rejection rate regularization as reported in Section 6.1. We observe that the tree-based rejection sampling runs up to 246 times faster than the Cholesky-based algorithm. For larger datasets, we expect that this gap would significantly increase. As with the synthetic experiments, we see that the tree construction pre-processing time is comparable to the time required to draw a single sample via the other methods, and thus the tree-based method is often the best choice for repeated sampling**.

7 CONCLUSION

In this work we developed scalable sampling methods for NDPPs. One limitation of our rejection sampler is its practical restriction to the ONDPP subclass. Other opportunities for future work include the extension of our rejection sampling approach to the generation of fixed-size samples (from k-NDPPs), the development of approximate sampling techniques, and the extension of DPP samplers along the lines of Derezinski et al. (2019); Calandriello et al. (2020) to NDPPs. Scalable sampling also opens the door to using NDPPs as building blocks in probabilistic models.

**We note that the tree can consume substantial memory, e.g., 169.5 GB for the Book dataset with K = 100. For settings where this scale of memory use is unacceptable, we suggest use of the intermediate sampling algorithm (Calandriello et al., 2020) in place of tree-based sampling. The resulting sampling algorithm may be slower, but the O(M + K) memory cost is substantially lower.

8 ETHICS STATEMENT

In general, our work moves in a positive direction by substantially decreasing the computational costs of NDPP sampling.
When using our constrained learning method to learn kernels from user data, we recommend employing a technique such as differentially-private SGD (Abadi et al., 2016) to help prevent user data leaks, and adjusting the weights on training examples to balance the impact of sub-groups of users so as to make the final kernel as fair as possible. As far as we are aware, the datasets used in this work do not contain personally identifiable information or offensive content. We were not able to determine if user consent was explicitly obtained by the organizations that constructed these datasets. 9 REPRODUCIBILITY STATEMENT We have made extensive effort to ensure that all algorithmic, theoretical, and experimental contributions described in this work are reproducible. All of the code implementing our constrained learning and sampling algorithms is publicly available ††. The proofs for our theoretical contributions are available in Appendix E. For our experiments, all dataset processing steps, experimental procedures, and hyperparameter settings are described in Appendices A, B, and C, respectively. 10 ACKNOWLEDGEMENTS Amin Karbasi acknowledges funding in direct support of this work from NSF (IIS-1845032) and ONR (N00014-19-1-2406). A FULL DETAILS ON DATASETS We perform experiments on several real-world public datasets composed of subsets: • UK Retail: This dataset (Chen et al., 2012) contains baskets representing transactions from an online retail company that sells all-occasion gifts. We omit baskets with more than 100 items, leaving us with a dataset containing 19,762 baskets drawn from a catalog of M = 3,941 products. Baskets containing more than 100 items are in the long tail of the basket-size distribution, so omitting these is reasonable, and allows us to use a low-rank factorization of the NDPP with K = 100. • Recipe: This dataset (Majumder et al., 2019) contains recipes and food reviews from Food.com (formerly Genius Kitchen)‡‡. Each recipe (“basket”) is composed of a collection of ingredients, resulting in 178,265 recipes and a catalog of 7,993 ingredients. • Instacart: This dataset (Instacart, 2017) contains baskets purchased by Instacart users§§. We omit baskets with more than 100 items, resulting in 3.2 million baskets and a catalog of 49,677 products. • Million Song: This dataset (McFee et al., 2012) contains playlists (“baskets”) of songs from Echo Nest users¶¶. We trim playlists with more than 100 items, leaving 968,674 playlists and a catalog of 371,410 songs. • Book: This dataset (Wan & McAuley, 2018) contains reviews from the Goodreads book review website, including a variety of attributes describing the items***. For each user we build a subset (“basket”) containing the books reviewed by that user. We trim subsets with more than 100 books, resulting in 430,563 subsets and a catalog of 1,059,437 books. As far as we are aware, these datasets do not contain personally identifiable information or offensive content. While the UK Retail dataset is publicly available, we were unable to find a license for it. Also, we were not able to determine if user consent was explicitly obtained by the organizations that constructed these datasets. B FULL DETAILS ON EXPERIMENTAL SETUP AND METRICS We use 300 randomly-selected baskets as a held-out validation set, for tracking convergence during training and for tuning hyperparameters. Another 2000 random baskets are used for testing, and the rest are used for training. 
Convergence is reached during training when the relative change in validation log-likelihood is below a predetermined threshold. We use PyTorch with Adam (Kingma & Ba, 2015) for optimization. We initialize D from the standard Gaussian distribution N (0, 1), while V and B are initialized from the uniform(0, 1) distribution. Subset expansion task. We use greedy conditioning to do next-item prediction (Gartrell et al., 2021, Section 4.2). We compare methods using a standard recommender system metric: mean percentile rank (MPR) (Hu et al., 2008; Li et al., 2010). MPR of 50 is equivalent to random selection; MPR of 100 means that the model perfectly predicts the next item. See Appendix B.1 for a complete description of the MPR metric. Subset discrimination task. We also test the ability of a model to discriminate observed subsets from randomly generated ones. For each subset in the test set, we generate a subset of the same length by drawing items uniformly at random (and we ensure that the same item is not drawn more than once for a subset). We compute the AUC for the model on these observed and random subsets, where the score for each subset is the log-likelihood that the model assigns to the subset. ‡‡See https://www.kaggle.com/shuyangli94/food-com-recipes-and-user-interactions for the license for this public dataset. §§This public dataset is available for non-commercial use; see https://www.instacart.com/datasets/ grocery-shopping-2017 for the license. ¶¶See http://millionsongdataset.com/faq/ for the license for this public dataset. ***This public dataset is available for academic use only; see https://sites.google.com/eng.ucsd.edu/ ucsdbookgraph/home for the license. B.1 MEAN PERCENTILE RANK We begin our definition of MPR by defining percentile rank (PR). First, given a set J , let pi,J = Pr(J ∪ {i} | J). The percentile rank of an item i given a set J is defined as PRi,J = ∑ i′ ̸∈J 1(pi,J ≥ pi′,J) |Y\J | × 100% where Y\J indicates those elements in the ground set Y that are not found in J . For our evaluation, given a test set Y , we select a random element i ∈ Y and compute PRi,Y \{i}. We then average over the set of all test instances T to compute the mean percentile rank (MPR): MPR = 1 |T | ∑ Y ∈T PRi,Y \{i}. C HYPERPARAMETERS FOR EXPERIMENTS Preventing numerical instabilities: The det(LYi) in Eq. (14) will be zero whenever |Yi| > K, where Yi is an observed subset. To address this in practice we set K to the size of the largest subset observed in the data, K ′, as in Gartrell et al. (2017). However, this does not entirely fix the issue, as there is still a chance that the term will be zero even when |Yi| ≤ K. In this case though, we know that we are not at a maximum, since the value of the objective function is −∞. Numerically, to prevent such singularities, in our implementation we add a small ϵI correction to each LYi when optimizing Eq. (14) (ϵ = 10−5 in our experiments). We perform a grid search using a held-out validation set to select the best-performing hyperparameters for each model and dataset. The hyperparameter settings used for each model and dataset are described below. Symmetric low-rank DPP (Gartrell et al., 2017). For this model, we use K for the number of item feature dimensions for the symmetric component V , and α for the regularization hyperparameter for V . We use the following hyperparameter settings: • UK Retail dataset: K = 100, α = 1. • Recipe dataset: K = 100, α = 0.01 • Instacart dataset: K = 100, α = 0.001. • Million Song dataset: K = 100, α = 0.0001. 
• Book dataset: K = 100, α = 0.001 Scalable NDPP (Gartrell et al., 2021). As described in Section 2.1, we use K to denote the number of item feature dimensions for the symmetric component V and the dimensionality of the nonsymmetric component D. α and β are the regularization hyperparameters. We use the following hyperparameter settings: • UK dataset: K = 100, α = 0.01. • Recipe dataset: K = 100, α = β = 0.01. • Instacart dataset: K = 100, α = 0.001. • Million Song dataset: K = 100, α = 0.01. • Book dataset: K = 100, α = β = 0.1 ONDPP. As described in Section 5, we use K to denote the number of item feature dimensions for the symmetric component V and the dimensionality of the nonsymmetric component C. α, β, and γ are the regularization hyperparameters. We use the following hyperparameter settings: • UK dataset: K = 100, α = β = 0.01, γ = 0.5. • Recipe dataset: K = 100, α = β = 0.01, γ = 0.1. • Instacart dataset: K = 100, α = β = 0.001, γ = 0.001. • Million Song dataset: K = 100, α = β = 0.01, γ = 0.2. • Book dataset: K = 100, α = β = 0.01, γ = 0.1. For all of the above model configurations and datasets, we use a batch size of 800 during training. D YOULA DECOMPOSITION: SPECTRAL DECOMPOSITION FOR SKEW-SYMMETRIC MATRIX We provide some basic facts on the spectral decomposition of a skew-symmetric matrix, and introduce an efficient algorithm for this decomposition when it is given by a low-rank factorization. We write i := √ −1 and vH as the conjugate transpose of v ∈ CM , and denote Re(z) and Im(z) by the real and imaginary parts of a complex number z, respectively. Given B ∈ RM×K and D ∈ RK×K , consider a rank-K skew-symmetric matrix B(D −D⊤)B⊤. Note that all nonzero eigenvalues of a real-valued skew-symmetric matrix are purely imaginary. Denote iσ1,−iσ1, . . . , iσK/2,−iσK/2 by its nonzero eigenvalues where each of σj is real, and a1 + ib1,a1 − ib1, . . .aK/2 + ibK/2,aK/2 − ibK/2 by the corresponding eigenvectors for aj , bj ∈ RM , which come in conjugate pairs. Then, we can write B(D −D⊤)B⊤ = K/2∑ j=1 iσj(aj + ibj)(aj + ibj) H − iσj(aj − ibj)(aj − ibj)H (15) = K/2∑ j=1 2σj(ajb ⊤ j − bja⊤j ) (16) = K/2∑ j=1 [ aj − bj aj + bj ] [ 0 σj −σj 0 ] [ a⊤j − b⊤j a⊤j + b ⊤ j ] . (17) Note that a1 ± b1, . . . ,aK/2 ± bK/2 are real-valued orthonormal vectors, because a1, b1, . . . ,aK/2, bK/2 are orthogonal to each other and ∥aj ± bj∥22 = ∥aj∥ 2 2 + ∥bj∥ 2 2 = 1 for all j. The pair {(σj ,aj − bj ,aj + bj)}K/2j=1 is often called the Youla decomposition (Youla, 1961) of B(D −D⊤)B⊤. To efficiently compute the Youla decomposition of a rank-K matrix, we use the following result. Proposition 2 (Proposition 1, Nakatsukasa (2019)). Given A,B ∈ CM×K , the nonzero eigenvalues of AB⊤ ∈ CM×M and B⊤A ∈ CK×K are identical. In addition, if (λ,v) is an eigenpair of B⊤A with λ ̸= 0, then (λ,Av/ ∥Av∥2) is an eigenpair of AB⊤. From the above proposition, one can first compute (D −D⊤)B⊤B and then apply the eigendecomposition to that K-by-K matrix. Taking the imaginary part of the obtained eigenvalues gives us the σj’s, and multiplying B by the eigenvectors gives us the eigenvectors of B(D −D⊤)B⊤. In addition, this can be done in O(MK2 +K3) time; when M > K it runs much faster than the eigendecomposition of B(D −D⊤)B⊤, which requires O(M3) time. The pseudo-code of the Youla decomposition is provided in Algorithm 4. Algorithm 4 Youla decomposition of low-rank skew-symmetric matrix 1: procedure YOULADECOMPOSITION(B,D) 2: {(ηj , zj), (ηj , zj)}K/2j=1 ← eigendecomposition of (D −D⊤)B⊤B 3: for j = 1, . . . 
,K/2 do 4: σj ← Im(ηj) for j = 1, . . . ,K/2 5: y2j−1 ← B (Re(zj)− Im(zj)) 6: y2j ← B (Re(zj) + Im(zj)) 7: yj ← yj/ ∥yj∥ for j = 1, . . . ,K 8: return {(σj ,y2j−1,y2j)}K/2j=1 E PROOFS E.1 PROOF OF THEOREM 1 Theorem 1. For every subset Y ⊆ [M ], it holds that det(LY ) ≤ det(L̂Y ). Moreover, equality holds when the size of Y is equal to the rank of L. Proof of Theorem 1. It is enough to fix Y ⊆ [M ] such that 1 ≤ |Y | ≤ 2K, because the rank of both L and L̂ is up to 2K. Denote k := |Y | and ( [2K] k ) := {I ⊆ [2K]; |I| = k} for k ≤ 2K. We recall the definition of L̂: given V ,B,D such that L = V V ⊤ + B(D −D⊤)B⊤, let {(ρi,vi)}Ki=1 be the eigendecomposition of V V ⊤ and {(σj ,y2j−1,y2j)}K/2j=1 be the Youla decomposition of B(D −D⊤)B⊤. Denote Z := [v1, . . . ,vK ,y1, . . . ,yK ] ∈ RM×2K and X := diag ( ρ, . . . , ρK , [ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]) , X̂ := diag ( ρ1, . . . , ρK , [ σ1 0 0 σ1 ] , . . . , [ σK/2 0 0 σK/2 ]) , so that L = ZXZ⊤ and L̂ = ZX̂Z⊤. Applying the Cauchy-Binet formula twice, we can write the determinant of the principal submatrices of both L and L̂: det(LY ) = ∑ I∈([2K]k ) ∑ J∈([2K]k ) det(XI,J) det(ZY,I) det(ZY,J), (18) det(L̂Y ) = ∑ I∈([2K]k ) ∑ J∈([2K]k ) det(X̂I,J) det(ZY,I) det(ZY,J) = ∑ I∈([2K]k ) det(X̂I) det(ZY,I) 2, (19) where Eq. (19) follows from the fact that X̂ is diagonal, which means that det(X̂I,J) = 0 for I ̸= J . When the size of Y is equal to the rank of L (i.e., k = 2K), the summations in Eqs. (18) and (19) simplify to single terms: det(LY ) = det(X) det(ZY,:)2 and det(L̂Y ) = det(X̂) det(ZY,:)2. Now, observe that the determinants of the full X and X̂ matrices are identical: det(X) = det(X̂) =∏K i=1 ρi ∏K/2 j=1 σ 2 j . Hence, it holds that det(LY ) = det(L̂Y ). This proves the second statement of the theorem. To prove that det(LY ) ≤ det(L̂Y ) for smaller subsets Y , we will use the following: Claim 1. For every I, J ∈ ( [2K] k ) such that det(XI,J) ̸= 0, there exists a (nonempty) collection of subset pairs S(I, J) ⊆ ( [2K] k ) × ( [2K] k ) such that∑ (I′,J′)∈S(I,J) det(XI,J) det(ZY,I) det(ZY,J) ≤ ∑ (I′,J′)∈S(I,J) det(X̂I,I) det(ZY,I) 2. (20) Claim 2. The number of nonzero terms in Eq. (18) is identical to that in Eq. (19). Combining Claim 1 with Claim 2 yields det(LY ) = ∑ I,J∈([2K]k ) det(XI,J) det(ZY,I) det(ZY,J) ≤ ∑ I∈([2K]k ) det(X̂I,I) det(ZY,I) 2 = det(L̂Y ). We conclude the proof of Theorem 1. Below we provide proofs for Claim 1 and Claim 2. Proof of Claim 1. Recall that X is a block-diagonal matrix, where each block is of size either 1-by-1, containing ρi, or 2-by-2, containing both σj and −σj in the form [ 0 σj −σj 0 ] . A submatrix XI,J ∈ Rk×k with rows I and columns J will only have a nonzero determinant if it contains no all-zero row or column. Hence, any XI,J with nonzero determinant will have the following form (or some permutation of this block-diagonal): XI,J = ρp1 · · · 0 ... . . . ... 0 0 . . . ρp|PI,J | ±σq1 · · · 0 ... . . . ... 0 · · · ±σq|QI,J | 0 σr1 −σr1 0 . . . 0 0 σr|RI,J | −σr|RI,J | 0 (21) and we denote P I,J := {p1, . . . , p|P I,J |}, QI,J := {q1, . . . , q|QI,J |}, and RI,J := {r1, . . . , r|RI,J |}. Indices p ∈ P I,J yield a diagonal matrix with entries ρp. For such p, both I and J must contain index p. Indices r ∈ RI,J yield a block-diagonal matrix of the form [ 0 σr −σr 0 ] . For such r, both I and J must contain a pair of indices, (K + 2r − 1,K + 2r). Finally, indices q ∈ QI,J yield a diagonal matrix with entries of ±σq (the sign can be + or −). 
For such q, I contains K + 2q − 1 or K + 2q, and J must contain the other. Note that there is no intersection between QI,J and RI,J . If QI,J is an empty set (i.e., I = J), then det(XI,J) = det(X̂I,J) and det(XI,J) det(ZY,I) det(ZY,J) = det(X̂I) det(ZY,I) 2. (22) Thus, the terms in Eq. (18) in this case appear in Eq. (19). Now assume that QI,J ̸= ∅ and consider the following set of pairs: S(I, J) := {(I ′, J ′) : P I,J = P I′,J′ , QI,J = QI′,J′ , RI,J = RI′,J′}. In other words, for (I ′, J ′) ∈ S(I, J), the diagonal XI′,J′ contains ρp, [ 0 σr −σr 0 ] exactly as in XI,J . However, the signs of the σr’s may differ from XI,J . Combining this observation with the definition of X̂ , |det(XI′,J′)| = |det(XI,J)| = det(X̂I) = det(X̂I′) = det(X̂J) = det(X̂J′). (23) Therefore, ∑ (I′,J′)∈S(I,J) det(XI′,J′) det(ZY,I′) det(ZY,J′) (24) ≤ ∑ (I′,J′)∈S(I,J) |det(XI′,J′)|det(ZY,I′) det(ZY,J′) (25) = det(X̂I) ∑ (I′,J′)∈S(I,J) det(ZY,I′) det(ZY,J′) (26) ≤ det(X̂I) ∑ (I′,∗)∈S(I,J) det(ZY,I′) 2 (27) = ∑ (I′,∗)∈S(I,J) det(X̂I′) det(ZY,I′) 2 (28) where the third line comes from Eq. (23) and the fourth line follows from the rearrangement inequality. Note that application of this inequality does not change the number of terms in the sum. This completes the proof of Claim 1. Proof of Claim 2. In Eq. (19), observe that det(X̂I) det(ZY,I)2 ̸= 0 if and only if det(X̂I) ̸= 0. Since all ρi’s and σj’s are positive, the number of I ⊆ [2K], |I| = k such that det(X̂I) ̸= 0 is equal to ( 2K k ) . Similarly, the number of nonzero terms in Eq. (18) equals the number of possible choices of I, J ∈ ( [2K] k ) such that det(XI,J) ̸= 0. This can be counted as follows: first choose i items in {ρ1, . . . , ρK} for i = 0, . . . , k; then, choose j items in {[ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]} for j = 0, . . . , ⌊k−i2 ⌋; lastly, choose k − i − 2j of {±σq; q /∈ RI,J}, then choose the sign for each of these (σq or −σq). Combining all of these choices, the total number of nonzero terms is: k∑ i=0 ( K i ) ︸ ︷︷ ︸ choice of ρp ⌊ k−i2 ⌋∑ j=0 ( K/2 j ) ︸ ︷︷ ︸ choice of [ 0 σr −σr 0 ] ( K/2− j k − i− 2j ) 2k−i−2j︸ ︷︷ ︸ choice of ±σq (29) = k∑ i=0 ( K i ) ( K k − i ) (30) = ( 2K k ) (31) where the second line comes from the fact that ( 2n m ) = ∑⌊m2 ⌋ j=0 ( n j )( n−j m−2j ) 2m−2j for any integers n,m ∈ N such that m ≤ 2n (see (1.69) in Quaintance (2010)), and the third line follows from the fact that ∑r i=0 ( m i )( n r−i ) = ( n+m r ) for n,m, r ∈ N (Vandermonde’s identity). Hence, both the number of nonzero terms in Eqs. (18) and (19) is equal to ( 2K k ) . This completes the proof of Claim 2. E.2 PROOF OF PROPOSITION 1 Proposition 1. The tree-based sampling procedure SAMPLEDPP in Algorithm 3 runs in time O(K + k3 logM + k4), where k is the size of the sampled set†††. Proof of Proposition 1. Since computing pℓ takes O(k2) from Eq. (12), and since the binary tree has depth O(logM), SAMPLEITEM in Algorithm 3 runs in O(k2 logM) time. Moreover, the query matrix QY can be updated in O(k3) time as it only requires a k-by-k matrix inversion. Therefore, the overall runtime of the tree-based elementary DPP sampling algorithm (after pre-processing) is O(k3 logM + k4). This improves the previous O(k4 logM) runtime studied in Gillenwater et al. (2019). Combining this with elementary DPP selection (Line 15 in Algorithm 3), we can sample a set in O(K + k3 logM + k4) time. This completes the proof of Proposition 1. E.3 PROOF OF THEOREM 2 Theorem 2. 
Given an NDPP kernel L = V V ⊤ + B(D −D⊤)B⊤ for V ,B ∈ RM×K ,D ∈ RK×K , consider the proposal kernel L̂ as proposed in Section 4.1. Let {σj}K/2j=1 be the positive eigenvalues obtained from the Youla decomposition of B(D−D⊤)B⊤. If V ⊥ B, then det(L̂+I)det(L+I) =∏K/2 j=1 ( 1 + 2σj σ2j+1 ) ≤ (1 + ω)K/2, where ω = 2K ∑K/2 j=1 2σj σ2j+1 ∈ (0, 1]. Proof of Theorem 2. Since the column spaces of V and B are orthogonal, the corresponding eigenvectors are also orthogonal, i.e., Z⊤Z = I2K . Then, det(L+ I) = det(ZXZ⊤ + I) = det(XZ⊤Z + I2K) = det(X + I2K) (32) = K∏ i=1 (ρi + 1) K/2∏ j=1 det ([ 1 σj −σj 1 ]) (33) = K∏ i=1 (ρi + 1) K/2∏ j=1 (σ2j + 1) (34) †††Computing pℓ via Eq. (12) improves on Gillenwater et al. (2019)’s O(k4 logM) runtime for this step. and similarly det(L̂+ I) = K∏ i=1 (ρi + 1) K/2∏ j=1 (σj + 1) 2. (35) Combining Eqs. (34) and (35), we have that det(L̂+ I) det(L+ I) = K/2∏ j=1 (σj + 1) 2 (σ2j + 1) = K/2∏ j=1 ( 1 + 2σj σ2j + 1 ) ≤ 1 + 2 K K/2∑ j=1 2σj σ2j + 1 K/2 (36) where the inequality holds from the Jensen’s inequality. This completes the proof of Theorem 2.
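To make Appendix D and Theorem 2 concrete, the following NumPy sketch implements Algorithm 4 through the K-by-K eigenproblem of Proposition 2, checks the reconstruction in Eq. (6), and verifies the rejection-rate formula of Theorem 2 on a toy kernel with V ⊥ B. The construction of V and B from a shared orthonormal basis is our own illustrative choice.

import numpy as np

def youla_decompose(B, D):
    """Algorithm 4: Youla decomposition of the skew part B (D - D^T) B^T, obtained
    from the K-by-K eigenproblem of (D - D^T) B^T B (Proposition 2)."""
    eta, z = np.linalg.eig((D - D.T) @ (B.T @ B))
    keep = np.where(eta.imag > 1e-10)[0]          # one eigenvalue per conjugate pair
    sigmas, ys = [], []
    for j in keep:
        sigmas.append(eta[j].imag)
        y_odd = B @ (z[:, j].real - z[:, j].imag)
        y_even = B @ (z[:, j].real + z[:, j].imag)
        ys.append(y_odd / np.linalg.norm(y_odd))
        ys.append(y_even / np.linalg.norm(y_even))
    return np.array(sigmas), np.column_stack(ys)

rng = np.random.default_rng(0)
M, Kr = 200, 6
Q, _ = np.linalg.qr(rng.normal(size=(M, 2 * Kr)))  # shared orthonormal basis
V, B = 1.5 * Q[:, :Kr], Q[:, Kr:]                  # columns of V orthogonal to columns of B
D = rng.normal(size=(Kr, Kr))

S = B @ (D - D.T) @ B.T
sig, Y = youla_decompose(B, D)

# Eq. (6): the skew part is reconstructed from the triples (sigma_j, y_{2j-1}, y_{2j}).
S_rec = sum(s * (np.outer(Y[:, 2 * j], Y[:, 2 * j + 1]) -
                 np.outer(Y[:, 2 * j + 1], Y[:, 2 * j])) for j, s in enumerate(sig))
print(np.abs(S - S_rec).max())                     # ~1e-13

# Theorem 2: with V orthogonal to B, the expected number of rejections
# det(L_hat + I) / det(L + I) equals prod_j (1 + 2 sigma_j / (sigma_j^2 + 1)).
L = V @ V.T + S
L_hat = V @ V.T + Y @ np.diag(np.repeat(sig, 2)) @ Y.T
lhs = np.linalg.det(L_hat + np.eye(M)) / np.linalg.det(L + np.eye(M))
rhs = np.prod(1.0 + 2.0 * sig / (sig ** 2 + 1.0))
print(lhs, rhs)                                    # equal up to rounding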
1. What is the focus of the paper regarding Nonsymmetric Determinantal Point Processes (NDPPs)?
2. What are the strengths of the proposed algorithms compared to previous works?
3. How does the reviewer assess the clarity and readability of the paper?
4. Are there any concerns or suggestions regarding the notation and derivations used in the paper?
5. How does the reviewer view the significance and potential impact of the paper's contributions to practical applications and future research on NDPPs?
Summary Of The Paper Review
Summary Of The Paper

The paper provides efficient (linear time in item set size) algorithms for exact sampling from Nonsymmetric Determinantal Point Processes (NDPPs) using a new NDPP decomposition which was introduced recently in an ICLR 2021 Oral (Gartrell et al.), and also a learning algorithm to learn NDPP kernels which are more amenable to some of their sampling algorithms. They also provide empirical results comparing their learned kernels with previous works and compare times for their various sampling algorithms.

Review

The paper is well written, clear, and interesting to read. They are the first to give efficient algorithms for exact sampling from NDPPs (the previous sampling algorithm takes time cubic in ground set size, which is impractical for real-world data). Their work can lead to more applications of NDPPs in practical settings. It is also timely and adds to the growing literature on NDPPs.

Some comments:

For equations 2 and 3 (and similar probabilities elsewhere in the paper which only involve singleton sets), it might help the reader if you use Pr(i ∈ Y | j ∈ Y) rather than Pr(i ⊆ Y | j ⊆ Y). Also, for these specific equations, you mention that Poulson (2019) shows them via the Cholesky decomposition, but that seems overkill. For these specific equations (2 and 3), you can just derive equation 2 by computing Pr({i, j} ⊆ Y) / Pr(j ⊆ Y) = det(K_{i,j}) / det(K_j), and equation 3 by Pr(i ∈ Y, j ∉ Y) / Pr(j ∉ Y) = (Pr(i ∈ Y) − Pr({i, j} ⊆ Y)) / Pr(j ∉ Y) = (det(K_i) − det(K_{i,j})) / (1 − det(K_j)). It seems like Poulson (2019) uses the LU decomposition (in Propositions 3 and 5) because their propositions are more general and apply to disjoint subsets (possibly having more than one element). Also, I'm not very familiar with these factorizations, but it seems like the LU decomposition is different from the Cholesky decomposition (at least from my understanding reading the wiki for Cholesky decomposition). Experts might be irritated by this (or maybe it's fine, in which case please let me know).
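As a quick numerical check of the determinant identities suggested in the review above, the following NumPy snippet evaluates Eqs. (2) and (3) both in the form given in the paper and through 2-by-2 determinants of the marginal kernel. It uses a small random nonsymmetric kernel of our own choosing, and indices follow the paper's convention of conditioning on item i.

import numpy as np

rng = np.random.default_rng(0)
M, Kr = 6, 2
V, B = rng.normal(size=(M, Kr)), rng.normal(size=(M, Kr))
D = rng.normal(size=(Kr, Kr))
L = V @ V.T + B @ (D - D.T) @ B.T
K = np.eye(M) - np.linalg.inv(L + np.eye(M))       # marginal kernel
i, j = 0, 1
Kij = K[np.ix_([i, j], [i, j])]

# Eq. (2): Pr(j in Y | i in Y) = K_jj - K_ji K_ij / K_ii = det(K_{i,j}) / det(K_i).
print(K[j, j] - K[j, i] * K[i, j] / K[i, i],
      np.linalg.det(Kij) / K[i, i])

# Eq. (3): Pr(j in Y | i not in Y) = K_jj - K_ji K_ij / (K_ii - 1)
#          = (det(K_j) - det(K_{i,j})) / (1 - det(K_i)).
print(K[j, j] - K[j, i] * K[i, j] / (K[i, i] - 1.0),
      (K[j, j] - np.linalg.det(Kij)) / (1.0 - K[i, i]))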
• Sublinear-time sampling (Section 4): Using rejection sampling, we show how to leverage existing sublinear-time samplers for symmetric DPPs to implement a sublinear-time sampler for a subclass of NDPPs that we call orthogonal NDPPs (ONDPPs). • Learning with orthogonality constraints (Section 5): We show that the scalable NDPP kernel learning of Gartrell et al. (2021) can be slightly modified to impose an orthogonality constraint, yielding the ONDPP subclass. The constraint allows us to control the rejection sampling algorithm’s rejection rate, ensuring its scalability. Experiments suggest that the predictive performance of the kernels is not degraded by this change. For a common large-scale setting where M is 1 million, our sublinear-time sampler results in runtime that is hundreds of times faster than the linear-time sampler. In the same setting, our linear-time sampler provides runtime that is millions of times faster than the only previously known NDPP sampling algorithm, which has cubic time complexity and is thus impractical in this scenario. 2 BACKGROUND Notation. We use [M ] := {1, . . . ,M} to denote the set of items 1 through M . We use IK to denote the K-by-K identity matrix, and often write I := IM when the dimensionality should be clear from context. Given L ∈ RM×M , we use Li,j to denote the entry in the i-th row and j-th column, and LA,B ∈ R|A|×|B| for the submatrix formed by taking rows A and columns B. We also slightly abuse notation to denote principal submatrices with a single subscript, LA := LA,A. Kernels. As discussed earlier, both (symmetric) DPPs and NDPPs define a probability distribution over all 2M subsets of a ground set [M ]. The distribution is parameterized by a kernel matrix L ∈ RM×M and the probability of a subset Y ⊆ [M ] is defined to be Pr(Y ) ∝ det(LY ). For this to define a valid distribution, it must be the case that det(LY ) ≥ 0 for all Y . For symmetric DPPs, the non-negativity requirement is identical to a requirement that L be positive semi-definite (PSD). For nonsymmetric DPPs, there is no such simple correspondence, but prior work such as Gartrell et al. (2019; 2021) has focused on PSD matrices for simplicity. Normalizing and marginalizing. The normalizer of a DPP or NDPP distribution can also be written as a single determinant: ∑ Y⊆[M ] det(LY ) = det(L+ I) (Kulesza & Taskar, 2012, Theorem 2.1). Additionally, the marginal probability of a subset can be written as a determinant: Pr(A ⊆ Y ) = det(KA), for K := I − (L+ I)−1 (Kulesza & Taskar, 2012, Theorem 2.2)*, where K is typically called the marginal kernel. Intuition. The diagonal element Ki,i is the probability that item i is included in a set sampled from the model. The 2-by-2 determinant det(K{i,j}) = Ki,iKj,j −Ki,jKj,j is the probability that both i and j are included in the sample. A symmetric DPP has a symmetric marginal kernel, meaning Ki,j = Kj,i, and hence Ki,iKj,j −Ki,jKj,i ≤ Ki,iKj,j . This implies that the probability of including both i and j in the sampled set cannot be greater than the product of their individual inclusion probabilities. Hence, symmetric DPPs can only encode negative correlations. In contrast, NDPPs can have Ki,j and Kj,i with differing signs, allowing them to also capture positive correlations. 2.1 RELATED WORK Learning. Gartrell et al. (2021) proposes a low-rank kernel decomposition for NDPPs that admits linear-time learning. 
The decomposition takes the form L := V V ⊤ + B(D − D⊤)B⊤ for *The proofs in Kulesza & Taskar (2012) typically assume a symmetric kernel, but this particular one does not rely on the symmetry. Algorithm 1 Cholesky-based NDPP sampling (Poulson, 2019, Algorithm 1) 1: procedure SAMPLECHOLESKY(K) ▷ marginal kernel factorization Z,W 2: Y ← ∅ Q←W 3: for i = 1 to M do 4: pi ←Ki,i pi ← z⊤i Qzi 5: u← uniform(0, 1) 6: if u ≤ pi then Y ← Y ∪ {i} 7: else pi ← pi − 1 8: KA ←KA − KA,iKi,Api for A := {i+ 1, . . . ,M} Q← Q− Qziz ⊤ i Q pi 9: return Y V ,B ∈ RM×K , and D ∈ RK×K . The V V ⊤ component is a rank-K symmetric matrix, which can model negative correlations between items. The B(D −D⊤)B⊤ component is a rank-K skewsymmetric matrix, which can model positive correlations between items. For compactness of notation, we will write L = ZXZ⊤, where Z = [ V B ] ∈ RM×2K , and X = [ IK 0 0 D−D⊤ ] ∈ R2K×2K . The marginal kernel in this case also has a rank-2K decomposition, as can be shown via application of the Woodbury matrix identity: K := I − (I +L)−1 = ZX ( I2K +Z ⊤ZX )−1 Z⊤. (1) Note that the matrix to be inverted can be computed from Z and X in O(MK2) time, and the inverse itself takes O(K3) time. Thus, K can be computed from L in time O(MK2). We will develop sampling algorithms for this decomposition, as well as an orthogonality-constrained version of it. We use W := X ( I2K +Z ⊤ZX )−1 in what follows so that we can compactly write K = ZWZ⊤. Sampling. While there are a number of exact sampling algorithms for DPPs with symmetric kernels, the only published algorithm that clearly can directly apply to NDPPs is from Poulson (2019) (see Theorem 2 therein). This algorithm begins with an empty set Y = ∅ and iterates through the M items, deciding for each whether or not to include it in Y based on all of the previous inclusion/exclusion decisions. Poulson (2019) shows, via the Cholesky decomposition, that the necessary conditional probabilities can be computed as follows: Pr (j ∈ Y | i ∈ Y ) = Pr({i, j} ⊆ Y ) Pr(i ∈ Y ) = Kj,j − (Kj,iKi,j) /Ki,i, (2) Pr (j ∈ Y | i /∈ Y ) = Pr(j ∈ Y )− Pr({i, j} ⊆ Y ) Pr(i /∈ Y ) = Kj,j − (Kj,iKi,j) / (Ki,i − 1) . (3) Algorithm 1 (left-hand side) gives pseudocode for this Cholesky-based sampling algorithm†. There has also been some recent work on approximate sampling for fixed-size k-NDPPs: Alimohammadi et al. (2021) provide a Markov chain Monte Carlo (MCMC) algorithm and prove that the overall runtime to approximate ε-close total variation distance is bounded by O(M2k3 log(1/(εPr(Y0))), where Pr(Y0) is probability of an initial state Y0. Improving this runtime is an interesting avenue for future work, but for this paper we focus on exact sampling. 3 LINEAR-TIME CHOLESKY-BASED SAMPLING In this section, we show that the O(M3) runtime of the Cholesky-based sampler from Poulson (2019) can be significantly improved when using the low-rank kernel decomposition of Gartrell et al. (2021). First, note that Line 8 of Algorithm 1, where all marginal probabilities are updated via an (M − i)-by-(M − i) matrix subtraction, is the most costly part of the algorithm, making overall time and memory complexities O(M3) and O(M2), respectively. However, when the DPP kernel is given by a low-rank decomposition, we observe that marginal probabilities can be updated by matrix-vector †Cholesky decomposition is defined only for a symmetric positive definite matrix. 
However, we use the term “Cholesky” from Poulson (2019) to maintain consistency with this work, although Algorithm 1 is valid for nonsymmetric matrices. Algorithm 2 Rejection NDPP sampling (Tree-based sampling) 1: procedure PREPROCESS(V ,B,D) 2: {(σj ,y2j−1,y2j)}K/2j=1 ← YOULADECOMPOSE(B,D)‡ 3: X̂ ← diag ( IK , σ1, σ1, . . . , σK/2, σK/2 ) 4: Z ← [V ,y1, . . . ,yK ] {(λi, zi)}2Ki=1 ← EIGENDECOMPOSE(ZX̂1/2) T ← CONSTRUCTTREE(M, [z1, . . . ,z2K ]⊤) 5: return Z, X̂ return T , {(λi, zi)}2Ki=1 6: procedure SAMPLEREJECT(V ,B,D,Z, X̂) ▷ tree T , eigen pair {(λi, zi)}2Ki=1 of ZX̂Z 7: while true do 8: Y ← SAMPLEDPP(ZX̂Z⊤) Y ← SAMPLEDPP(T , {(λi, zi)}2Ki=1) 9: u← uniform(0, 1) 10: p← det([V V ⊤+B(D−D⊤)B⊤]Y ) det([ZX̂Z⊤]Y ) 11: if u ≤ p then break 12: return Y multiplications of dimension 2K, regardless of M . In more detail, suppose we have the marginal kernel K = ZWZ⊤ as in Eq. (1) and let zj be the j-th row vector in Z. Then, for i ̸= j: Pr (j ∈ Y | i ∈ Y ) = Kj,j − (Kj,iKi,j)/Ki,i = z⊤j ( W − (Wzi)(z ⊤ i W ) z⊤i Wzi ) zj , (4) Pr (j ∈ Y | i /∈ Y ) = z⊤j ( W − (Wzi)(z ⊤ i W ) z⊤i Wzi − 1 ) zj . (5) The conditional probabilities in Eqs. (4) and (5) are of bilinear form, and the zj do not change during sampling. Hence, it is enough to update the 2K-by-2K inner matrix at each iteration, and obtain the marginal probability by multiplying this matrix by zi. The details are shown on the right-hand side of Algorithm 1. The overall time and memory complexities are O(MK2) and O(MK), respectively. 4 SUBLINEAR-TIME REJECTION SAMPLING Although the Cholesky-based sampler runs in time linear in M , even this is too expensive for the large M that are often encountered in real-world datasets. To improve runtime, we consider rejection sampling (Von Neumann, 1963). Let p be the target distribution that we aim to sample, and let q be any distribution whose support corresponds to that of p; we call q the proposal distribution. Assume that there is a universal constant U such that p(x) ≤ Uq(x) for all x. In this setting, rejection sampling draws a sample x from q and accepts it with probability p(x)/(Uq(x)), repeating until an acceptance occurs. The distribution of the resulting samples is p. It is important to choose a good proposal distribution q so that sampling is efficient and the number of rejections is small. 4.1 PROPOSAL DPP CONSTRUCTION Our first goal is to find a proposal DPP with symmetric kernel L̂ that can upper-bound all probabilities of samples from the NDPP with kernel L within a constant factor. To this end, we expand the determinant of a principal submatrix, det(LY ), using the spectral decomposition of the NDPP kernel. Such a decomposition essentially amounts to combining the eigendecomposition of the symmetric part of L with the Youla decomposition (Youla, 1961) of the skew-symmetric part. Specifically, suppose {(σj ,y2j−1,y2j)}K/2j=1 is the Youla decomposition of B(D −D⊤)B⊤ (see Appendix D for more details), that is, B(D −D⊤)B⊤ = K/2∑ j=1 σj ( y2j−1y ⊤ 2j − y2jy⊤2j−1 ) . (6) ‡Pseudo-code of YOULADECOMPOSE is provided in Algorithm 4. See Appendix D. Then we can simply write L = ZXZ⊤, for Z := [V ,y1, . . . ,yK ] ∈ RM×2K , and X := diag ( IK , [ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]) . (7) Now, consider defining a related but symmetric PSD kernel L̂ := ZX̂Z⊤ with X̂ := diag ( IK , σ1, σ1, . . . , σK/2, σK/2 ) . All determinants of the principal submatrices of L̂ = ZX̂Z⊤ upper-bound those of L, as stated below. Theorem 1. For every subset Y ⊆ [M ], it holds that det(LY ) ≤ det(L̂Y ). 
Moreover, equality holds when the size of Y is equal to the rank of L. Proof sketch: From the Cauchy-Binet formula, the determinants of LY and L̂Y for all Y ⊆ [M ], |Y | ≤ 2K can be represented as det(LY ) = ∑ I⊆[K],|I|=|Y | ∑ J⊆[K],|J|=|Y | det(XI,J) det(ZY,I) det(ZY,J), (8) det(L̂Y ) = ∑ I⊆[2K],|I|=|Y | det(X̂I) det(ZY,I) 2. (9) Many of the terms in Eq. (8) are actually zero due to the block-diagonal structure of X . For example, note that if 1 ∈ I but 1 /∈ J , then there is an all-zeros row in XI,J , making det(XI,J) = 0. We show that each XI,J with nonzero determinant is a block-diagonal matrix with diagonal entries among ±σj , or [ 0 σj −σj 0 ] . With this observation, we can prove that det(XI,J) is upper-bounded by det(X̂I) or det(X̂J). Then, through application of the rearrangement inequality, we can upper-bound the sum of the det(XI,J) det(ZY,I) det(ZY,J) in Eq. (8) with a sum over det(X̂I) det(ZY,I)2. Finally, we show that the number of non-zero terms in Eq. (8) is identical to the number of non-zero terms in Eq. (9). Combining these gives us the desired inequality det(LY ) ≤ det(L̂Y ). The full proof of Theorem 1 is in Appendix E.1. Now, recall that the normalizer of a DPP (or NDPP) with kernel L is det(L + I). The ratio of probability of the NDPP with kernel L to that of a DPP with kernel L̂ is thus: PrL(Y ) PrL̂(Y ) = det(LY )/det(L+ I) det(L̂Y )/det(L̂+ I) ≤ det(L̂+ I) det(L+ I) , where the inequality follows from Theorem 1. This gives us the necessary universal constant U upper-bounding the ratio of the target distribution to the proposal distribution. Hence, given a sample Y drawn from the DPP with kernel L̂, we can use acceptance probability PrL(Y )/(U PrL̂(Y )) = det(LY )/ det(L̂Y ). Pseudo-codes for proposal construction and rejection sampling are given in Algorithm 2. Note that to derive L̂ from L it suffices to run the Youla decomposition of B(D − D⊤)B⊤, because the difference is only in the skew-symmetric part. This decomposition can run in O(MK2) time; more details are provided in Appendix D. Since L̂ is a symmetric PSD matrix, we can apply existing fast DPP sampling algorithms to sample from it. In particular, in the next section we combine a fast tree-based method with rejection sampling. 4.2 SUBLINEAR-TIME TREE-BASED SAMPLING There are several DPP sampling algorithms that run in sublinear time, such as tree-based (Gillenwater et al., 2019) and intermediate (Derezinski et al., 2019) sampling algorithms. Here, we consider applying the former, a tree-based approach, to sample from the proposal distribution defined by L̂. We give some details of the sampling procedure, as in the course of applying it we discovered an optimization that slightly improves on the runtime of prior work. Formally, let {(λi, zi)}2Ki=1 be the eigendecomposition of L̂ and Z := [z1, . . . ,z2K ] ∈ RM×2K . As shown in Kulesza & Taskar (2012, Lemma 2.6), for every Y ⊆ [M ], |Y | ≤ 2K, the probability of Y under DPP with L̂ can be written: PrL̂(Y ) = det(L̂Y ) det(L̂+ I) = ∑ E⊆[2K],|E|=|Y | det(ZY,EZ ⊤ Y,E) ∏ i∈E λi λi + 1 ∏ i/∈E 1 λi + 1 . (10) Algorithm 3 Tree-based DPP sampling (Gillenwater et al., 2019) 1: procedure BRANCH(A,Z) 2: if A = {j} then 3: T .A← {j}, T .Σ← Z⊤j,:Zj,: 4: return T 5: Aℓ, Ar ← Split A in half 6: T .left← BRANCH(Aℓ,Z) 7: T .right← BRANCH(Ar,Z) 8: T .Σ← T .left.Σ+ T .right.Σ 9: return T 10: procedure CONSTRUCTTREE(M , Z) 11: return BRANCH([M ], Z) 12: procedure SAMPLEDPP(T ,Z, {λi}Ki=1) 13: E ← ∅, Y ← ∅, QY ← 0 14: for i = 1, . . . ,K do 15: E ← E ∪ {i} w.p. 
λi/(λi + 1) 16: for k = 1, . . . , |E| do 17: j ← SAMPLEITEM(T ,QY , E) 18: Y ← Y ∪ {j} 19: QY← I|E|−Z⊤Y,E ( ZY,EZ ⊤ Y,E )−1 ZY,E 20: return Y 21: procedure SAMPLEITEM(T ,QY , E) 22: if T is a leaf then return T .A 23: pℓ ← 〈 T .left.ΣE ,QY 〉 24: pr ← 〈 T .right.ΣE ,QY 〉 25: u← uniform(0, 1) 26: if u ≤ pℓpℓ+pr then 27: return SAMPLEITEM(T .left,QY , E) 28: else 29: return SAMPLEITEM(T .right,QY , E) A matrix of the form Z:,EZ⊤:,E can be a valid marginal kernel for a special type of DPP, called an elementary DPP. Hence, Eq. (10) can be thought of as DPP probabilities expressed as a mixture of elementary DPPs. Based on this mixture view, DPP sampling can be done in two steps: (1) choose an elementary DPP according to its mixture weight, and then (2) sample a subset from the selected elementary DPP. Step (1) can be performed by 2K independent random coin tossings, while step (2) involves computational overhead. The key idea of tree-based sampling is that step (2) can be accelerated by traversing a binary tree structure, which can be done in time logarithmic in M . More specifically, given the marginal kernel K = Z:,EZ⊤:,E , where E is obtained from step (1), we start from the empty set Y = ∅ and repeatedly add an item j to Y with probability: Pr(j ∈ S | Y ⊆ S) = Kj,j −Kj,Y (KY )−1KY,j = Zj,EQY Z⊤j,E = 〈 QY , (Z⊤j,:Zj,:)E 〉 , (11) where S is some final selected subset, and QY := I|E| − Z⊤Y,E ( ZY,EZ ⊤ Y,E )−1 ZY,E . Consider a binary tree whose root includes a ground set [M ]. Every non-leaf node contains a subset A ⊆ [M ] and stores a 2K-by-2K matrix ∑ j∈A Z ⊤ j,:Zj,:. A partition Aℓ and Ar, such that Aℓ∪Ar = A,Aℓ∩Ar = ∅, are passed to its left and right subtree, respectively. The resulting tree has M leaves and each has exactly a single item. Then, one can sample a single item by recursively moving down to the left node with probability: pℓ = ⟨QY ,∑j∈Aℓ(Z⊤j,:Zj,:)E⟩ ⟨QY ,∑j∈A(Zj,:Z⊤j,:)E⟩ , (12) or to the right node with probability 1− pℓ, until reaching a leaf node. An item in the leaf node is chosen with probability according to Eq. (11). Since every subset in the support of an elementary DPP with a rank-k kernel has exactly k items, this process is repeated for |E| iterations. Full descriptions of tree construction and sampling are provided in Algorithm 3. The proposed tree-based rejection sampling for an NDPP is outlined on the right-side of Algorithm 2. The one-time pre-processing step of constructing the tree (CONSTRUCTTREE) requires O(MK2) time. After pre-processing, the procedure SAMPLEDPP involves |E| traversals of a tree of depth O(logM), where in each node a O(|E|2) operation is required. The overall runtime is summarized in Proposition 1 and the proof can be found in Appendix E.2. Proposition 1. The tree-based sampling procedure SAMPLEDPP in Algorithm 3 runs in time O(K + k3 logM + k4), where k is the size of the sampled set§. §Computing pℓ via Eq. (12) improves on Gillenwater et al. (2019)’s O(k4 logM) runtime for this step. 4.3 AVERAGE NUMBER OF REJECTIONS We now return to rejection sampling and focus on the expected number of rejections. The number of rejections of Algorithm 2 is known to be a geometric random variable with mean equal to the constant U used to upper-bound the ratio of the target distribution to the proposal distribution: det(L̂+ I)/ det(L+ I). If all columns in V and B are orthogonal, which we denote V ⊥ B, then the expected number of rejections depends only on the eigenvalues of the skew-symmetric part of the NDPP kernel. Theorem 2. 
Given an NDPP kernel L = V V ⊤ + B(D −D⊤)B⊤ for V ,B ∈ RM×K ,D ∈ RK×K , consider the proposal kernel L̂ as proposed in Section 4.1. Let {σj}K/2j=1 be the positive eigenvalues obtained from the Youla decomposition of B(D−D⊤)B⊤. If V ⊥ B, then det(L̂+I)det(L+I) =∏K/2 j=1 ( 1 + 2σj σ2j+1 ) ≤ (1 + ω)K/2, where ω = 2K ∑K/2 j=1 2σj σ2j+1 ∈ (0, 1]. Proof sketch: Orthogonality between V and B allows det(L+ I) to be expressed just in terms of the eigenvalues of V V ⊤ and B(D −D⊤)B⊤. Since both L and L̂ share the symmetric part V V ⊤, the ratio of determinants only depends on the skew-symmetric part. A more formal proof appears in Appendix E.3. Assuming we have a kernel where V ⊥ B, we can combine Theorem 2 with the tree-based rejection sampling algorithm (right-side in Algorithm 2) to sample in time O((K+k3 logM+k4)(1+ω)K/2). Hence, we have a sampling algorithm that is sublinear in M , and can be much faster than the Choleskybased algorithm when (1 + ω)K/2 ≪M . In the next section, we introduce a learning scheme with the V ⊥ B constraint, as well as regularization to ensure that ω is small. 5 LEARNING WITH ORTHOGONALITY CONSTRAINTS We aim to learn a NDPP that provides both good predictive performance and a low rejection rate. We parameterize our NDPP kernel matrix L = V V ⊤ +B(D −D⊤)B⊤ by D = diag ([ 0 σ1 0 0 ] , . . . , [ 0 σK/2 0 0 ]) (13) for σj ≥ 0, B⊤B = I , and, motivated by Theorem 2, require V ⊤B = 0¶. We call such orthogonality-constrained NDPPs “ONDPPs”. Notice that if V ⊥ B, then L has the full rank of 2K, since the intersection of the column spaces spanned by V and by B is empty, and thus the full rank available for modeling can be used. Thus, this constraint can also be thought of as simply ensuring that ONDPPs use the full rank available to them. Given example subsets {Y1, . . . , Yn} as training data, learning is done by minimizing the regularized negative log-likelihood: min V ,B,{σj}K/2j=1 − 1 n n∑ i=1 log ( det(LYi) det(L+ I) ) + α M∑ i=1 ∥vi∥22 µi + β M∑ i=1 ∥bi∥22 µi + γ K/2∑ j=1 log ( 1 + 2σj σ2j + 1 ) (14) where α, β, γ > 0 are hyperparameters, µi is the frequency of item i in the training data, and vi and bi represent the rows of V and B, respectively. This objective is very similar to that of Gartrell et al. (2021), except for the orthogonality constraint and the final regularization term. Note that this regularization term corresponds exactly to the logarithm of the average rejection rate, and therefore should help to control the number of rejections. 6 EXPERIMENTS We first show that the orthogonality constraint from Section 5 does not degrade the predictive performance of learned kernels. We then compare the speed of our proposed sampling algorithms. ¶Technical details: To learn NDPP models with the constraint V ⊤B = 0, we project V according to: V ← V − B(B⊤B)−1(B⊤V ). For the B⊤B = I constraint, we apply QR decomposition on B. Note that both operations require O(MK2) time. (Constrained learning and sampling code is provided at https://github.com/insuhan/nonsymmetric-dpp-sampling. We use Pytorch’s linalg.solve to avoid the expense of explicitly computing the (B⊤B)−1 inverse.) Hence, our learning time complexity is identical to that of Gartrell et al. (2021). 6.1 PREDICTIVE PERFORMANCE RESULTS FOR NDPP LEARNING We benchmark various DPP models, including symmetric (Gartrell et al., 2017), nonsymmetric for scalable learning (Gartrell et al., 2021), as well as our ONDPP kernels with and without rejection rate regularization. 
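Before detailing the benchmark setup, the following sketch makes the Section 5 recipe concrete: B is re-orthonormalized and V is re-projected so that V⊤B = 0 after each gradient step, and the loss is the regularized negative log-likelihood of Eq. (14), written with the low-rank decomposition so that the normalizer never needs an M-by-M determinant. This is a minimal PyTorch sketch under our own naming, not the authors' released implementation; the εI stabilizer follows Appendix C, and σ is assumed to be kept elementwise nonnegative (e.g., via a softplus reparameterization).

```python
import torch

def enforce_ondpp_constraints(V, B):
    # Re-impose the ONDPP constraints after a gradient step (sketch of the projection
    # described in the Section 5 footnote): QR gives B^T B = I, and removing the
    # span(B) component of V gives V^T B = 0. Both steps cost O(M K^2).
    B, _ = torch.linalg.qr(B)
    V = V - B @ (B.T @ V)
    return V, B

def ondpp_objective(V, B, sigma, subsets, item_freq, alpha, beta, gamma, eps=1e-5):
    # Regularized negative log-likelihood of Eq. (14). V, B: (M, K); sigma: (K/2,)
    # nonnegative; subsets: list of index tensors; item_freq: (M,) item frequencies mu_i.
    M, K = V.shape
    S = torch.zeros(K, K)                      # S = D - D^T, block-diagonal skew-symmetric
    for j, s in enumerate(sigma):
        S[2 * j, 2 * j + 1], S[2 * j + 1, 2 * j] = s, -s
    Z = torch.cat([V, B], dim=1)               # M x 2K
    X = torch.block_diag(torch.eye(K), S)      # 2K x 2K, so L = Z X Z^T
    # det(L + I_M) = det(I_2K + X Z^T Z) by Sylvester's determinant identity: O(M K^2).
    log_norm = torch.logdet(torch.eye(2 * K) + X @ (Z.T @ Z))
    nll = 0.0
    for Y in subsets:                          # det(L_Y) >= 0 for a valid NDPP kernel
        L_Y = Z[Y] @ X @ Z[Y].T
        nll = nll - torch.logdet(L_Y + eps * torch.eye(len(Y)))
    nll = nll / len(subsets) + log_norm
    reg = alpha * ((V ** 2).sum(dim=1) / item_freq).sum() \
        + beta * ((B ** 2).sum(dim=1) / item_freq).sum() \
        + gamma * torch.log1p(2 * sigma / (sigma ** 2 + 1)).sum()
    return nll + reg
```

In a training loop, `ondpp_objective` would be differentiated with Adam as in Appendix B, with `enforce_ondpp_constraints` applied after each optimizer step; the role of the γ term is discussed further below.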
We use the scalable NDPP models (Gartrell et al., 2021) as a baseline||. The kernel components of each model are learned using five real-world recommendation datasets, which have ground set sizes that range from 3,941 to 1,059,437 items (see Appendix A for more details). Our experimental setup and metrics mirror those of Gartrell et al. (2021). We report the mean percentile rank (MPR) metric for a next-item prediction task, the AUC metric for subset discrimination, and the log-likelihood of the test set; see Appendix B for more details on the experiments and metrics. For all metrics, higher numbers are better. For NDPP models, we additionally report the average rejection rates when they apply to rejection sampling.

In Table 2, we observe that the predictive performance of our ONDPP models generally matches or sometimes exceeds the baseline. This is likely because the orthogonality constraint enables more effective use of the full rank-2K feature space. Moreover, imposing the regularization on the rejection rate, as shown in Eq. (14), often leads to dramatically smaller rejection rates, while the impact on predictive performance is generally marginal. These results justify the ONDPP parameterization and regularization for fast sampling. Finally, we observe that the learning time of our ONDPP models is typically a bit longer than that of the NDPP models, but still quite reasonable (e.g., the time per iteration for the NDPP takes 27 seconds for the Book dataset, while our ONDPP takes 49.7 seconds).

Fig. 1 shows how the regularizer γ affects the test log-likelihood and the average number of rejections. We see that γ degrades predictive performance and reduces the rejection rate when set above a certain threshold; this behavior is seen for many datasets. However, for the Recipe dataset we observed that the test log-likelihood is not very sensitive to γ, likely because all models in our experiments achieve very high performance on this dataset. In general, we observe that γ can be set to a value that results in a small rejection rate, while having minimal impact on predictive performance.

6.2 SAMPLING TIME COMPARISON

We benchmark the Cholesky-based sampling algorithm (Algorithm 1) and the tree-based rejection sampling algorithm (Algorithm 2) on ONDPPs with both synthetic and real-world data.

||We use the code from https://github.com/cgartrel/scalable-nonsymmetric-DPPs for the NDPP baseline, which is made available under the MIT license. To simplify learning and MAP inference, Gartrell et al. (2021) set B = V in their experiments. However, since we have the V ⊥ B constraint in our ONDPP approach, we cannot set B = V. Hence, for a fair comparison, we do not set B = V for the NDPP baseline in our experiments, and thus the results in Table 2 differ slightly from those published in Gartrell et al. (2021).

Figure 1: Average number of rejections (a) and test log-likelihood (b) with different values of the regularizer γ for ONDPPs trained on the UK Retail dataset. Shaded regions are 95% confidence intervals of 10 independent trials.
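The connection between γ and Fig. 1(a) can be made explicit: by Theorem 2, when V ⊥ B the average number of rejections equals Π_j (1 + 2σ_j/(σ_j² + 1)), and the γ-weighted term in Eq. (14) is exactly the logarithm of this product. A small helper (our own name, a sketch rather than released code) that reads the expected rejection count off the learned σ values:

```python
import torch

def expected_rejections(sigma):
    # Theorem 2 (assuming V ⊥ B): average # of rejections = det(L_hat + I) / det(L + I)
    #   = prod_j (1 + 2 sigma_j / (sigma_j**2 + 1)),
    # i.e., exp of the gamma-regularized term in Eq. (14).
    return torch.exp(torch.log1p(2 * sigma / (sigma ** 2 + 1)).sum())
```

Each factor is maximized at σ_j = 1, where it equals 2 (so the (1 + ω)^{K/2} bound is at most 2^{K/2}), and approaches 1 as σ_j goes to 0 or grows large, which is why increasing γ in Fig. 1(a) drives the rejection count down.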
Figure 2: Wall-clock time (sec) for synthetic data for (a) NDPP sampling algorithms and (b) preprocessing steps for the rejection sampling, as the ground set size M varies. Shaded regions are 95% confidence intervals from 100 independent trials.

Synthetic datasets. We generate non-uniform random features for V, B as done by Han & Gillenwater (2020). In particular, we first sample x_1, ..., x_100 from N(0, I_{2K}/(2K)), and integers t_1, ..., t_100 from a Poisson distribution with mean 5, rescaling the integers such that Σ_i t_i = M. Next, we draw t_i random vectors from N(x_i, I_{2K}), and assign the first K-dimensional vectors as the row vectors of V and the latter vectors as those of B. Each entry of D is sampled from N(0, 1). We choose K = 100 and vary M from 2^12 to 2^20.

Fig. 2(a) illustrates the runtimes of Algorithms 1 and 2. We verify that the rejection sampling time tends to increase sublinearly with the ground set size M, while the Cholesky-based sampler runs in linear time. In Fig. 2(b), the runtimes of the preprocessing steps for Algorithm 2 (i.e., spectral decomposition and tree construction) are reported. Although the rejection sampler requires these additional processes, they are one-time steps and run much faster than a single run of the Cholesky-based method for M = 2^20.

Real-world datasets. In Table 3, we report the runtimes and speedups of NDPP sampling algorithms for real-world datasets. All NDPP kernels are obtained using learning with orthogonality constraints, with rejection rate regularization as reported in Section 6.1. We observe that the tree-based rejection sampling runs up to 246 times faster than the Cholesky-based algorithm. For larger datasets, we expect that this gap would significantly increase. As with the synthetic experiments, we see that the tree construction pre-processing time is comparable to the time required to draw a single sample via the other methods, and thus the tree-based method is often the best choice for repeated sampling**.

7 CONCLUSION

In this work we developed scalable sampling methods for NDPPs. One limitation of our rejection sampler is its practical restriction to the ONDPP subclass. Other opportunities for future work include the extension of our rejection sampling approach to the generation of fixed-size samples (from k-NDPPs), the development of approximate sampling techniques, and the extension of DPP samplers along the lines of Derezinski et al. (2019); Calandriello et al. (2020) to NDPPs. Scalable sampling also opens the door to using NDPPs as building blocks in probabilistic models.

**We note that the tree can consume substantial memory, e.g., 169.5 GB for the Book dataset with K = 100. For settings where this scale of memory use is unacceptable, we suggest use of the intermediate sampling algorithm (Calandriello et al., 2020) in place of tree-based sampling. The resulting sampling algorithm may be slower, but the O(M + K) memory cost is substantially lower.

8 ETHICS STATEMENT

In general, our work moves in a positive direction by substantially decreasing the computational costs of NDPP sampling.
When using our constrained learning method to learn kernels from user data, we recommend employing a technique such as differentially-private SGD (Abadi et al., 2016) to help prevent user data leaks, and adjusting the weights on training examples to balance the impact of sub-groups of users so as to make the final kernel as fair as possible. As far as we are aware, the datasets used in this work do not contain personally identifiable information or offensive content. We were not able to determine if user consent was explicitly obtained by the organizations that constructed these datasets. 9 REPRODUCIBILITY STATEMENT We have made extensive effort to ensure that all algorithmic, theoretical, and experimental contributions described in this work are reproducible. All of the code implementing our constrained learning and sampling algorithms is publicly available ††. The proofs for our theoretical contributions are available in Appendix E. For our experiments, all dataset processing steps, experimental procedures, and hyperparameter settings are described in Appendices A, B, and C, respectively. 10 ACKNOWLEDGEMENTS Amin Karbasi acknowledges funding in direct support of this work from NSF (IIS-1845032) and ONR (N00014-19-1-2406). A FULL DETAILS ON DATASETS We perform experiments on several real-world public datasets composed of subsets: • UK Retail: This dataset (Chen et al., 2012) contains baskets representing transactions from an online retail company that sells all-occasion gifts. We omit baskets with more than 100 items, leaving us with a dataset containing 19,762 baskets drawn from a catalog of M = 3,941 products. Baskets containing more than 100 items are in the long tail of the basket-size distribution, so omitting these is reasonable, and allows us to use a low-rank factorization of the NDPP with K = 100. • Recipe: This dataset (Majumder et al., 2019) contains recipes and food reviews from Food.com (formerly Genius Kitchen)‡‡. Each recipe (“basket”) is composed of a collection of ingredients, resulting in 178,265 recipes and a catalog of 7,993 ingredients. • Instacart: This dataset (Instacart, 2017) contains baskets purchased by Instacart users§§. We omit baskets with more than 100 items, resulting in 3.2 million baskets and a catalog of 49,677 products. • Million Song: This dataset (McFee et al., 2012) contains playlists (“baskets”) of songs from Echo Nest users¶¶. We trim playlists with more than 100 items, leaving 968,674 playlists and a catalog of 371,410 songs. • Book: This dataset (Wan & McAuley, 2018) contains reviews from the Goodreads book review website, including a variety of attributes describing the items***. For each user we build a subset (“basket”) containing the books reviewed by that user. We trim subsets with more than 100 books, resulting in 430,563 subsets and a catalog of 1,059,437 books. As far as we are aware, these datasets do not contain personally identifiable information or offensive content. While the UK Retail dataset is publicly available, we were unable to find a license for it. Also, we were not able to determine if user consent was explicitly obtained by the organizations that constructed these datasets. B FULL DETAILS ON EXPERIMENTAL SETUP AND METRICS We use 300 randomly-selected baskets as a held-out validation set, for tracking convergence during training and for tuning hyperparameters. Another 2000 random baskets are used for testing, and the rest are used for training. 
Convergence is reached during training when the relative change in validation log-likelihood is below a predetermined threshold. We use PyTorch with Adam (Kingma & Ba, 2015) for optimization. We initialize D from the standard Gaussian distribution N (0, 1), while V and B are initialized from the uniform(0, 1) distribution. Subset expansion task. We use greedy conditioning to do next-item prediction (Gartrell et al., 2021, Section 4.2). We compare methods using a standard recommender system metric: mean percentile rank (MPR) (Hu et al., 2008; Li et al., 2010). MPR of 50 is equivalent to random selection; MPR of 100 means that the model perfectly predicts the next item. See Appendix B.1 for a complete description of the MPR metric. Subset discrimination task. We also test the ability of a model to discriminate observed subsets from randomly generated ones. For each subset in the test set, we generate a subset of the same length by drawing items uniformly at random (and we ensure that the same item is not drawn more than once for a subset). We compute the AUC for the model on these observed and random subsets, where the score for each subset is the log-likelihood that the model assigns to the subset. ‡‡See https://www.kaggle.com/shuyangli94/food-com-recipes-and-user-interactions for the license for this public dataset. §§This public dataset is available for non-commercial use; see https://www.instacart.com/datasets/ grocery-shopping-2017 for the license. ¶¶See http://millionsongdataset.com/faq/ for the license for this public dataset. ***This public dataset is available for academic use only; see https://sites.google.com/eng.ucsd.edu/ ucsdbookgraph/home for the license. B.1 MEAN PERCENTILE RANK We begin our definition of MPR by defining percentile rank (PR). First, given a set J , let pi,J = Pr(J ∪ {i} | J). The percentile rank of an item i given a set J is defined as PRi,J = ∑ i′ ̸∈J 1(pi,J ≥ pi′,J) |Y\J | × 100% where Y\J indicates those elements in the ground set Y that are not found in J . For our evaluation, given a test set Y , we select a random element i ∈ Y and compute PRi,Y \{i}. We then average over the set of all test instances T to compute the mean percentile rank (MPR): MPR = 1 |T | ∑ Y ∈T PRi,Y \{i}. C HYPERPARAMETERS FOR EXPERIMENTS Preventing numerical instabilities: The det(LYi) in Eq. (14) will be zero whenever |Yi| > K, where Yi is an observed subset. To address this in practice we set K to the size of the largest subset observed in the data, K ′, as in Gartrell et al. (2017). However, this does not entirely fix the issue, as there is still a chance that the term will be zero even when |Yi| ≤ K. In this case though, we know that we are not at a maximum, since the value of the objective function is −∞. Numerically, to prevent such singularities, in our implementation we add a small ϵI correction to each LYi when optimizing Eq. (14) (ϵ = 10−5 in our experiments). We perform a grid search using a held-out validation set to select the best-performing hyperparameters for each model and dataset. The hyperparameter settings used for each model and dataset are described below. Symmetric low-rank DPP (Gartrell et al., 2017). For this model, we use K for the number of item feature dimensions for the symmetric component V , and α for the regularization hyperparameter for V . We use the following hyperparameter settings: • UK Retail dataset: K = 100, α = 1. • Recipe dataset: K = 100, α = 0.01 • Instacart dataset: K = 100, α = 0.001. • Million Song dataset: K = 100, α = 0.0001. 
• Book dataset: K = 100, α = 0.001 Scalable NDPP (Gartrell et al., 2021). As described in Section 2.1, we use K to denote the number of item feature dimensions for the symmetric component V and the dimensionality of the nonsymmetric component D. α and β are the regularization hyperparameters. We use the following hyperparameter settings: • UK dataset: K = 100, α = 0.01. • Recipe dataset: K = 100, α = β = 0.01. • Instacart dataset: K = 100, α = 0.001. • Million Song dataset: K = 100, α = 0.01. • Book dataset: K = 100, α = β = 0.1 ONDPP. As described in Section 5, we use K to denote the number of item feature dimensions for the symmetric component V and the dimensionality of the nonsymmetric component C. α, β, and γ are the regularization hyperparameters. We use the following hyperparameter settings: • UK dataset: K = 100, α = β = 0.01, γ = 0.5. • Recipe dataset: K = 100, α = β = 0.01, γ = 0.1. • Instacart dataset: K = 100, α = β = 0.001, γ = 0.001. • Million Song dataset: K = 100, α = β = 0.01, γ = 0.2. • Book dataset: K = 100, α = β = 0.01, γ = 0.1. For all of the above model configurations and datasets, we use a batch size of 800 during training. D YOULA DECOMPOSITION: SPECTRAL DECOMPOSITION FOR SKEW-SYMMETRIC MATRIX We provide some basic facts on the spectral decomposition of a skew-symmetric matrix, and introduce an efficient algorithm for this decomposition when it is given by a low-rank factorization. We write i := √ −1 and vH as the conjugate transpose of v ∈ CM , and denote Re(z) and Im(z) by the real and imaginary parts of a complex number z, respectively. Given B ∈ RM×K and D ∈ RK×K , consider a rank-K skew-symmetric matrix B(D −D⊤)B⊤. Note that all nonzero eigenvalues of a real-valued skew-symmetric matrix are purely imaginary. Denote iσ1,−iσ1, . . . , iσK/2,−iσK/2 by its nonzero eigenvalues where each of σj is real, and a1 + ib1,a1 − ib1, . . .aK/2 + ibK/2,aK/2 − ibK/2 by the corresponding eigenvectors for aj , bj ∈ RM , which come in conjugate pairs. Then, we can write B(D −D⊤)B⊤ = K/2∑ j=1 iσj(aj + ibj)(aj + ibj) H − iσj(aj − ibj)(aj − ibj)H (15) = K/2∑ j=1 2σj(ajb ⊤ j − bja⊤j ) (16) = K/2∑ j=1 [ aj − bj aj + bj ] [ 0 σj −σj 0 ] [ a⊤j − b⊤j a⊤j + b ⊤ j ] . (17) Note that a1 ± b1, . . . ,aK/2 ± bK/2 are real-valued orthonormal vectors, because a1, b1, . . . ,aK/2, bK/2 are orthogonal to each other and ∥aj ± bj∥22 = ∥aj∥ 2 2 + ∥bj∥ 2 2 = 1 for all j. The pair {(σj ,aj − bj ,aj + bj)}K/2j=1 is often called the Youla decomposition (Youla, 1961) of B(D −D⊤)B⊤. To efficiently compute the Youla decomposition of a rank-K matrix, we use the following result. Proposition 2 (Proposition 1, Nakatsukasa (2019)). Given A,B ∈ CM×K , the nonzero eigenvalues of AB⊤ ∈ CM×M and B⊤A ∈ CK×K are identical. In addition, if (λ,v) is an eigenpair of B⊤A with λ ̸= 0, then (λ,Av/ ∥Av∥2) is an eigenpair of AB⊤. From the above proposition, one can first compute (D −D⊤)B⊤B and then apply the eigendecomposition to that K-by-K matrix. Taking the imaginary part of the obtained eigenvalues gives us the σj’s, and multiplying B by the eigenvectors gives us the eigenvectors of B(D −D⊤)B⊤. In addition, this can be done in O(MK2 +K3) time; when M > K it runs much faster than the eigendecomposition of B(D −D⊤)B⊤, which requires O(M3) time. The pseudo-code of the Youla decomposition is provided in Algorithm 4. Algorithm 4 Youla decomposition of low-rank skew-symmetric matrix 1: procedure YOULADECOMPOSITION(B,D) 2: {(ηj , zj), (ηj , zj)}K/2j=1 ← eigendecomposition of (D −D⊤)B⊤B 3: for j = 1, . . . 
,K/2 do 4: σj ← Im(ηj) for j = 1, . . . ,K/2 5: y2j−1 ← B (Re(zj)− Im(zj)) 6: y2j ← B (Re(zj) + Im(zj)) 7: yj ← yj/ ∥yj∥ for j = 1, . . . ,K 8: return {(σj ,y2j−1,y2j)}K/2j=1 E PROOFS E.1 PROOF OF THEOREM 1 Theorem 1. For every subset Y ⊆ [M ], it holds that det(LY ) ≤ det(L̂Y ). Moreover, equality holds when the size of Y is equal to the rank of L. Proof of Theorem 1. It is enough to fix Y ⊆ [M ] such that 1 ≤ |Y | ≤ 2K, because the rank of both L and L̂ is up to 2K. Denote k := |Y | and ( [2K] k ) := {I ⊆ [2K]; |I| = k} for k ≤ 2K. We recall the definition of L̂: given V ,B,D such that L = V V ⊤ + B(D −D⊤)B⊤, let {(ρi,vi)}Ki=1 be the eigendecomposition of V V ⊤ and {(σj ,y2j−1,y2j)}K/2j=1 be the Youla decomposition of B(D −D⊤)B⊤. Denote Z := [v1, . . . ,vK ,y1, . . . ,yK ] ∈ RM×2K and X := diag ( ρ, . . . , ρK , [ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]) , X̂ := diag ( ρ1, . . . , ρK , [ σ1 0 0 σ1 ] , . . . , [ σK/2 0 0 σK/2 ]) , so that L = ZXZ⊤ and L̂ = ZX̂Z⊤. Applying the Cauchy-Binet formula twice, we can write the determinant of the principal submatrices of both L and L̂: det(LY ) = ∑ I∈([2K]k ) ∑ J∈([2K]k ) det(XI,J) det(ZY,I) det(ZY,J), (18) det(L̂Y ) = ∑ I∈([2K]k ) ∑ J∈([2K]k ) det(X̂I,J) det(ZY,I) det(ZY,J) = ∑ I∈([2K]k ) det(X̂I) det(ZY,I) 2, (19) where Eq. (19) follows from the fact that X̂ is diagonal, which means that det(X̂I,J) = 0 for I ̸= J . When the size of Y is equal to the rank of L (i.e., k = 2K), the summations in Eqs. (18) and (19) simplify to single terms: det(LY ) = det(X) det(ZY,:)2 and det(L̂Y ) = det(X̂) det(ZY,:)2. Now, observe that the determinants of the full X and X̂ matrices are identical: det(X) = det(X̂) =∏K i=1 ρi ∏K/2 j=1 σ 2 j . Hence, it holds that det(LY ) = det(L̂Y ). This proves the second statement of the theorem. To prove that det(LY ) ≤ det(L̂Y ) for smaller subsets Y , we will use the following: Claim 1. For every I, J ∈ ( [2K] k ) such that det(XI,J) ̸= 0, there exists a (nonempty) collection of subset pairs S(I, J) ⊆ ( [2K] k ) × ( [2K] k ) such that∑ (I′,J′)∈S(I,J) det(XI,J) det(ZY,I) det(ZY,J) ≤ ∑ (I′,J′)∈S(I,J) det(X̂I,I) det(ZY,I) 2. (20) Claim 2. The number of nonzero terms in Eq. (18) is identical to that in Eq. (19). Combining Claim 1 with Claim 2 yields det(LY ) = ∑ I,J∈([2K]k ) det(XI,J) det(ZY,I) det(ZY,J) ≤ ∑ I∈([2K]k ) det(X̂I,I) det(ZY,I) 2 = det(L̂Y ). We conclude the proof of Theorem 1. Below we provide proofs for Claim 1 and Claim 2. Proof of Claim 1. Recall that X is a block-diagonal matrix, where each block is of size either 1-by-1, containing ρi, or 2-by-2, containing both σj and −σj in the form [ 0 σj −σj 0 ] . A submatrix XI,J ∈ Rk×k with rows I and columns J will only have a nonzero determinant if it contains no all-zero row or column. Hence, any XI,J with nonzero determinant will have the following form (or some permutation of this block-diagonal): XI,J = ρp1 · · · 0 ... . . . ... 0 0 . . . ρp|PI,J | ±σq1 · · · 0 ... . . . ... 0 · · · ±σq|QI,J | 0 σr1 −σr1 0 . . . 0 0 σr|RI,J | −σr|RI,J | 0 (21) and we denote P I,J := {p1, . . . , p|P I,J |}, QI,J := {q1, . . . , q|QI,J |}, and RI,J := {r1, . . . , r|RI,J |}. Indices p ∈ P I,J yield a diagonal matrix with entries ρp. For such p, both I and J must contain index p. Indices r ∈ RI,J yield a block-diagonal matrix of the form [ 0 σr −σr 0 ] . For such r, both I and J must contain a pair of indices, (K + 2r − 1,K + 2r). Finally, indices q ∈ QI,J yield a diagonal matrix with entries of ±σq (the sign can be + or −). 
For such q, I contains K + 2q − 1 or K + 2q, and J must contain the other. Note that there is no intersection between QI,J and RI,J . If QI,J is an empty set (i.e., I = J), then det(XI,J) = det(X̂I,J) and det(XI,J) det(ZY,I) det(ZY,J) = det(X̂I) det(ZY,I) 2. (22) Thus, the terms in Eq. (18) in this case appear in Eq. (19). Now assume that QI,J ̸= ∅ and consider the following set of pairs: S(I, J) := {(I ′, J ′) : P I,J = P I′,J′ , QI,J = QI′,J′ , RI,J = RI′,J′}. In other words, for (I ′, J ′) ∈ S(I, J), the diagonal XI′,J′ contains ρp, [ 0 σr −σr 0 ] exactly as in XI,J . However, the signs of the σr’s may differ from XI,J . Combining this observation with the definition of X̂ , |det(XI′,J′)| = |det(XI,J)| = det(X̂I) = det(X̂I′) = det(X̂J) = det(X̂J′). (23) Therefore, ∑ (I′,J′)∈S(I,J) det(XI′,J′) det(ZY,I′) det(ZY,J′) (24) ≤ ∑ (I′,J′)∈S(I,J) |det(XI′,J′)|det(ZY,I′) det(ZY,J′) (25) = det(X̂I) ∑ (I′,J′)∈S(I,J) det(ZY,I′) det(ZY,J′) (26) ≤ det(X̂I) ∑ (I′,∗)∈S(I,J) det(ZY,I′) 2 (27) = ∑ (I′,∗)∈S(I,J) det(X̂I′) det(ZY,I′) 2 (28) where the third line comes from Eq. (23) and the fourth line follows from the rearrangement inequality. Note that application of this inequality does not change the number of terms in the sum. This completes the proof of Claim 1. Proof of Claim 2. In Eq. (19), observe that det(X̂I) det(ZY,I)2 ̸= 0 if and only if det(X̂I) ̸= 0. Since all ρi’s and σj’s are positive, the number of I ⊆ [2K], |I| = k such that det(X̂I) ̸= 0 is equal to ( 2K k ) . Similarly, the number of nonzero terms in Eq. (18) equals the number of possible choices of I, J ∈ ( [2K] k ) such that det(XI,J) ̸= 0. This can be counted as follows: first choose i items in {ρ1, . . . , ρK} for i = 0, . . . , k; then, choose j items in {[ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]} for j = 0, . . . , ⌊k−i2 ⌋; lastly, choose k − i − 2j of {±σq; q /∈ RI,J}, then choose the sign for each of these (σq or −σq). Combining all of these choices, the total number of nonzero terms is: k∑ i=0 ( K i ) ︸ ︷︷ ︸ choice of ρp ⌊ k−i2 ⌋∑ j=0 ( K/2 j ) ︸ ︷︷ ︸ choice of [ 0 σr −σr 0 ] ( K/2− j k − i− 2j ) 2k−i−2j︸ ︷︷ ︸ choice of ±σq (29) = k∑ i=0 ( K i ) ( K k − i ) (30) = ( 2K k ) (31) where the second line comes from the fact that ( 2n m ) = ∑⌊m2 ⌋ j=0 ( n j )( n−j m−2j ) 2m−2j for any integers n,m ∈ N such that m ≤ 2n (see (1.69) in Quaintance (2010)), and the third line follows from the fact that ∑r i=0 ( m i )( n r−i ) = ( n+m r ) for n,m, r ∈ N (Vandermonde’s identity). Hence, both the number of nonzero terms in Eqs. (18) and (19) is equal to ( 2K k ) . This completes the proof of Claim 2. E.2 PROOF OF PROPOSITION 1 Proposition 1. The tree-based sampling procedure SAMPLEDPP in Algorithm 3 runs in time O(K + k3 logM + k4), where k is the size of the sampled set†††. Proof of Proposition 1. Since computing pℓ takes O(k2) from Eq. (12), and since the binary tree has depth O(logM), SAMPLEITEM in Algorithm 3 runs in O(k2 logM) time. Moreover, the query matrix QY can be updated in O(k3) time as it only requires a k-by-k matrix inversion. Therefore, the overall runtime of the tree-based elementary DPP sampling algorithm (after pre-processing) is O(k3 logM + k4). This improves the previous O(k4 logM) runtime studied in Gillenwater et al. (2019). Combining this with elementary DPP selection (Line 15 in Algorithm 3), we can sample a set in O(K + k3 logM + k4) time. This completes the proof of Proposition 1. E.3 PROOF OF THEOREM 2 Theorem 2. 
Given an NDPP kernel L = V V ⊤ + B(D −D⊤)B⊤ for V ,B ∈ RM×K ,D ∈ RK×K , consider the proposal kernel L̂ as proposed in Section 4.1. Let {σj}K/2j=1 be the positive eigenvalues obtained from the Youla decomposition of B(D−D⊤)B⊤. If V ⊥ B, then det(L̂+I)det(L+I) =∏K/2 j=1 ( 1 + 2σj σ2j+1 ) ≤ (1 + ω)K/2, where ω = 2K ∑K/2 j=1 2σj σ2j+1 ∈ (0, 1]. Proof of Theorem 2. Since the column spaces of V and B are orthogonal, the corresponding eigenvectors are also orthogonal, i.e., Z⊤Z = I2K . Then, det(L+ I) = det(ZXZ⊤ + I) = det(XZ⊤Z + I2K) = det(X + I2K) (32) = K∏ i=1 (ρi + 1) K/2∏ j=1 det ([ 1 σj −σj 1 ]) (33) = K∏ i=1 (ρi + 1) K/2∏ j=1 (σ2j + 1) (34) †††Computing pℓ via Eq. (12) improves on Gillenwater et al. (2019)’s O(k4 logM) runtime for this step. and similarly det(L̂+ I) = K∏ i=1 (ρi + 1) K/2∏ j=1 (σj + 1) 2. (35) Combining Eqs. (34) and (35), we have that det(L̂+ I) det(L+ I) = K/2∏ j=1 (σj + 1) 2 (σ2j + 1) = K/2∏ j=1 ( 1 + 2σj σ2j + 1 ) ≤ 1 + 2 K K/2∑ j=1 2σj σ2j + 1 K/2 (36) where the inequality holds from the Jensen’s inequality. This completes the proof of Theorem 2.
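As a self-contained numerical sanity check of Theorems 1 and 2 (our own script, not part of the paper's released code): since the Youla vectors are orthonormal, the proposal kernel L̂ of Section 4.1 can equivalently be formed by replacing the skew-symmetric part A = B(D − D⊤)B⊤ with the symmetric PSD square root of AA⊤. The sketch below builds a small random kernel with V ⊥ B and checks both the subset-wise determinant inequality and the closed-form normalizer ratio.

```python
import torch

torch.set_default_dtype(torch.float64)
torch.manual_seed(0)
M, K = 50, 6                                   # small enough for dense checks

# Random ONDPP kernel L = V V^T + B (D - D^T) B^T with V ⊥ B.
B, _ = torch.linalg.qr(torch.randn(M, K))      # B^T B = I
V = torch.randn(M, K)
V = V - B @ (B.T @ V)                          # V^T B = 0
D = torch.randn(K, K)
A = B @ (D - D.T) @ B.T                        # skew-symmetric part
L = V @ V.T + A

# Proposal kernel L_hat (Section 4.1): with orthonormal Youla vectors, the
# "unsigned" replacement of the skew part equals the PSD square root of A A^T.
evals, evecs = torch.linalg.eigh(A @ A.T)
L_hat = V @ V.T + evecs @ torch.diag(evals.clamp(min=0).sqrt()) @ evecs.T

# Theorem 1: det(L_Y) <= det(L_hat_Y), checked on random subsets Y.
for _ in range(1000):
    k = int(torch.randint(1, 2 * K + 1, (1,)))
    Y = torch.randperm(M)[:k]
    dl, dlh = float(torch.det(L[Y][:, Y])), float(torch.det(L_hat[Y][:, Y]))
    assert dl <= dlh + 1e-8 * (1.0 + abs(dlh))

# Theorem 2: with V ⊥ B, det(L_hat + I) / det(L + I) = prod_j (1 + 2 sigma_j / (sigma_j^2 + 1)).
ratio = torch.det(L_hat + torch.eye(M)) / torch.det(L + torch.eye(M))
sigma = torch.linalg.svdvals(A)[:K:2]          # each sigma_j appears twice among the singular values
closed_form = torch.prod(1 + 2 * sigma / (sigma ** 2 + 1))
print(float(ratio), float(closed_form))        # the two values agree up to numerical error
```

Using the PSD square root of AA⊤ avoids implementing the full Youla decomposition (Algorithm 4) just for a test; for actual sampling, the factored decomposition of Appendix D is what keeps the preprocessing cost at O(MK²).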
1. What are the main contributions of the paper regarding scalable sampling methods for NDPPs?
2. What are the strengths of the proposed algorithms, particularly in terms of efficiency and predictive performance?
3. Do you have any questions or concerns regarding the assumptions and limitations of the paper's approaches?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any minor issues or suggestions for improvement that the reviewer would like to bring to the authors' attention?
Summary Of The Paper
Review
Summary Of The Paper
This paper studies scalable sampling methods for NDPPs. By assuming low-rank structure in the kernel, the paper proposes a linear-time sampling method, which is faster than the previous sampling algorithm whose runtime is cubic for general kernels. Furthermore, the paper develops a scalable sublinear-time rejection sampling algorithm for a subclass of low-rank NDPPs called ONDPPs. Experiments show that the predictive performance of ONDPPs is not degraded compared to NDPPs, and that adding a regularization term to the optimization greatly reduces the rejection probability.

Review
This paper proposes two scalable sampling algorithms for NDPPs, one for low-rank kernels and the other for low-rank orthogonal kernels. The efficiency of the proposed algorithms is verified through experiments, and the predictive performance is not degraded. The paper is well written, and the results could benefit related research on NDPPs. I have some minor comments which should be easy to fix or answer:
• For Eq. (4) and Eq. (5), please give a definition of z_j.
• In Section 4.1, the eigendecomposition of V V⊤ is not used in Algorithm 2. Could it be removed to avoid confusion?
• In Section 4.1, the dimensions of Z and X do not match when K is odd.
• In Section 5, could you elaborate on why L will have rank < 2K if V is not orthogonal to B? What happens if V = V_1 + B, where V_1 ⊥ B?
ICLR
Title Scalable Sampling for Nonsymmetric Determinantal Point Processes Abstract A determinantal point process (DPP) on a collection of M items is a model, parameterized by a symmetric kernel matrix, that assigns a probability to every subset of those items. Recent work shows that removing the kernel symmetry constraint, yielding nonsymmetric DPPs (NDPPs), can lead to significant predictive performance gains for machine learning applications. However, existing work leaves open the question of scalable NDPP sampling. There is only one known DPP sampling algorithm, based on Cholesky decomposition, that can directly apply to NDPPs as well. Unfortunately, its runtime is cubic in M , and thus does not scale to large item collections. In this work, we first note that this algorithm can be transformed into a linear-time one for kernels with low-rank structure. Furthermore, we develop a scalable sublinear-time rejection sampling algorithm by constructing a novel proposal distribution. Additionally, we show that imposing certain structural constraints on the NDPP kernel enables us to bound the rejection rate in a way that depends only on the kernel rank. In our experiments we compare the speed of all of these samplers for a variety of real-world tasks. 1 INTRODUCTION A determinantal point process (DPP) on M items is a model, parameterized by a symmetric kernel matrix, that assigns a probability to every subset of those items. DPPs have been applied to a wide range of machine learning tasks, including stochastic gradient descent (SGD) (Zhang et al., 2017), reinforcement learning (Osogami & Raymond, 2019; Yang et al., 2020), text summarization (Dupuy & Bach, 2018), coresets (Tremblay et al., 2019), and more. However, a symmetric kernel can only capture negative correlations between items. Recent works (Brunel, 2018; Gartrell et al., 2019) have shown that using a nonsymmetric DPP (NDPP) allows modeling of positive correlations as well, which can lead to significant predictive performance gains. Gartrell et al. (2021) provides scalable NDPP kernel learning and MAP inference algorithms, but leaves open the question of scalable sampling. The only known sampling algorithm for NDPPs is the Cholesky-based approach described in Poulson (2019), which has a runtime of O(M3) and thus does not scale to large item collections. There is a rich body of work on efficient sampling algorithms for (symmetric) DPPs, including recent works such as Derezinski et al. (2019); Poulson (2019); Calandriello et al. (2020). Key distinctions between existing sampling algorithms include whether they are for exact or approximate sampling, whether they assume the DPP kernel has some low-rank K ≪ M , and whether they sample from the space of all 2M subsets or from the restricted space of size-k subsets, so-called k-DPPs. In the context of MAP inference, influential work, including Summa et al. (2014); Chen et al. (2018); Hassani et al. (2019); Ebrahimi et al. (2017); Indyk et al. (2020), proposed efficient algorithms that the approximate (sub)determinant maximization problem and provide rigorous guarantees. In this work we focus on exact sampling for low-rank kernels, and provide scalable algorithms for NDPPs. Our contributions are as follows, with runtime and memory details summarized in Table 1: • Linear-time sampling (Section 3): We show how to transform the O(M3) Choleskydecomposition-based sampler from Poulson (2019) into an O(MK2) sampler for rank-K kernels. 
• Sublinear-time sampling (Section 4): Using rejection sampling, we show how to leverage existing sublinear-time samplers for symmetric DPPs to implement a sublinear-time sampler for a subclass of NDPPs that we call orthogonal NDPPs (ONDPPs). • Learning with orthogonality constraints (Section 5): We show that the scalable NDPP kernel learning of Gartrell et al. (2021) can be slightly modified to impose an orthogonality constraint, yielding the ONDPP subclass. The constraint allows us to control the rejection sampling algorithm’s rejection rate, ensuring its scalability. Experiments suggest that the predictive performance of the kernels is not degraded by this change. For a common large-scale setting where M is 1 million, our sublinear-time sampler results in runtime that is hundreds of times faster than the linear-time sampler. In the same setting, our linear-time sampler provides runtime that is millions of times faster than the only previously known NDPP sampling algorithm, which has cubic time complexity and is thus impractical in this scenario. 2 BACKGROUND Notation. We use [M ] := {1, . . . ,M} to denote the set of items 1 through M . We use IK to denote the K-by-K identity matrix, and often write I := IM when the dimensionality should be clear from context. Given L ∈ RM×M , we use Li,j to denote the entry in the i-th row and j-th column, and LA,B ∈ R|A|×|B| for the submatrix formed by taking rows A and columns B. We also slightly abuse notation to denote principal submatrices with a single subscript, LA := LA,A. Kernels. As discussed earlier, both (symmetric) DPPs and NDPPs define a probability distribution over all 2M subsets of a ground set [M ]. The distribution is parameterized by a kernel matrix L ∈ RM×M and the probability of a subset Y ⊆ [M ] is defined to be Pr(Y ) ∝ det(LY ). For this to define a valid distribution, it must be the case that det(LY ) ≥ 0 for all Y . For symmetric DPPs, the non-negativity requirement is identical to a requirement that L be positive semi-definite (PSD). For nonsymmetric DPPs, there is no such simple correspondence, but prior work such as Gartrell et al. (2019; 2021) has focused on PSD matrices for simplicity. Normalizing and marginalizing. The normalizer of a DPP or NDPP distribution can also be written as a single determinant: ∑ Y⊆[M ] det(LY ) = det(L+ I) (Kulesza & Taskar, 2012, Theorem 2.1). Additionally, the marginal probability of a subset can be written as a determinant: Pr(A ⊆ Y ) = det(KA), for K := I − (L+ I)−1 (Kulesza & Taskar, 2012, Theorem 2.2)*, where K is typically called the marginal kernel. Intuition. The diagonal element Ki,i is the probability that item i is included in a set sampled from the model. The 2-by-2 determinant det(K{i,j}) = Ki,iKj,j −Ki,jKj,j is the probability that both i and j are included in the sample. A symmetric DPP has a symmetric marginal kernel, meaning Ki,j = Kj,i, and hence Ki,iKj,j −Ki,jKj,i ≤ Ki,iKj,j . This implies that the probability of including both i and j in the sampled set cannot be greater than the product of their individual inclusion probabilities. Hence, symmetric DPPs can only encode negative correlations. In contrast, NDPPs can have Ki,j and Kj,i with differing signs, allowing them to also capture positive correlations. 2.1 RELATED WORK Learning. Gartrell et al. (2021) proposes a low-rank kernel decomposition for NDPPs that admits linear-time learning. 
The decomposition takes the form L := V V ⊤ + B(D − D⊤)B⊤ for *The proofs in Kulesza & Taskar (2012) typically assume a symmetric kernel, but this particular one does not rely on the symmetry. Algorithm 1 Cholesky-based NDPP sampling (Poulson, 2019, Algorithm 1) 1: procedure SAMPLECHOLESKY(K) ▷ marginal kernel factorization Z,W 2: Y ← ∅ Q←W 3: for i = 1 to M do 4: pi ←Ki,i pi ← z⊤i Qzi 5: u← uniform(0, 1) 6: if u ≤ pi then Y ← Y ∪ {i} 7: else pi ← pi − 1 8: KA ←KA − KA,iKi,Api for A := {i+ 1, . . . ,M} Q← Q− Qziz ⊤ i Q pi 9: return Y V ,B ∈ RM×K , and D ∈ RK×K . The V V ⊤ component is a rank-K symmetric matrix, which can model negative correlations between items. The B(D −D⊤)B⊤ component is a rank-K skewsymmetric matrix, which can model positive correlations between items. For compactness of notation, we will write L = ZXZ⊤, where Z = [ V B ] ∈ RM×2K , and X = [ IK 0 0 D−D⊤ ] ∈ R2K×2K . The marginal kernel in this case also has a rank-2K decomposition, as can be shown via application of the Woodbury matrix identity: K := I − (I +L)−1 = ZX ( I2K +Z ⊤ZX )−1 Z⊤. (1) Note that the matrix to be inverted can be computed from Z and X in O(MK2) time, and the inverse itself takes O(K3) time. Thus, K can be computed from L in time O(MK2). We will develop sampling algorithms for this decomposition, as well as an orthogonality-constrained version of it. We use W := X ( I2K +Z ⊤ZX )−1 in what follows so that we can compactly write K = ZWZ⊤. Sampling. While there are a number of exact sampling algorithms for DPPs with symmetric kernels, the only published algorithm that clearly can directly apply to NDPPs is from Poulson (2019) (see Theorem 2 therein). This algorithm begins with an empty set Y = ∅ and iterates through the M items, deciding for each whether or not to include it in Y based on all of the previous inclusion/exclusion decisions. Poulson (2019) shows, via the Cholesky decomposition, that the necessary conditional probabilities can be computed as follows: Pr (j ∈ Y | i ∈ Y ) = Pr({i, j} ⊆ Y ) Pr(i ∈ Y ) = Kj,j − (Kj,iKi,j) /Ki,i, (2) Pr (j ∈ Y | i /∈ Y ) = Pr(j ∈ Y )− Pr({i, j} ⊆ Y ) Pr(i /∈ Y ) = Kj,j − (Kj,iKi,j) / (Ki,i − 1) . (3) Algorithm 1 (left-hand side) gives pseudocode for this Cholesky-based sampling algorithm†. There has also been some recent work on approximate sampling for fixed-size k-NDPPs: Alimohammadi et al. (2021) provide a Markov chain Monte Carlo (MCMC) algorithm and prove that the overall runtime to approximate ε-close total variation distance is bounded by O(M2k3 log(1/(εPr(Y0))), where Pr(Y0) is probability of an initial state Y0. Improving this runtime is an interesting avenue for future work, but for this paper we focus on exact sampling. 3 LINEAR-TIME CHOLESKY-BASED SAMPLING In this section, we show that the O(M3) runtime of the Cholesky-based sampler from Poulson (2019) can be significantly improved when using the low-rank kernel decomposition of Gartrell et al. (2021). First, note that Line 8 of Algorithm 1, where all marginal probabilities are updated via an (M − i)-by-(M − i) matrix subtraction, is the most costly part of the algorithm, making overall time and memory complexities O(M3) and O(M2), respectively. However, when the DPP kernel is given by a low-rank decomposition, we observe that marginal probabilities can be updated by matrix-vector †Cholesky decomposition is defined only for a symmetric positive definite matrix. 
However, we use the term “Cholesky” from Poulson (2019) to maintain consistency with this work, although Algorithm 1 is valid for nonsymmetric matrices. Algorithm 2 Rejection NDPP sampling (Tree-based sampling) 1: procedure PREPROCESS(V ,B,D) 2: {(σj ,y2j−1,y2j)}K/2j=1 ← YOULADECOMPOSE(B,D)‡ 3: X̂ ← diag ( IK , σ1, σ1, . . . , σK/2, σK/2 ) 4: Z ← [V ,y1, . . . ,yK ] {(λi, zi)}2Ki=1 ← EIGENDECOMPOSE(ZX̂1/2) T ← CONSTRUCTTREE(M, [z1, . . . ,z2K ]⊤) 5: return Z, X̂ return T , {(λi, zi)}2Ki=1 6: procedure SAMPLEREJECT(V ,B,D,Z, X̂) ▷ tree T , eigen pair {(λi, zi)}2Ki=1 of ZX̂Z 7: while true do 8: Y ← SAMPLEDPP(ZX̂Z⊤) Y ← SAMPLEDPP(T , {(λi, zi)}2Ki=1) 9: u← uniform(0, 1) 10: p← det([V V ⊤+B(D−D⊤)B⊤]Y ) det([ZX̂Z⊤]Y ) 11: if u ≤ p then break 12: return Y multiplications of dimension 2K, regardless of M . In more detail, suppose we have the marginal kernel K = ZWZ⊤ as in Eq. (1) and let zj be the j-th row vector in Z. Then, for i ̸= j: Pr (j ∈ Y | i ∈ Y ) = Kj,j − (Kj,iKi,j)/Ki,i = z⊤j ( W − (Wzi)(z ⊤ i W ) z⊤i Wzi ) zj , (4) Pr (j ∈ Y | i /∈ Y ) = z⊤j ( W − (Wzi)(z ⊤ i W ) z⊤i Wzi − 1 ) zj . (5) The conditional probabilities in Eqs. (4) and (5) are of bilinear form, and the zj do not change during sampling. Hence, it is enough to update the 2K-by-2K inner matrix at each iteration, and obtain the marginal probability by multiplying this matrix by zi. The details are shown on the right-hand side of Algorithm 1. The overall time and memory complexities are O(MK2) and O(MK), respectively. 4 SUBLINEAR-TIME REJECTION SAMPLING Although the Cholesky-based sampler runs in time linear in M , even this is too expensive for the large M that are often encountered in real-world datasets. To improve runtime, we consider rejection sampling (Von Neumann, 1963). Let p be the target distribution that we aim to sample, and let q be any distribution whose support corresponds to that of p; we call q the proposal distribution. Assume that there is a universal constant U such that p(x) ≤ Uq(x) for all x. In this setting, rejection sampling draws a sample x from q and accepts it with probability p(x)/(Uq(x)), repeating until an acceptance occurs. The distribution of the resulting samples is p. It is important to choose a good proposal distribution q so that sampling is efficient and the number of rejections is small. 4.1 PROPOSAL DPP CONSTRUCTION Our first goal is to find a proposal DPP with symmetric kernel L̂ that can upper-bound all probabilities of samples from the NDPP with kernel L within a constant factor. To this end, we expand the determinant of a principal submatrix, det(LY ), using the spectral decomposition of the NDPP kernel. Such a decomposition essentially amounts to combining the eigendecomposition of the symmetric part of L with the Youla decomposition (Youla, 1961) of the skew-symmetric part. Specifically, suppose {(σj ,y2j−1,y2j)}K/2j=1 is the Youla decomposition of B(D −D⊤)B⊤ (see Appendix D for more details), that is, B(D −D⊤)B⊤ = K/2∑ j=1 σj ( y2j−1y ⊤ 2j − y2jy⊤2j−1 ) . (6) ‡Pseudo-code of YOULADECOMPOSE is provided in Algorithm 4. See Appendix D. Then we can simply write L = ZXZ⊤, for Z := [V ,y1, . . . ,yK ] ∈ RM×2K , and X := diag ( IK , [ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]) . (7) Now, consider defining a related but symmetric PSD kernel L̂ := ZX̂Z⊤ with X̂ := diag ( IK , σ1, σ1, . . . , σK/2, σK/2 ) . All determinants of the principal submatrices of L̂ = ZX̂Z⊤ upper-bound those of L, as stated below. Theorem 1. For every subset Y ⊆ [M ], it holds that det(LY ) ≤ det(L̂Y ). 
Moreover, equality holds when the size of Y is equal to the rank of L. Proof sketch: From the Cauchy-Binet formula, the determinants of LY and L̂Y for all Y ⊆ [M ], |Y | ≤ 2K can be represented as det(LY ) = ∑ I⊆[K],|I|=|Y | ∑ J⊆[K],|J|=|Y | det(XI,J) det(ZY,I) det(ZY,J), (8) det(L̂Y ) = ∑ I⊆[2K],|I|=|Y | det(X̂I) det(ZY,I) 2. (9) Many of the terms in Eq. (8) are actually zero due to the block-diagonal structure of X . For example, note that if 1 ∈ I but 1 /∈ J , then there is an all-zeros row in XI,J , making det(XI,J) = 0. We show that each XI,J with nonzero determinant is a block-diagonal matrix with diagonal entries among ±σj , or [ 0 σj −σj 0 ] . With this observation, we can prove that det(XI,J) is upper-bounded by det(X̂I) or det(X̂J). Then, through application of the rearrangement inequality, we can upper-bound the sum of the det(XI,J) det(ZY,I) det(ZY,J) in Eq. (8) with a sum over det(X̂I) det(ZY,I)2. Finally, we show that the number of non-zero terms in Eq. (8) is identical to the number of non-zero terms in Eq. (9). Combining these gives us the desired inequality det(LY ) ≤ det(L̂Y ). The full proof of Theorem 1 is in Appendix E.1. Now, recall that the normalizer of a DPP (or NDPP) with kernel L is det(L + I). The ratio of probability of the NDPP with kernel L to that of a DPP with kernel L̂ is thus: PrL(Y ) PrL̂(Y ) = det(LY )/det(L+ I) det(L̂Y )/det(L̂+ I) ≤ det(L̂+ I) det(L+ I) , where the inequality follows from Theorem 1. This gives us the necessary universal constant U upper-bounding the ratio of the target distribution to the proposal distribution. Hence, given a sample Y drawn from the DPP with kernel L̂, we can use acceptance probability PrL(Y )/(U PrL̂(Y )) = det(LY )/ det(L̂Y ). Pseudo-codes for proposal construction and rejection sampling are given in Algorithm 2. Note that to derive L̂ from L it suffices to run the Youla decomposition of B(D − D⊤)B⊤, because the difference is only in the skew-symmetric part. This decomposition can run in O(MK2) time; more details are provided in Appendix D. Since L̂ is a symmetric PSD matrix, we can apply existing fast DPP sampling algorithms to sample from it. In particular, in the next section we combine a fast tree-based method with rejection sampling. 4.2 SUBLINEAR-TIME TREE-BASED SAMPLING There are several DPP sampling algorithms that run in sublinear time, such as tree-based (Gillenwater et al., 2019) and intermediate (Derezinski et al., 2019) sampling algorithms. Here, we consider applying the former, a tree-based approach, to sample from the proposal distribution defined by L̂. We give some details of the sampling procedure, as in the course of applying it we discovered an optimization that slightly improves on the runtime of prior work. Formally, let {(λi, zi)}2Ki=1 be the eigendecomposition of L̂ and Z := [z1, . . . ,z2K ] ∈ RM×2K . As shown in Kulesza & Taskar (2012, Lemma 2.6), for every Y ⊆ [M ], |Y | ≤ 2K, the probability of Y under DPP with L̂ can be written: PrL̂(Y ) = det(L̂Y ) det(L̂+ I) = ∑ E⊆[2K],|E|=|Y | det(ZY,EZ ⊤ Y,E) ∏ i∈E λi λi + 1 ∏ i/∈E 1 λi + 1 . (10) Algorithm 3 Tree-based DPP sampling (Gillenwater et al., 2019) 1: procedure BRANCH(A,Z) 2: if A = {j} then 3: T .A← {j}, T .Σ← Z⊤j,:Zj,: 4: return T 5: Aℓ, Ar ← Split A in half 6: T .left← BRANCH(Aℓ,Z) 7: T .right← BRANCH(Ar,Z) 8: T .Σ← T .left.Σ+ T .right.Σ 9: return T 10: procedure CONSTRUCTTREE(M , Z) 11: return BRANCH([M ], Z) 12: procedure SAMPLEDPP(T ,Z, {λi}Ki=1) 13: E ← ∅, Y ← ∅, QY ← 0 14: for i = 1, . . . ,K do 15: E ← E ∪ {i} w.p. 
λi/(λi + 1) 16: for k = 1, . . . , |E| do 17: j ← SAMPLEITEM(T ,QY , E) 18: Y ← Y ∪ {j} 19: QY← I|E|−Z⊤Y,E ( ZY,EZ ⊤ Y,E )−1 ZY,E 20: return Y 21: procedure SAMPLEITEM(T ,QY , E) 22: if T is a leaf then return T .A 23: pℓ ← 〈 T .left.ΣE ,QY 〉 24: pr ← 〈 T .right.ΣE ,QY 〉 25: u← uniform(0, 1) 26: if u ≤ pℓpℓ+pr then 27: return SAMPLEITEM(T .left,QY , E) 28: else 29: return SAMPLEITEM(T .right,QY , E) A matrix of the form Z:,EZ⊤:,E can be a valid marginal kernel for a special type of DPP, called an elementary DPP. Hence, Eq. (10) can be thought of as DPP probabilities expressed as a mixture of elementary DPPs. Based on this mixture view, DPP sampling can be done in two steps: (1) choose an elementary DPP according to its mixture weight, and then (2) sample a subset from the selected elementary DPP. Step (1) can be performed by 2K independent random coin tossings, while step (2) involves computational overhead. The key idea of tree-based sampling is that step (2) can be accelerated by traversing a binary tree structure, which can be done in time logarithmic in M . More specifically, given the marginal kernel K = Z:,EZ⊤:,E , where E is obtained from step (1), we start from the empty set Y = ∅ and repeatedly add an item j to Y with probability: Pr(j ∈ S | Y ⊆ S) = Kj,j −Kj,Y (KY )−1KY,j = Zj,EQY Z⊤j,E = 〈 QY , (Z⊤j,:Zj,:)E 〉 , (11) where S is some final selected subset, and QY := I|E| − Z⊤Y,E ( ZY,EZ ⊤ Y,E )−1 ZY,E . Consider a binary tree whose root includes a ground set [M ]. Every non-leaf node contains a subset A ⊆ [M ] and stores a 2K-by-2K matrix ∑ j∈A Z ⊤ j,:Zj,:. A partition Aℓ and Ar, such that Aℓ∪Ar = A,Aℓ∩Ar = ∅, are passed to its left and right subtree, respectively. The resulting tree has M leaves and each has exactly a single item. Then, one can sample a single item by recursively moving down to the left node with probability: pℓ = ⟨QY ,∑j∈Aℓ(Z⊤j,:Zj,:)E⟩ ⟨QY ,∑j∈A(Zj,:Z⊤j,:)E⟩ , (12) or to the right node with probability 1− pℓ, until reaching a leaf node. An item in the leaf node is chosen with probability according to Eq. (11). Since every subset in the support of an elementary DPP with a rank-k kernel has exactly k items, this process is repeated for |E| iterations. Full descriptions of tree construction and sampling are provided in Algorithm 3. The proposed tree-based rejection sampling for an NDPP is outlined on the right-side of Algorithm 2. The one-time pre-processing step of constructing the tree (CONSTRUCTTREE) requires O(MK2) time. After pre-processing, the procedure SAMPLEDPP involves |E| traversals of a tree of depth O(logM), where in each node a O(|E|2) operation is required. The overall runtime is summarized in Proposition 1 and the proof can be found in Appendix E.2. Proposition 1. The tree-based sampling procedure SAMPLEDPP in Algorithm 3 runs in time O(K + k3 logM + k4), where k is the size of the sampled set§. §Computing pℓ via Eq. (12) improves on Gillenwater et al. (2019)’s O(k4 logM) runtime for this step. 4.3 AVERAGE NUMBER OF REJECTIONS We now return to rejection sampling and focus on the expected number of rejections. The number of rejections of Algorithm 2 is known to be a geometric random variable with mean equal to the constant U used to upper-bound the ratio of the target distribution to the proposal distribution: det(L̂+ I)/ det(L+ I). If all columns in V and B are orthogonal, which we denote V ⊥ B, then the expected number of rejections depends only on the eigenvalues of the skew-symmetric part of the NDPP kernel. Theorem 2. 
Given an NDPP kernel L = V V ⊤ + B(D −D⊤)B⊤ for V ,B ∈ RM×K ,D ∈ RK×K , consider the proposal kernel L̂ as proposed in Section 4.1. Let {σj}K/2j=1 be the positive eigenvalues obtained from the Youla decomposition of B(D−D⊤)B⊤. If V ⊥ B, then det(L̂+I)det(L+I) =∏K/2 j=1 ( 1 + 2σj σ2j+1 ) ≤ (1 + ω)K/2, where ω = 2K ∑K/2 j=1 2σj σ2j+1 ∈ (0, 1]. Proof sketch: Orthogonality between V and B allows det(L+ I) to be expressed just in terms of the eigenvalues of V V ⊤ and B(D −D⊤)B⊤. Since both L and L̂ share the symmetric part V V ⊤, the ratio of determinants only depends on the skew-symmetric part. A more formal proof appears in Appendix E.3. Assuming we have a kernel where V ⊥ B, we can combine Theorem 2 with the tree-based rejection sampling algorithm (right-side in Algorithm 2) to sample in time O((K+k3 logM+k4)(1+ω)K/2). Hence, we have a sampling algorithm that is sublinear in M , and can be much faster than the Choleskybased algorithm when (1 + ω)K/2 ≪M . In the next section, we introduce a learning scheme with the V ⊥ B constraint, as well as regularization to ensure that ω is small. 5 LEARNING WITH ORTHOGONALITY CONSTRAINTS We aim to learn a NDPP that provides both good predictive performance and a low rejection rate. We parameterize our NDPP kernel matrix L = V V ⊤ +B(D −D⊤)B⊤ by D = diag ([ 0 σ1 0 0 ] , . . . , [ 0 σK/2 0 0 ]) (13) for σj ≥ 0, B⊤B = I , and, motivated by Theorem 2, require V ⊤B = 0¶. We call such orthogonality-constrained NDPPs “ONDPPs”. Notice that if V ⊥ B, then L has the full rank of 2K, since the intersection of the column spaces spanned by V and by B is empty, and thus the full rank available for modeling can be used. Thus, this constraint can also be thought of as simply ensuring that ONDPPs use the full rank available to them. Given example subsets {Y1, . . . , Yn} as training data, learning is done by minimizing the regularized negative log-likelihood: min V ,B,{σj}K/2j=1 − 1 n n∑ i=1 log ( det(LYi) det(L+ I) ) + α M∑ i=1 ∥vi∥22 µi + β M∑ i=1 ∥bi∥22 µi + γ K/2∑ j=1 log ( 1 + 2σj σ2j + 1 ) (14) where α, β, γ > 0 are hyperparameters, µi is the frequency of item i in the training data, and vi and bi represent the rows of V and B, respectively. This objective is very similar to that of Gartrell et al. (2021), except for the orthogonality constraint and the final regularization term. Note that this regularization term corresponds exactly to the logarithm of the average rejection rate, and therefore should help to control the number of rejections. 6 EXPERIMENTS We first show that the orthogonality constraint from Section 5 does not degrade the predictive performance of learned kernels. We then compare the speed of our proposed sampling algorithms. ¶Technical details: To learn NDPP models with the constraint V ⊤B = 0, we project V according to: V ← V − B(B⊤B)−1(B⊤V ). For the B⊤B = I constraint, we apply QR decomposition on B. Note that both operations require O(MK2) time. (Constrained learning and sampling code is provided at https://github.com/insuhan/nonsymmetric-dpp-sampling. We use Pytorch’s linalg.solve to avoid the expense of explicitly computing the (B⊤B)−1 inverse.) Hence, our learning time complexity is identical to that of Gartrell et al. (2021). 6.1 PREDICTIVE PERFORMANCE RESULTS FOR NDPP LEARNING We benchmark various DPP models, including symmetric (Gartrell et al., 2017), nonsymmetric for scalable learning (Gartrell et al., 2021), as well as our ONDPP kernels with and without rejection rate regularization. 
We use the scalable NDPP models (Gartrell et al., 2021) as a baseline||. The kernel components of each model are learned using five real-world recommendation datasets, which have ground set sizes that range from 3,941 to 1,059,437 items (see Appendix A for more details). Our experimental setup and metrics mirror those of Gartrell et al. (2021). We report the mean percentile rank (MPR) metric for a next-item prediction task, the AUC metric for subset discrimination, and the log-likelihood of the test set; see Appendix B for more details on the experiments and metrics. For all metrics, higher numbers are better. For NDPP models, we additionally report the average rejection rates when they apply to rejection sampling. In Table 2, we observe that the predictive performance of our ONDPP models generally match or sometimes exceed the baseline. This is likely because the orthogonality constraint enables more effective use of the full rank-2K feature space. Moreover, imposing the regularization on rejection rate, as shown in Eq. (14), often leads to dramatically smaller rejection rates, while the impact on predictive performance is generally marginal. These results justify the ONDPP and regularization for fast sampling. Finally, we observe that the learning time of our ONDPP models is typically a bit longer than that of the NDPP models, but still quite reasonable (e.g., the time per iteration for the NDPP takes 27 seconds for the Book dataset, while our ONDPP takes 49.7 seconds). Fig. 1 shows how the regularizer γ affects the test log-likelihood and the average number of rejections. We see that γ degrades predictive performance and reduces the rejection rate when set above a certain threshold; this behavior is seen for many datasets. However, for the Recipe dataset we observed that the test log-likelihood is not very sensitive to γ, likely because all models in our experiments achieve very high performance on this dataset. In general, we observe that γ can be set to a value that results in a small rejection rate, while having minimal impact on predictive performance. 6.2 SAMPLING TIME COMPARISON We benchmark the Cholesky-based sampling algorithm (Algorithm 1) and tree-based rejection sampling algorithm (Algorithm 2) on ONDPPs with both synthetic and real-world data. ||We use the code from https://github.com/cgartrel/scalable-nonsymmetric-DPPs for the NDPP baseline, which is made available under the MIT license. To simplify learning and MAP inference, Gartrell et al. (2021) set B = V in their experiments. However, since we have the V ⊥ B constraint in our ONDPP approach, we cannot set B = V . Hence, for a fair comparison, we do not set B = V for the NDPP baseline in our experiments, and thus the results in Table 2 differ slightly from those published in Gartrell et al. (2021). 10−8 10−6 10−4 10−2 100 regularizer γ 101 104 107 1010 av er ag e # of re je ct io ns (a) 10−6 10−4 10−2 100 regularizer γ −106 −104 −102 −100 −98 te st lo g- lik el ih oo d (b) Figure 1: Average number of rejections and test log-likelihood with different values of the regularizer γ for ONDPPs trained on the UK Retail dataset. Shaded regions are 95% confidence intervals of 10 independent trials. 
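As an aside before the timing comparisons, the elementary-DPP sampling step of Section 4.2 (Eqs. (11) and (12)) can also be written compactly without the tree. The NumPy sketch below is the flat, O(M)-per-item analogue of the traversal in Algorithm 3; the variable names are illustrative and this is not the authors' implementation.

import numpy as np

def sample_elementary_dpp(Z, E, rng=None):
    # Sample from the elementary DPP with marginal kernel Z[:, E] Z[:, E]^T.
    # Items are added one at a time using the conditional marginal of Eq. (11),
    # normalized over the remaining items; Q_Y is updated as in Algorithm 3.
    rng = np.random.default_rng() if rng is None else rng
    M = Z.shape[0]
    E = list(E)
    Y = []
    Q = np.eye(len(E))                                  # Q_Y = I_{|E|} when Y is empty
    for _ in range(len(E)):                             # every sample has exactly |E| items
        # Eq. (11): Pr(j in S | Y subset of S) = Z[j, E] Q_Y Z[j, E]^T
        probs = np.einsum('je,ef,jf->j', Z[:, E], Q, Z[:, E])
        probs = np.clip(probs, 0.0, None)
        if Y:
            probs[Y] = 0.0                              # already-selected items get zero mass
        j = rng.choice(M, p=probs / probs.sum())
        Y.append(j)
        ZY = Z[np.ix_(Y, E)]                            # rows Y, columns E
        Q = np.eye(len(E)) - ZY.T @ np.linalg.solve(ZY @ ZY.T, ZY)
    return Y

# Illustrative usage: Z with orthonormal columns, E a set of column indices.
M, r = 200, 8
Z, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((M, r)))
print(sample_elementary_dpp(Z, E=range(r)))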
212 214 216 218 220 ground set size M 101 102 103 sa m pl in g tim e (s ec ) Cholesky-based Rejection (a) 212 214 216 218 220 ground set size M 10−1 101 103 pr ep ro ce ss in g tim e (s ec ) Tree construction Spectral decomposition (b) Figure 2: Wall-clock time (sec) for synthetic data for (a) NDPP sampling algorithms and (b) preprocessing steps for the rejection sampling. Shaded regions are 95% confidence intervals from 100 independent trials. Synthetic datasets. We generate non-uniform random features for V ,B as done by (Han & Gillenwater, 2020). In particular, we first sample x1, . . . ,x100 from N (0, I2K/(2K)), and integers t1, . . . , t100 from Poisson distribution with mean 5, rescaling the integers such that ∑ i ti = M . Next, we draw ti random vectors from N (xi, I2K), and assign the first K-dimensional vectors as the row vectors of V and the latter vectors as those of B. Each entry of D is sampled from N (0, 1). We choose K = 100 and vary M from 212 to 220. Fig. 2(a) illustrates the runtimes of Algorithms 1 and 2. We verify that the rejection sampling time tends to increase sub-linearly with the ground set size M , while the Cholesky-based sampler runs in linear time. In Fig. 2(b), the runtimes of the preprocessing steps for Algorithm 2 (i.e., spectral decomposition and tree construction) are reported. Although the rejection sampler requires these additional processes, they are one-time steps and run much faster than a single run of the Choleksy-based method for M = 220. Real-world datasets. In Table 3, we report the runtimes and speedup of NDPP sampling algorithms for real-world datasets. All NDPP kernels are obtained using learning with orthogonality constraints, with rejection rate regularization as reported in Section 6.1. We observe that the tree-based rejection sampling runs up to 246 times faster than the Cholesky-based algorithm. For larger datasets, we expect that this gap would significantly increase. As with the synthetic experiments, we see that the tree construction pre-processing time is comparable to the time required to draw a single sample via the other methods, and thus the tree-based method is often the best choice for repeated sampling**. 7 CONCLUSION In this work we developed scalable sampling methods for NDPPs. One limitation of our rejection sampler is its practical restriction to the ONDPP subclass. Other opportunities for future work include the extension of our rejection sampling approach to the generation of fixed-size samples (from k-NDPPs), the development of approximate sampling techniques, and the extension of DPP samplers along the lines of Derezinski et al. (2019); Calandriello et al. (2020) to NDPPs. Scalable sampling also opens the door to using NDPPs as building blocks in probabilistic models. **We note that the tree can consume substantial memory, e.g., 169.5 GB for the Book dataset with K = 100. For settings where this scale of memory use is unacceptable, we suggest use of the intermediate sampling algorithm (Calandriello et al., 2020) in place of tree-based sampling. The resulting sampling algorithm may be slower, but the O(M +K) memory cost is substantially lower. 8 ETHICS STATEMENT In general, our work moves in a positive direction by substantially decreasing the computational costs of NDPP sampling. 
When using our constrained learning method to learn kernels from user data, we recommend employing a technique such as differentially-private SGD (Abadi et al., 2016) to help prevent user data leaks, and adjusting the weights on training examples to balance the impact of sub-groups of users so as to make the final kernel as fair as possible. As far as we are aware, the datasets used in this work do not contain personally identifiable information or offensive content. We were not able to determine if user consent was explicitly obtained by the organizations that constructed these datasets. 9 REPRODUCIBILITY STATEMENT We have made extensive effort to ensure that all algorithmic, theoretical, and experimental contributions described in this work are reproducible. All of the code implementing our constrained learning and sampling algorithms is publicly available ††. The proofs for our theoretical contributions are available in Appendix E. For our experiments, all dataset processing steps, experimental procedures, and hyperparameter settings are described in Appendices A, B, and C, respectively. 10 ACKNOWLEDGEMENTS Amin Karbasi acknowledges funding in direct support of this work from NSF (IIS-1845032) and ONR (N00014-19-1-2406). A FULL DETAILS ON DATASETS We perform experiments on several real-world public datasets composed of subsets: • UK Retail: This dataset (Chen et al., 2012) contains baskets representing transactions from an online retail company that sells all-occasion gifts. We omit baskets with more than 100 items, leaving us with a dataset containing 19,762 baskets drawn from a catalog of M = 3,941 products. Baskets containing more than 100 items are in the long tail of the basket-size distribution, so omitting these is reasonable, and allows us to use a low-rank factorization of the NDPP with K = 100. • Recipe: This dataset (Majumder et al., 2019) contains recipes and food reviews from Food.com (formerly Genius Kitchen)‡‡. Each recipe (“basket”) is composed of a collection of ingredients, resulting in 178,265 recipes and a catalog of 7,993 ingredients. • Instacart: This dataset (Instacart, 2017) contains baskets purchased by Instacart users§§. We omit baskets with more than 100 items, resulting in 3.2 million baskets and a catalog of 49,677 products. • Million Song: This dataset (McFee et al., 2012) contains playlists (“baskets”) of songs from Echo Nest users¶¶. We trim playlists with more than 100 items, leaving 968,674 playlists and a catalog of 371,410 songs. • Book: This dataset (Wan & McAuley, 2018) contains reviews from the Goodreads book review website, including a variety of attributes describing the items***. For each user we build a subset (“basket”) containing the books reviewed by that user. We trim subsets with more than 100 books, resulting in 430,563 subsets and a catalog of 1,059,437 books. As far as we are aware, these datasets do not contain personally identifiable information or offensive content. While the UK Retail dataset is publicly available, we were unable to find a license for it. Also, we were not able to determine if user consent was explicitly obtained by the organizations that constructed these datasets. B FULL DETAILS ON EXPERIMENTAL SETUP AND METRICS We use 300 randomly-selected baskets as a held-out validation set, for tracking convergence during training and for tuning hyperparameters. Another 2000 random baskets are used for testing, and the rest are used for training. 
Convergence is reached during training when the relative change in validation log-likelihood is below a predetermined threshold. We use PyTorch with Adam (Kingma & Ba, 2015) for optimization. We initialize D from the standard Gaussian distribution N (0, 1), while V and B are initialized from the uniform(0, 1) distribution. Subset expansion task. We use greedy conditioning to do next-item prediction (Gartrell et al., 2021, Section 4.2). We compare methods using a standard recommender system metric: mean percentile rank (MPR) (Hu et al., 2008; Li et al., 2010). MPR of 50 is equivalent to random selection; MPR of 100 means that the model perfectly predicts the next item. See Appendix B.1 for a complete description of the MPR metric. Subset discrimination task. We also test the ability of a model to discriminate observed subsets from randomly generated ones. For each subset in the test set, we generate a subset of the same length by drawing items uniformly at random (and we ensure that the same item is not drawn more than once for a subset). We compute the AUC for the model on these observed and random subsets, where the score for each subset is the log-likelihood that the model assigns to the subset. ‡‡See https://www.kaggle.com/shuyangli94/food-com-recipes-and-user-interactions for the license for this public dataset. §§This public dataset is available for non-commercial use; see https://www.instacart.com/datasets/ grocery-shopping-2017 for the license. ¶¶See http://millionsongdataset.com/faq/ for the license for this public dataset. ***This public dataset is available for academic use only; see https://sites.google.com/eng.ucsd.edu/ ucsdbookgraph/home for the license. B.1 MEAN PERCENTILE RANK We begin our definition of MPR by defining percentile rank (PR). First, given a set J , let pi,J = Pr(J ∪ {i} | J). The percentile rank of an item i given a set J is defined as PRi,J = ∑ i′ ̸∈J 1(pi,J ≥ pi′,J) |Y\J | × 100% where Y\J indicates those elements in the ground set Y that are not found in J . For our evaluation, given a test set Y , we select a random element i ∈ Y and compute PRi,Y \{i}. We then average over the set of all test instances T to compute the mean percentile rank (MPR): MPR = 1 |T | ∑ Y ∈T PRi,Y \{i}. C HYPERPARAMETERS FOR EXPERIMENTS Preventing numerical instabilities: The det(LYi) in Eq. (14) will be zero whenever |Yi| > K, where Yi is an observed subset. To address this in practice we set K to the size of the largest subset observed in the data, K ′, as in Gartrell et al. (2017). However, this does not entirely fix the issue, as there is still a chance that the term will be zero even when |Yi| ≤ K. In this case though, we know that we are not at a maximum, since the value of the objective function is −∞. Numerically, to prevent such singularities, in our implementation we add a small ϵI correction to each LYi when optimizing Eq. (14) (ϵ = 10−5 in our experiments). We perform a grid search using a held-out validation set to select the best-performing hyperparameters for each model and dataset. The hyperparameter settings used for each model and dataset are described below. Symmetric low-rank DPP (Gartrell et al., 2017). For this model, we use K for the number of item feature dimensions for the symmetric component V , and α for the regularization hyperparameter for V . We use the following hyperparameter settings: • UK Retail dataset: K = 100, α = 1. • Recipe dataset: K = 100, α = 0.01 • Instacart dataset: K = 100, α = 0.001. • Million Song dataset: K = 100, α = 0.0001. 
• Book dataset: K = 100, α = 0.001 Scalable NDPP (Gartrell et al., 2021). As described in Section 2.1, we use K to denote the number of item feature dimensions for the symmetric component V and the dimensionality of the nonsymmetric component D. α and β are the regularization hyperparameters. We use the following hyperparameter settings: • UK dataset: K = 100, α = 0.01. • Recipe dataset: K = 100, α = β = 0.01. • Instacart dataset: K = 100, α = 0.001. • Million Song dataset: K = 100, α = 0.01. • Book dataset: K = 100, α = β = 0.1 ONDPP. As described in Section 5, we use K to denote the number of item feature dimensions for the symmetric component V and the dimensionality of the nonsymmetric component C. α, β, and γ are the regularization hyperparameters. We use the following hyperparameter settings: • UK dataset: K = 100, α = β = 0.01, γ = 0.5. • Recipe dataset: K = 100, α = β = 0.01, γ = 0.1. • Instacart dataset: K = 100, α = β = 0.001, γ = 0.001. • Million Song dataset: K = 100, α = β = 0.01, γ = 0.2. • Book dataset: K = 100, α = β = 0.01, γ = 0.1. For all of the above model configurations and datasets, we use a batch size of 800 during training. D YOULA DECOMPOSITION: SPECTRAL DECOMPOSITION FOR SKEW-SYMMETRIC MATRIX We provide some basic facts on the spectral decomposition of a skew-symmetric matrix, and introduce an efficient algorithm for this decomposition when it is given by a low-rank factorization. We write i := √ −1 and vH as the conjugate transpose of v ∈ CM , and denote Re(z) and Im(z) by the real and imaginary parts of a complex number z, respectively. Given B ∈ RM×K and D ∈ RK×K , consider a rank-K skew-symmetric matrix B(D −D⊤)B⊤. Note that all nonzero eigenvalues of a real-valued skew-symmetric matrix are purely imaginary. Denote iσ1,−iσ1, . . . , iσK/2,−iσK/2 by its nonzero eigenvalues where each of σj is real, and a1 + ib1,a1 − ib1, . . .aK/2 + ibK/2,aK/2 − ibK/2 by the corresponding eigenvectors for aj , bj ∈ RM , which come in conjugate pairs. Then, we can write B(D −D⊤)B⊤ = K/2∑ j=1 iσj(aj + ibj)(aj + ibj) H − iσj(aj − ibj)(aj − ibj)H (15) = K/2∑ j=1 2σj(ajb ⊤ j − bja⊤j ) (16) = K/2∑ j=1 [ aj − bj aj + bj ] [ 0 σj −σj 0 ] [ a⊤j − b⊤j a⊤j + b ⊤ j ] . (17) Note that a1 ± b1, . . . ,aK/2 ± bK/2 are real-valued orthonormal vectors, because a1, b1, . . . ,aK/2, bK/2 are orthogonal to each other and ∥aj ± bj∥22 = ∥aj∥ 2 2 + ∥bj∥ 2 2 = 1 for all j. The pair {(σj ,aj − bj ,aj + bj)}K/2j=1 is often called the Youla decomposition (Youla, 1961) of B(D −D⊤)B⊤. To efficiently compute the Youla decomposition of a rank-K matrix, we use the following result. Proposition 2 (Proposition 1, Nakatsukasa (2019)). Given A,B ∈ CM×K , the nonzero eigenvalues of AB⊤ ∈ CM×M and B⊤A ∈ CK×K are identical. In addition, if (λ,v) is an eigenpair of B⊤A with λ ̸= 0, then (λ,Av/ ∥Av∥2) is an eigenpair of AB⊤. From the above proposition, one can first compute (D −D⊤)B⊤B and then apply the eigendecomposition to that K-by-K matrix. Taking the imaginary part of the obtained eigenvalues gives us the σj’s, and multiplying B by the eigenvectors gives us the eigenvectors of B(D −D⊤)B⊤. In addition, this can be done in O(MK2 +K3) time; when M > K it runs much faster than the eigendecomposition of B(D −D⊤)B⊤, which requires O(M3) time. The pseudo-code of the Youla decomposition is provided in Algorithm 4. Algorithm 4 Youla decomposition of low-rank skew-symmetric matrix 1: procedure YOULADECOMPOSITION(B,D) 2: {(ηj , zj), (ηj , zj)}K/2j=1 ← eigendecomposition of (D −D⊤)B⊤B 3: for j = 1, . . . 
,K/2 do 4: σj ← Im(ηj) for j = 1, . . . ,K/2 5: y2j−1 ← B (Re(zj)− Im(zj)) 6: y2j ← B (Re(zj) + Im(zj)) 7: yj ← yj/ ∥yj∥ for j = 1, . . . ,K 8: return {(σj ,y2j−1,y2j)}K/2j=1 E PROOFS E.1 PROOF OF THEOREM 1 Theorem 1. For every subset Y ⊆ [M ], it holds that det(LY ) ≤ det(L̂Y ). Moreover, equality holds when the size of Y is equal to the rank of L. Proof of Theorem 1. It is enough to fix Y ⊆ [M ] such that 1 ≤ |Y | ≤ 2K, because the rank of both L and L̂ is up to 2K. Denote k := |Y | and ( [2K] k ) := {I ⊆ [2K]; |I| = k} for k ≤ 2K. We recall the definition of L̂: given V ,B,D such that L = V V ⊤ + B(D −D⊤)B⊤, let {(ρi,vi)}Ki=1 be the eigendecomposition of V V ⊤ and {(σj ,y2j−1,y2j)}K/2j=1 be the Youla decomposition of B(D −D⊤)B⊤. Denote Z := [v1, . . . ,vK ,y1, . . . ,yK ] ∈ RM×2K and X := diag ( ρ, . . . , ρK , [ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]) , X̂ := diag ( ρ1, . . . , ρK , [ σ1 0 0 σ1 ] , . . . , [ σK/2 0 0 σK/2 ]) , so that L = ZXZ⊤ and L̂ = ZX̂Z⊤. Applying the Cauchy-Binet formula twice, we can write the determinant of the principal submatrices of both L and L̂: det(LY ) = ∑ I∈([2K]k ) ∑ J∈([2K]k ) det(XI,J) det(ZY,I) det(ZY,J), (18) det(L̂Y ) = ∑ I∈([2K]k ) ∑ J∈([2K]k ) det(X̂I,J) det(ZY,I) det(ZY,J) = ∑ I∈([2K]k ) det(X̂I) det(ZY,I) 2, (19) where Eq. (19) follows from the fact that X̂ is diagonal, which means that det(X̂I,J) = 0 for I ̸= J . When the size of Y is equal to the rank of L (i.e., k = 2K), the summations in Eqs. (18) and (19) simplify to single terms: det(LY ) = det(X) det(ZY,:)2 and det(L̂Y ) = det(X̂) det(ZY,:)2. Now, observe that the determinants of the full X and X̂ matrices are identical: det(X) = det(X̂) =∏K i=1 ρi ∏K/2 j=1 σ 2 j . Hence, it holds that det(LY ) = det(L̂Y ). This proves the second statement of the theorem. To prove that det(LY ) ≤ det(L̂Y ) for smaller subsets Y , we will use the following: Claim 1. For every I, J ∈ ( [2K] k ) such that det(XI,J) ̸= 0, there exists a (nonempty) collection of subset pairs S(I, J) ⊆ ( [2K] k ) × ( [2K] k ) such that∑ (I′,J′)∈S(I,J) det(XI,J) det(ZY,I) det(ZY,J) ≤ ∑ (I′,J′)∈S(I,J) det(X̂I,I) det(ZY,I) 2. (20) Claim 2. The number of nonzero terms in Eq. (18) is identical to that in Eq. (19). Combining Claim 1 with Claim 2 yields det(LY ) = ∑ I,J∈([2K]k ) det(XI,J) det(ZY,I) det(ZY,J) ≤ ∑ I∈([2K]k ) det(X̂I,I) det(ZY,I) 2 = det(L̂Y ). We conclude the proof of Theorem 1. Below we provide proofs for Claim 1 and Claim 2. Proof of Claim 1. Recall that X is a block-diagonal matrix, where each block is of size either 1-by-1, containing ρi, or 2-by-2, containing both σj and −σj in the form [ 0 σj −σj 0 ] . A submatrix XI,J ∈ Rk×k with rows I and columns J will only have a nonzero determinant if it contains no all-zero row or column. Hence, any XI,J with nonzero determinant will have the following form (or some permutation of this block-diagonal): XI,J = ρp1 · · · 0 ... . . . ... 0 0 . . . ρp|PI,J | ±σq1 · · · 0 ... . . . ... 0 · · · ±σq|QI,J | 0 σr1 −σr1 0 . . . 0 0 σr|RI,J | −σr|RI,J | 0 (21) and we denote P I,J := {p1, . . . , p|P I,J |}, QI,J := {q1, . . . , q|QI,J |}, and RI,J := {r1, . . . , r|RI,J |}. Indices p ∈ P I,J yield a diagonal matrix with entries ρp. For such p, both I and J must contain index p. Indices r ∈ RI,J yield a block-diagonal matrix of the form [ 0 σr −σr 0 ] . For such r, both I and J must contain a pair of indices, (K + 2r − 1,K + 2r). Finally, indices q ∈ QI,J yield a diagonal matrix with entries of ±σq (the sign can be + or −). 
For such q, I contains K + 2q − 1 or K + 2q, and J must contain the other. Note that there is no intersection between QI,J and RI,J . If QI,J is an empty set (i.e., I = J), then det(XI,J) = det(X̂I,J) and det(XI,J) det(ZY,I) det(ZY,J) = det(X̂I) det(ZY,I) 2. (22) Thus, the terms in Eq. (18) in this case appear in Eq. (19). Now assume that QI,J ̸= ∅ and consider the following set of pairs: S(I, J) := {(I ′, J ′) : P I,J = P I′,J′ , QI,J = QI′,J′ , RI,J = RI′,J′}. In other words, for (I ′, J ′) ∈ S(I, J), the diagonal XI′,J′ contains ρp, [ 0 σr −σr 0 ] exactly as in XI,J . However, the signs of the σr’s may differ from XI,J . Combining this observation with the definition of X̂ , |det(XI′,J′)| = |det(XI,J)| = det(X̂I) = det(X̂I′) = det(X̂J) = det(X̂J′). (23) Therefore, ∑ (I′,J′)∈S(I,J) det(XI′,J′) det(ZY,I′) det(ZY,J′) (24) ≤ ∑ (I′,J′)∈S(I,J) |det(XI′,J′)|det(ZY,I′) det(ZY,J′) (25) = det(X̂I) ∑ (I′,J′)∈S(I,J) det(ZY,I′) det(ZY,J′) (26) ≤ det(X̂I) ∑ (I′,∗)∈S(I,J) det(ZY,I′) 2 (27) = ∑ (I′,∗)∈S(I,J) det(X̂I′) det(ZY,I′) 2 (28) where the third line comes from Eq. (23) and the fourth line follows from the rearrangement inequality. Note that application of this inequality does not change the number of terms in the sum. This completes the proof of Claim 1. Proof of Claim 2. In Eq. (19), observe that det(X̂I) det(ZY,I)2 ̸= 0 if and only if det(X̂I) ̸= 0. Since all ρi’s and σj’s are positive, the number of I ⊆ [2K], |I| = k such that det(X̂I) ̸= 0 is equal to ( 2K k ) . Similarly, the number of nonzero terms in Eq. (18) equals the number of possible choices of I, J ∈ ( [2K] k ) such that det(XI,J) ̸= 0. This can be counted as follows: first choose i items in {ρ1, . . . , ρK} for i = 0, . . . , k; then, choose j items in {[ 0 σ1 −σ1 0 ] , . . . , [ 0 σK/2 −σK/2 0 ]} for j = 0, . . . , ⌊k−i2 ⌋; lastly, choose k − i − 2j of {±σq; q /∈ RI,J}, then choose the sign for each of these (σq or −σq). Combining all of these choices, the total number of nonzero terms is: k∑ i=0 ( K i ) ︸ ︷︷ ︸ choice of ρp ⌊ k−i2 ⌋∑ j=0 ( K/2 j ) ︸ ︷︷ ︸ choice of [ 0 σr −σr 0 ] ( K/2− j k − i− 2j ) 2k−i−2j︸ ︷︷ ︸ choice of ±σq (29) = k∑ i=0 ( K i ) ( K k − i ) (30) = ( 2K k ) (31) where the second line comes from the fact that ( 2n m ) = ∑⌊m2 ⌋ j=0 ( n j )( n−j m−2j ) 2m−2j for any integers n,m ∈ N such that m ≤ 2n (see (1.69) in Quaintance (2010)), and the third line follows from the fact that ∑r i=0 ( m i )( n r−i ) = ( n+m r ) for n,m, r ∈ N (Vandermonde’s identity). Hence, both the number of nonzero terms in Eqs. (18) and (19) is equal to ( 2K k ) . This completes the proof of Claim 2. E.2 PROOF OF PROPOSITION 1 Proposition 1. The tree-based sampling procedure SAMPLEDPP in Algorithm 3 runs in time O(K + k3 logM + k4), where k is the size of the sampled set†††. Proof of Proposition 1. Since computing pℓ takes O(k2) from Eq. (12), and since the binary tree has depth O(logM), SAMPLEITEM in Algorithm 3 runs in O(k2 logM) time. Moreover, the query matrix QY can be updated in O(k3) time as it only requires a k-by-k matrix inversion. Therefore, the overall runtime of the tree-based elementary DPP sampling algorithm (after pre-processing) is O(k3 logM + k4). This improves the previous O(k4 logM) runtime studied in Gillenwater et al. (2019). Combining this with elementary DPP selection (Line 15 in Algorithm 3), we can sample a set in O(K + k3 logM + k4) time. This completes the proof of Proposition 1. E.3 PROOF OF THEOREM 2 Theorem 2. 
Given an NDPP kernel $L = VV^\top + B(D - D^\top)B^\top$ for $V, B \in \mathbb{R}^{M \times K}$, $D \in \mathbb{R}^{K \times K}$, consider the proposal kernel $\hat{L}$ as proposed in Section 4.1. Let $\{\sigma_j\}_{j=1}^{K/2}$ be the positive eigenvalues obtained from the Youla decomposition of $B(D - D^\top)B^\top$. If $V \perp B$, then
$$\frac{\det(\hat{L} + I)}{\det(L + I)} = \prod_{j=1}^{K/2} \left(1 + \frac{2\sigma_j}{\sigma_j^2 + 1}\right) \leq (1 + \omega)^{K/2}, \quad \text{where } \omega = \frac{2}{K} \sum_{j=1}^{K/2} \frac{2\sigma_j}{\sigma_j^2 + 1} \in (0, 1].$$
Proof of Theorem 2. Since the column spaces of $V$ and $B$ are orthogonal, the corresponding eigenvectors are also orthogonal, i.e., $Z^\top Z = I_{2K}$. Then,
$$\det(L + I) = \det(ZXZ^\top + I) = \det(XZ^\top Z + I_{2K}) = \det(X + I_{2K}) \quad (32)$$
$$= \prod_{i=1}^{K} (\rho_i + 1) \prod_{j=1}^{K/2} \det\begin{pmatrix} 1 & \sigma_j \\ -\sigma_j & 1 \end{pmatrix} \quad (33)$$
$$= \prod_{i=1}^{K} (\rho_i + 1) \prod_{j=1}^{K/2} (\sigma_j^2 + 1), \quad (34)$$
and similarly
$$\det(\hat{L} + I) = \prod_{i=1}^{K} (\rho_i + 1) \prod_{j=1}^{K/2} (\sigma_j + 1)^2. \quad (35)$$
Combining Eqs. (34) and (35), we have that
$$\frac{\det(\hat{L} + I)}{\det(L + I)} = \prod_{j=1}^{K/2} \frac{(\sigma_j + 1)^2}{\sigma_j^2 + 1} = \prod_{j=1}^{K/2} \left(1 + \frac{2\sigma_j}{\sigma_j^2 + 1}\right) \leq \left(1 + \frac{2}{K} \sum_{j=1}^{K/2} \frac{2\sigma_j}{\sigma_j^2 + 1}\right)^{K/2} \quad (36)$$
where the inequality holds from Jensen's inequality. This completes the proof of Theorem 2.
††† Computing $p_\ell$ via Eq. (12) improves on Gillenwater et al. (2019)'s $O(k^4 \log M)$ runtime for this step.
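To make the low-rank Youla decomposition of Appendix D (Algorithm 4, via Proposition 2) concrete, here is a small NumPy sketch. It is an illustrative reconstruction under variable names of our own choosing (including the final numerical check), not the authors' released code.

import numpy as np

def youla_decomposition(B, D):
    # Youla decomposition of the rank-K skew-symmetric matrix B (D - D^T) B^T.
    # Following Proposition 2, we eigendecompose the K-by-K matrix (D - D^T) B^T B
    # instead of the M-by-M matrix, for an O(M K^2 + K^3) cost.
    K = D.shape[0]
    eigvals, eigvecs = np.linalg.eig((D - D.T) @ (B.T @ B))    # complex conjugate pairs
    sigmas, pairs = [], []
    for j in range(K):
        if eigvals[j].imag > 1e-10:            # keep one member of each conjugate pair
            z = eigvecs[:, j]
            sigma = eigvals[j].imag             # sigma_j = Im(eta_j)
            y_odd = B @ (z.real - z.imag)       # y_{2j-1}
            y_even = B @ (z.real + z.imag)      # y_{2j}
            sigmas.append(sigma)
            pairs.append((y_odd / np.linalg.norm(y_odd), y_even / np.linalg.norm(y_even)))
    return np.array(sigmas), pairs

# Sanity check on a random instance: the pairs should reconstruct the skew-symmetric part,
# B (D - D^T) B^T = sum_j sigma_j (y_{2j-1} y_{2j}^T - y_{2j} y_{2j-1}^T), as in Eq. (17).
M, K = 50, 6
rng = np.random.default_rng(0)
B, D = rng.standard_normal((M, K)), rng.standard_normal((K, K))
sigmas, pairs = youla_decomposition(B, D)
S_rec = sum(s * (np.outer(yo, ye) - np.outer(ye, yo)) for s, (yo, ye) in zip(sigmas, pairs))
err = np.abs(S_rec - B @ (D - D.T) @ B.T).max()    # typically near machine precision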
1. What is the focus of the paper regarding DPPs and NDPPs?
2. What are the strengths of the proposed algorithm in terms of complexity and sampling efficiency?
3. Do you have any concerns or suggestions regarding the proof of the main theorem?
4. How does the reviewer assess the novelty and impact of the proposed method in the context of DPPs and NDPPs?
5. Are there any limitations or potential improvements regarding the simplicity and generalizability of the proposed method?
Summary Of The Paper Review
Summary Of The Paper
This paper gives an exact sampling algorithm to sample from DPPs based on non-symmetric, low-rank kernel matrices of the form L = VV⊤ + B(D − D⊤)B⊤, where V and B are of size M by K and D is square, non-symmetric and of size K by K. The marginal kernel of such a DPP may be written in the form K = ZWZ⊤ with Z an M by 2K matrix and W a square matrix of size 2K that is not symmetric. The authors then:
- discuss in Section 2 how Poulson's algorithm is adapted to sample from such NDPPs
- in Section 3: show that, as L is low-rank, the generic O(M^3) cost of Poulson's algorithm can be reduced to O(MK^2), as updates of the marginal kernel can be efficiently done by only updating the inner W matrix
- in Section 4: give a sublinear-time rejection-sampling based algorithm adapting the tree-based algorithm of Gillenwater et al. In this section, they:
  - propose a well-adapted DPP based on a symmetric kernel L̂ that is easy to sample (it is a DPP based on a symmetric PSD kernel and can be sampled from many different fast existing algorithms), and for which any set Y verifies det(L_Y) ≤ det(L̂_Y), where the upper bound is actually attained (Thm 1). In order to exactly control the rejection rate of their proposed algorithm (which is equal to det(L̂ + I)/det(L + I)), the authors simplify the model by supposing that V and B are orthogonal to each other (Thm 2)
  - in passing, the authors observe that the complexity of Gillenwater et al.'s algorithm can in fact be easily improved by a factor k, where k is the size of the sampled set
- in Sections 5 and 6, experiments are provided comparing their sampling method with the state-of-the-art

Review
The paper is well-written. Even though it is a somewhat straightforward mix of ideas from Gartrell et al. (motivating such NDPPs), Gillenwater et al. (sublinear sampling algorithm of symmetric DPPs), and Poulson (Cholesky-based sampling of DPPs and NDPPs), I believe that the results are interesting enough for acceptance. The main contribution of the paper is, to my eyes, to have found a symmetric DPP that is well-suited for rejection sampling. The proof that for all Y, det(L_Y) ≤ det(L̂_Y), essential for the rejection sampling framework to work, is an interesting contribution (and could be used in other works on DPPs as well). In passing, I find the proof hard to follow and suggest to the authors to do their best to find ways of simplifying it as much as possible for the camera-ready version. The simpler the proof, the more easily it could be transferred to other scenarios, increasing the potential impact of the paper. As for Theorem 2, the orthogonality constraint is frustrating and I would imagine that further efforts could lift this constraint and obtain a bound that depends on how orthogonal both subspaces are (and recovering the current bound when they are indeed orthogonal). On the other hand, ONDPPs can indeed be motivated for learning as they indeed ensure that the full "available rank" is put to contribution. The empirical results tend to validate this intuitive argument.
ICLR
Title Context Dependent Modulation of Activation Function Abstract We propose a modification to traditional Artificial Neural Networks (ANNs), which provides the ANNs with new aptitudes motivated by biological neurons. Biological neurons work far beyond linearly summing up synaptic inputs and then transforming the integrated information. A biological neuron change firing modes accordingly to peripheral factors (e.g., neuromodulators) as well as intrinsic ones. Our modification connects a new type of ANN nodes, which mimic the function of biological neuromodulators and are termed modulators, to enable other traditional ANN nodes to adjust their activation sensitivities in run-time based on their input patterns. In this manner, we enable the slope of the activation function to be context dependent. This modification produces statistically significant improvements in comparison with traditional ANN nodes in the context of Convolutional Neural Networks and Long Short-Term Memory networks. 1 INTRODUCTION Artificial neural networks (ANNs), such as convolutional neural networks (CNNs) (LeCun et al., 1998) and long short-term memory (LSTM) cells (Hochreiter & Schmidhuber, 1997), have incredible capabilities and are applied in a variety of applications including computer vision, natural language analysis, and speech recognition among others. Historically, the development of ANNs (e.g., network architectures and learning algorithms) has benefited significantly from collaborations with Psych-Neuro communities (Churchland & Sejnowski, 1988; Hebb, 1949; Hinton et al., 1984; Hopfield, 1982; McCulloch & Pitts, 1943; Turing, 1950; Hassabis et al., 2017; Elman, 1990; Hopfield & Tank, 1986; Jordan, 1997; Hassabis et al., 2017). The information processing capabilities of traditional ANN nodes are rather rigid when compared to the plasticity of real neurons. A typical traditional ANN node linearly integrate its input signals and run the integration through a transformation called an activation function, which simply takes in a scalar value and outputs another. Of the most popular Activation Functions are sigmoid (Mikolov et al., 2010), tanh (Kalman & Kwasny, 1992) and ReLU (Nair & Hinton, 2010). Researchers have shown that it could be beneficial to deploy layer-/node- specific activation functions in a deep ANN (Chen & Chang, 1996; Solazzi & Uncini, 2000; Goh & Mandic, 2003; He et al., 2015; Agostinelli et al., 2014). However, each ANN node is traditionally stuck with a fixed activation function once trained. Therefore, the same input integration will always produce the same output. This fails to replicate the amazing capability of individual biological neurons to conduct complex nonlinear mappings from inputs to outputs (Antic et al., 2010; Hassabis et al., 2017; Marblestone et al., 2016). In this study, we propose one new modification to ANN architectures by adding a new type of node, termed modulators, to modulate the activation sensitivity of the ANN nodes targeted by modulators (see Figures 1-3 for examples). In one possible setting, a modulator and its target ANN nodes share the same inputs. The modulator maps the input into a modulation signal, which is fed into each target node. Each target node multiples its input integration by the modulator signal prior to transformation by its traditional activation function. 
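To make this mechanism concrete, the sketch below gives a minimal PyTorch-style modulated fully-connected layer, anticipating the formulation in Section 2: a single sigmoid modulator shared by the layer scales every node's linear integration before a ReLU activation. The module and variable names are illustrative assumptions; this is a sketch, not the authors' implementation.

import torch
import torch.nn as nn

class ModulatedLinear(nn.Module):
    # A fully-connected layer whose pre-activations are scaled by a shared,
    # input-dependent modulation signal s = tau(w_mod^T x) before the activation.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)   # w_{l,k}: per-node integration
        self.modulator = nn.Linear(in_features, 1)           # w_l: one modulator per layer
        self.tau = torch.sigmoid                              # modulator activation tau_l
        self.phi = torch.relu                                 # node activation phi_{l,k}

    def forward(self, x):
        s = self.tau(self.modulator(x))          # modulation signal, shape (batch, 1)
        v = self.linear(x)                       # per-node linear integration, (batch, out)
        return self.phi(s * v)                   # o_{l,k} = phi(tau(w_l^T x) * w_{l,k}^T x)

# Illustrative usage:
layer = ModulatedLinear(128, 64)
y = layer(torch.randn(32, 128))                  # output of shape (32, 64)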
Examples of neuronal principles that may be captured by our new modification include intrinsic excitability, diverse firing modes, type 1 and type 2 forms of firing rate integration, activity dependent facilitation and depression and, most notably, neuromodulation (Marder et al., 1996; Sherman, 2001; Ward, 2003; Ringrose & Paro, 2004). Our modulator is relevant to the attention mechanism (Larochelle & Hinton, 2010; Mnih et al., 2014), which dynamically restricts information pathways and has been found to be very useful in practice. Attention mechanisms apply the attention weights, which are calculated in run-time, to the outputs of ANN nodes or LSTM cells. Notably, the gating mechanism in a Simple LSTM cell can also be viewed as a dynamical information modifier. A gate takes the input of the LSTM cell and outputs gating signals for filtering the outputs of its target ANN nodes in the same LSTM cell. A similar gating mechanism was proposed in the Gated Linear Unit (Dauphin et al., 2016) for CNNs. Different from the attention and gating mechanisms, which are applied to the outputs of the target nodes, our modulation mechanism adjusts the sensitivities of the target ANN nodes in run-time by changing the slopes of the corresponding activation functions. Hence, the modulator can also be used as a complement to the attention and gate mechanisms. Below we will explain our modulator mechanism in detail. Experimentation shows that the modulation mechanism can help achieve better test stability and higher test performance using easy to implement and significantly simpler models. Finally, we conclude the paper with discussions on the relevance to the properties of actual neurons. 2 METHODS We designed two modulation mechanisms, one for CNNs and the other for LSTMs. In modulating CNNs, our modulator (see Figure 1) is a layer-specific one that is best compared to the biological phenomenon of neuromodulation. Each CNN layer before activation has one modulator, which shares the input ~x with other CNN nodes in the same layer (Figure 1Left). The modulator (Figure 1Right) of the lth CNN layer calculates a scalar modulation signal as sl = τl(~wTl ~x), where τl(·) is the activation function of the lth modulator, and feeds sl to every other CNN node in the same layer. The kth modulated CNN node in the lth layer linearly integrates its inputs as a traditional ANN nodes vl,k = ~wTl,k~x and modulates the integration to get ul,k = sl · vl,k prior to its traditional activation step ϕl,k(·). The final output is ol,k = ϕl,k(τl(~wTl ~x) · ~wTl,k~x). The above modulation mechanism is slightly modified to expand Densely Connected CNNs (Iandola et al., 2014)(see Figure 2). A modulator is added to each dense block layer to modulate the outputs of its convolution nodes. Given a specific input, the modulator outputs a scalar modulation signal that is multiplied to the scalar outputs of the target convolution nodes in the same layer. In addition to the Cellgate, there are three modifying gates (Forget, Input, and Output) in a traditional LSTM cell. Each gate is a full layer of ANN nodes. Each of ANN node in a gate uses sigmoid to transform the integration of the input into regulation signals. The traditional LSTM cell transforms the input integration to an intermediate output (i.e., C̃t in Figure 3). The Forget gate regulates what is removed from the old cell state (i.e., t−1 in Figure 3), and the Input gate what in C̃t is added to obtain the new cell state (i.e., t). 
The new cell state is transformed and then regulated by the output gate to become part of the input of the next time point. In modulating LSTM (see Figure 3), for the purpose of easier implementation, we create a new ”modulation gate” (the round dash rectangle in Figure 3) for node-specific sensitivity-adjustment which is most analogous to neuronal facilitation and depression. Different from a conventional LSTM that calculates C̃t = ϕ(Wc[~xt,~ht−1]), a modulated LSTM calculates C̃t = ϕ(τ(WM [~xt,~ht−1]) · (Wc[~xt,~ht−1])). In the above designs, both a multi-layer CNN and single-layer LSTM had multiple modulator nodes within each model. A generalization to the above designs is to allow a modulator to take the outputs from other CNN layers or those of the LSTM cell at other time points as the inputs. 3 EXPERIMENTAL RESULTS 3.1 MODULATED CNNS In our experiments with CNNs, the activation functions of the traditional CNN nodes was ReLU, with our modulator nodes using a sigmoid. We tested six total settings: a vanilla CNN vs a modulated vanilla CNN, a vanilla DenseNet vs a modulated DenseNet, and a vanilla DenseNet-lite vs a modulated DenseNet-lite. The vanilla CNN has 2 convolution blocks, each of which contains two sequential convolution layers, a pooling layer, and a dropout layer. A fully connected layer of 512 nodes is appended at the very end of the model. The convolution layers in the first block have 32 filters with a size of 3x3 while the convolution layers in the second block have 64 filters with a size of 3x3. We apply a dropout of 0.25 to each block. The vanilla DenseNet used the structure (40 in depth and 12 in growth-rate) reported in the original DenseNet paper (Iandola et al., 2014) and a dropout of 0.5 is used in our experiment. The vanilla DenseNet-lite has a similar structure to the vanilla DenseNet, however, uses a smaller growth-rate of 10 instead of 12 in the original configuration, which results in 28% fewer parameters. The modulators are added to the vanilla CNN, the vanilla DenseNet, and the vanilla DenseNet-lite in the way described in Figures 1 and 2 to obtain their modulated versions, respectively. Table 1 summarizes the numbers of the parameters in the above models to indicate their complexities. The modulated networks have slightly more parameters than their vanilla versions do. All the experiments were run for 150 epochs on 4 NVIDIA Titan Xp GPUs with a mini-batch size of 128. CIFAR-10 dataset (Krizhevsky & Hinton, 2009) was used in this experiment. CIFAR-10 consists of colored images at a resolution of 32x32 pixels. The training and test set are containing 50000 and 10000 images respectively. We held 20% of the training data for validation and applied data augmentation of shifting and mirroring on the training data. All the CNN models are trained using the Adam (Kingma & Ba, 2014) optimization method with a learning rate of 1e-3 and shrinks by a factor of 10 at 50% and 80% of the training progress. As shown in Figure 4, the vanilla CNN model begins to overfit after 80 training epochs. Although the modulated CNN model is slightly more complex, it is less prone to overfitting and excels its vanilla counterpart by a large margin (see Table 2). Modulation also significantly helps DenseNets in training, validation, and test. The modulated DenseNet/DenseNet-lite models consistently outperform their vanilla counterparts by a noticeable margin (see Figures 5(a) and 5(b)) during training. 
The validation and test results of the modulated DenseNet/DenseNet-lite models are also better than those of their vanilla counterparts. It is not surprising that the vanilla DenseNet-lite model underperforms the vanilla DenseNet model. Interestingly, despite having 28% fewer parameters than the vanilla DenseNet model, the modulated DenseNet-lite model outperforms the vanilla DenseNet model (see the dash orange curve vs the solid blue curve in Figure 5(b) and Table 2). 3.2 MODULATED LSTM Two datasets were used in the LSTM experiments. The first one is the NAMES dataset (Sean, 2016), in which the goal is to take a name as a string and classify its ethnicity or country of origin. Approximately 10% of the data-set was reserved for testing. The second experiment used the SST2 data-set (Socher et al., 2013), which requires a trained model to classify whether a movie review is positive or negative based on the raw text in the review. The SST2 is identical to the SST1 with the exception of the neutral category removed (Socher et al., 2013), leaving only positive and negative reviews. About 20% of the data-set was reserved for testing. Since modulators noticeably increase the parameters in a modulated LSTM, to perform fair comparisons, we create three versions of vanilla LSTMs (see Controls 1, 2, & 3 in Figure 6). Control 1 has an identical total LSTM cell size. Control 2 has the identical number of nodes per layer. Control 3 has an extra Input gate so that it has both an identical total number of nodes and identical nodes per layer. The numbers of parameters in the modulated LSTM and control LSTMs are listed in Table 3 for comparison. The hyper-parameters for the first experiment were set as following: the hidden dimension was set to 32, batch size to 32, embedding dimension to 128, initial learning rate to .01, learning rate decay to 1e-4, an SGD optimizer was used, with dropout of 0.2 applied to the last hidden state of the LSTM and 100 epochs were collected. This condition was tested on the name categorization data-set. The number of parameters in this model ranged from 4.1 K to 6.4 K, depending on the condition. We repeated the experimental runs 30 times. Based on the simplicity of the data-set and the relative sparsity of parameters, this condition will be referred to as Simple-LSTM. As for the second experiment: the hidden dimension was set to 150, he batch size was set to 5, the embedding dimension was set to 300, the initial learning rate was set to 1e-3, there was no learning rate decay, an Adam optimizer was used with no dropout and 100 epochs were collected. The number of parameters in this model ranged from 57.6 K to 90 K, depending on the control setup. This experiment was repeated 100 times. Based on the complexity of the data-set and the relatively large amount of parameters, this condition will be referred to as Advanced-LSTM. In all experiments, the models were trained for 100 epochs. We can observe from the results in Table 4 that, the mean test performance of both modulated LSTMs outperformed all three control groups and achieved the highest validation performance. Statistical significance varied between the two LSTM models. In the Vanilla-LSTM (n = 30), with τl(·) set to sigmoid, statistical significance ranged between p<.06 (Control 3) and P<.001 (Control 2). In the Advanced-LSTM (n = 100), with τl(·) set to tanhshrink, statistical significance was a consistently P<.001 in all conditions. In all cases, variance was lowest in the modulated condition. 
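Before turning to the results, the cell update used by the modulated LSTM (Section 2, Figure 3) can be sketched as follows. This is an illustrative re-implementation under assumed tensor names, with a tanh cell activation and a configurable modulator activation (sigmoid in the Simple-LSTM setting, tanhshrink in the Advanced-LSTM setting); it is not the authors' code.

import torch
import torch.nn as nn

class ModulatedLSTMCell(nn.Module):
    # Standard LSTM cell with an extra modulation gate that rescales the candidate
    # cell input: C~_t = tanh( tau(W_M [x_t, h_{t-1}]) * (W_c [x_t, h_{t-1}]) ).
    def __init__(self, input_size, hidden_size, tau=torch.sigmoid):
        super().__init__()
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)   # f, i, o, c
        self.modulation = nn.Linear(input_size + hidden_size, hidden_size)  # W_M
        self.tau = tau

    def forward(self, x_t, state):
        h_prev, c_prev = state
        z = torch.cat([x_t, h_prev], dim=-1)
        f, i, o, c_hat = self.gates(z).chunk(4, dim=-1)
        f, i, o = torch.sigmoid(f), torch.sigmoid(i), torch.sigmoid(o)
        c_tilde = torch.tanh(self.tau(self.modulation(z)) * c_hat)   # modulated candidate
        c_t = f * c_prev + i * c_tilde
        h_t = o * torch.tanh(c_t)
        return h_t, (h_t, c_t)

# Illustrative usage with the Advanced-LSTM dimensions (embedding 300, hidden 150, batch 5)
# and a tanhshrink modulator:
cell = ModulatedLSTMCell(300, 150, tau=nn.Tanhshrink())
h, state = cell(torch.randn(5, 300), (torch.zeros(5, 150), torch.zeros(5, 150)))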
We further zoom in the activation data-flow and visualized the the effect of our modulation in Table 3.2. The control condition and modulated condition was compared side by side. On the left we can observe the impact of the Ingate on the amplitude of the tanh activation function, on the right we can observe our modulation adjust the slope as well. Each input generates a context dependent activation as shown in continuous lines and specific activations are represented by the blue dots which corresponded to a point on a specific line. Our modulation modification provides new aptitudes for the model to learn, generalize and appears to add a stabilizing feature to the dynamic input-output relationship. 4 CONCLUSION We propose a modulation mechanism addition to traditional ANNs so that the shape of the activation function can be context dependent. Experimental results show that the modulated models consistently outperform their original versions. Our experiment also implied adding modulator can reduce overfitting. We demonstrated even with fewer parameters, the modulated model can still perform on par with it vanilla version of a bigger size. This modulation idea can also be expanded to other setting, such as, different modulator activation or different structure inside the modulator. 5 DISCUSSION It was frequently observed in preliminary testing that arbitrarily increasing model parameters actually hurt network performance, so future studies will be aimed at investigating the relationship between the number of model parameters and the performance of the network. Additionally, it will be important to determine the interaction between specific network implementations and the ideal Activation Function wrapping for slope-determining neurons. Lastly, it may be useful to investigate layer-wide single-node modulation on models with parallel LSTM’s. Epigenetics refers to the activation and inactivation of genes (Weinhold, 2006), often as a result of environmental factors. These changes in gene-expression result in modifications to the generation and regulation of cellular proteins, such as ion channels, that regulate how the cell controls the flow of current through the cell membrane (Meadows et al., 2016). The modulation of these proteins will strongly influence the tendency of a neuron to fire and hence affect the neurons function as a single computational node. These proteins, in turn, can influence epigenetic expression in the form of dynamic control (Kawasaki et al., 2004). Regarding the effects of these signals, we can compare the output of neurons and nodes from a variety of perspectives. First and foremost, intrinsic excitability refers to the ease with which a neurons electrical potential can increase, and this feature has been found to impact plasticity itself (Desai et al., 1999). From this view, the output of a node in an artificial neural network would correspond to a neurons firing rate, which Intrinsic Excitability is a large contributor to, and our extra gate would be setting the node’s intrinsic excitability. With the analogy of firing rate, another phenomenon can be considered. Neurons may experience various modes of information integration, typically labeled Type 1 and Type 2. Type 1 refers to continuous firing rate integration, while Type 2 refers to discontinuous information (Tateno et al., 2004). 
This is computationally explained as a function of interneuron communication resulting in neuron-activity nullclines with either heavy overlap or discontinuous saddle points (Miller, 2016). In biology, a neuron may switch between Type 1 and Type 2 depending on the presence of neuromodulator (Stiefel & Gutkin, 2012). Controlling the degree to which the tanh function encodes to a binary space, our modification may be conceived as determining the form of information integration. The final possible firing rate equivalence refers to the ability of real neurons to switch between different firing modes. While the common mode of firing, Tonic firing, generally encodes information in rate frequency, neurons in a Bursting mode (though there are many types of bursts) tend to encode information in a binary mode - either firing bursts or not (Tateno et al., 2004). Here too, our modification encompasses a biological phenomenon by enabling the switch between binary and continuous information. Another analogy to an ANN nodes output would be the neurotransmitter released. With this view, our modification is best expressed as an analogy to Activity Dependent Facilitation and Depression, phenomena which cause neurons to release either more or less neurotransmitter. Facilitation and depression occur in response to the same input: past activity (Reyes et al., 1998). Our modification enables a network to use previous activity to determine its current sensitivity to input, allowing for both Facilitation and Depression. On the topic of neurotransmitter release, neuromodulation is the most relevant topic to the previously shown experiments. Once again, Marblestone et al. (2016) explains the situation perfectly, expressing that research (Bargmann, 2012; Bargmann & Marder, 2013) has shown ”the same neuron or circuit can exhibit different input-output responses depending on a global circuit state, as reflected by the concentrations of various neuromodulators”. Relating to our modification, the slope of the activation function may be conceptualized as the mechanism of neuromodulation, with the new gate acting analogously to a source of neuromodulator for all nodes in the network. Returning to a Machine Learning approach, the ability to adjust the slope of an Activation Function has an immediate benefit in making the back-propagation gradient dynamic. For example, for Activations near 0, where the tanh Function gradient is largest, the effect of our modification on node output is minimal. However, at this point, our modification has the ability to decrease the gradient, perhaps acting as pseudo-learning-rate. On the other hand, at activations near 1 and -1, where the tanh Function gradient reaches 0, our modification causes the gradient to reappear, allowing for information to be extracted from inputs outside of the standard range. Additionally, by implementing a slope that is conditional on node input, the node has the ability to generate a wide range of functional Activation Functions, including asymmetric functions. Lastly, injecting noise has been found to help deep neural networks with noisy datasets (Zheng et al., 2016), which is noteworthy since noise may act as a stabilizer for neuronal firing rates, (Touboul et al., 2012). With this in mind, Table 3.2 demonstrates increased clustering in two-dimensional node-Activation space, when the Activation Function slope is made to be dynamic. 
This indicates that noise may be a mediator of our modification, improving network performance through stabilization, induced by increasing the variability of the input-output relationship. In summary, we have shown evidence that nodes in LSTMs and CNNs benefit from added complexity to their input-output dynamic. Specifically, having a node that adjusts the slope of the main layer’s nodes’ activation functions mimics the functionality of neuromodulators and is shown to benefit the network. The exact mechanism by which this modification improves network performance remains unknown, yet it is possible to support this approach from both a neuroscientific and machine-learning perspective. We believe this demonstrates the need for further research into discovering novel non-computationally-demanding methods of applying principles of neuroscience to artificial networks. 6 APPENDIX 6.1 SUPPLEMENTARY DATA METHODOLOGY Additionally we tested our modulator gate, with τl(·) set to sigmoid, on a much more computationally demanding three-layered LSTM network with weight drop method named awd-lstm-lm (Merity et al., 2017; 2018). This model was equipped to handle the Penn-Treebank dataset (Marcus et al., 1993) and was trained to minimize word perplexity. The network was trained for 500 epochs, however, the sample size was limited due to extremely long training times. 6.2 SUPPLEMENTARY DATA RESULTS On the Penn-Treebank dataset with the awd-lstm-lm implementation, sample size was restricted to 2 per condition, due to long training times and limited resources. However on the data collected, our model outperformed template perplexity, achieving an average of 58.4730 compared to the template average 58.7115. Due to the lack of a control for model parameters, interpretation of these results rests on the assumption that the author fine-tuned network parameters such that the template parameters maximized performance. 7 SUPPLEMENTARY DATA FIGURES & TABLES 7.1 AWD-LSTM-LM ON PENN-TREEBANK Table 7: Comparison of mean test Perplexities lower = better Model Epochs Modulated Control Statistical Analysis awd-lstm-lm on Penn-Treebank 500 58.4730 58.7115 T: 1.842 DOF: 1.9 Hedges’s G: 1.853 Figure 7: Validation Perplexity progress (lower = better) 7.2 SUPPLEMENTAL LSTM DATA
1. What is the focus of the paper regarding neural networks?
2. What are the strengths of the proposed approach, particularly in terms of implementation and applicability?
3. What are the weaknesses of the paper, especially regarding the comparison with prior works and experimental results?
4. Do you have any questions or concerns about the modulation mechanism and its relationship to other neural network architectures?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review
Paper summary: This paper proposes a method to scale the activations of a layer of neurons in an ANN depending on the inputs to that layer. The scaling factor, called modulation, is computed using a separate weight matrix and activation function. It is multiplied with each neuron's activation before applying its non-linearity. The weight matrix of the modulator is learned alongside the other weights of the network by backpropagation. The authors evaluate this modulated neural unit in convolutional neural networks, densely connected CNNs and recurrent networks consisting of LSTM units. Reported improvements above the baselines are between 1% - 3%.

Pro:
+ With some minor exceptions the paper is clearly written and comprehensible.
+ Experiments seem to have been performed with due diligence.
+ The proposed modulator is easy to implement and applicable to (almost) all network architectures.

Contra:
- Lin et. al. (2014) proposed a network in network architecture. In this architecture the output of each neural unit is computed using a small neural network contained in it and thus arbitrary, input-dependent activation functions can be realized and learned by each neuron. The proposed neural modulation mechanism in the paper at hand is in fact a more restricted version of the network-in-network model and the authors should discuss the relationship of their proposal to this prior work.
- When comparing the test accuracy of CNNs in Fig. 4 the result is questionable. If training of the vanilla CNN was stopped at its best validation loss (early stopping), the difference in accuracies would have been marginal. Also the choice of hyper-parameters may significantly affect the outcome of the comparison experiments. More experiments would be necessary to prove the advantage of this model over a wide range of hyper-parameters.

Minor points:
- It is unclear whether the modulator weights are shared along the depth of a CNN layer, i.e. between feature maps.
- Page 9: "Our modification enables a network to use previous activity to determine its current sensitivity to input [...]" => A vanilla LSTM is already capable of doing that using its input gate.
- Page 9: "[...] the ability to adjust the slope of an Activation Function has an immediate benefit in making the back-propagation gradient dynamic." => In fact ReLUs do not suffer from the vanishing gradient problem. Furthermore DenseNets already provide a short-path for the gradient flow by introducing skip connections.
- The discussion at the end adds little value and rather seems to be a motivation of the model than a discussion of the results.

Rating: My main concern is that the proposed modulator is a version of the network in network model restricted to providing a scaling factor. Although the authors motivate this model biologically, I do not see sufficient empirical evidence to believe that it is advantageous over the full network in network model by Lin et. al. I would recommend to add a direct comparison to that model to a future version of this paper.
ICLR
Title Context Dependent Modulation of Activation Function Abstract We propose a modification to traditional Artificial Neural Networks (ANNs), which provides the ANNs with new aptitudes motivated by biological neurons. Biological neurons work far beyond linearly summing up synaptic inputs and then transforming the integrated information. A biological neuron change firing modes accordingly to peripheral factors (e.g., neuromodulators) as well as intrinsic ones. Our modification connects a new type of ANN nodes, which mimic the function of biological neuromodulators and are termed modulators, to enable other traditional ANN nodes to adjust their activation sensitivities in run-time based on their input patterns. In this manner, we enable the slope of the activation function to be context dependent. This modification produces statistically significant improvements in comparison with traditional ANN nodes in the context of Convolutional Neural Networks and Long Short-Term Memory networks. 1 INTRODUCTION Artificial neural networks (ANNs), such as convolutional neural networks (CNNs) (LeCun et al., 1998) and long short-term memory (LSTM) cells (Hochreiter & Schmidhuber, 1997), have incredible capabilities and are applied in a variety of applications including computer vision, natural language analysis, and speech recognition among others. Historically, the development of ANNs (e.g., network architectures and learning algorithms) has benefited significantly from collaborations with Psych-Neuro communities (Churchland & Sejnowski, 1988; Hebb, 1949; Hinton et al., 1984; Hopfield, 1982; McCulloch & Pitts, 1943; Turing, 1950; Hassabis et al., 2017; Elman, 1990; Hopfield & Tank, 1986; Jordan, 1997; Hassabis et al., 2017). The information processing capabilities of traditional ANN nodes are rather rigid when compared to the plasticity of real neurons. A typical traditional ANN node linearly integrate its input signals and run the integration through a transformation called an activation function, which simply takes in a scalar value and outputs another. Of the most popular Activation Functions are sigmoid (Mikolov et al., 2010), tanh (Kalman & Kwasny, 1992) and ReLU (Nair & Hinton, 2010). Researchers have shown that it could be beneficial to deploy layer-/node- specific activation functions in a deep ANN (Chen & Chang, 1996; Solazzi & Uncini, 2000; Goh & Mandic, 2003; He et al., 2015; Agostinelli et al., 2014). However, each ANN node is traditionally stuck with a fixed activation function once trained. Therefore, the same input integration will always produce the same output. This fails to replicate the amazing capability of individual biological neurons to conduct complex nonlinear mappings from inputs to outputs (Antic et al., 2010; Hassabis et al., 2017; Marblestone et al., 2016). In this study, we propose one new modification to ANN architectures by adding a new type of node, termed modulators, to modulate the activation sensitivity of the ANN nodes targeted by modulators (see Figures 1-3 for examples). In one possible setting, a modulator and its target ANN nodes share the same inputs. The modulator maps the input into a modulation signal, which is fed into each target node. Each target node multiples its input integration by the modulator signal prior to transformation by its traditional activation function. 
Examples of neuronal principles that may be captured by our new modification include intrinsic excitability, diverse firing modes, type 1 and type 2 forms of firing rate integration, activity-dependent facilitation and depression, and, most notably, neuromodulation (Marder et al., 1996; Sherman, 2001; Ward, 2003; Ringrose & Paro, 2004). Our modulator is related to the attention mechanism (Larochelle & Hinton, 2010; Mnih et al., 2014), which dynamically restricts information pathways and has been found to be very useful in practice. Attention mechanisms apply attention weights, which are calculated at run-time, to the outputs of ANN nodes or LSTM cells. Notably, the gating mechanism in a simple LSTM cell can also be viewed as a dynamic information modifier. A gate takes the input of the LSTM cell and outputs gating signals for filtering the outputs of its target ANN nodes in the same LSTM cell. A similar gating mechanism was proposed in the Gated Linear Unit (Dauphin et al., 2016) for CNNs. Unlike the attention and gating mechanisms, which are applied to the outputs of the target nodes, our modulation mechanism adjusts the sensitivities of the target ANN nodes at run-time by changing the slopes of the corresponding activation functions. Hence, the modulator can also be used as a complement to the attention and gating mechanisms. Below we explain our modulator mechanism in detail. Experimentation shows that the modulation mechanism can help achieve better test stability and higher test performance using easy-to-implement and significantly simpler models. Finally, we conclude the paper with a discussion of the relevance to the properties of actual neurons. 2 METHODS We designed two modulation mechanisms, one for CNNs and the other for LSTMs. In modulating CNNs, our modulator (see Figure 1) is layer-specific and is best compared to the biological phenomenon of neuromodulation. Each CNN layer has one modulator before activation, which shares the input x with the other CNN nodes in the same layer (Figure 1, left). The modulator (Figure 1, right) of the l-th CNN layer calculates a scalar modulation signal as s_l = τ_l(w_l^T x), where τ_l(·) is the activation function of the l-th modulator, and feeds s_l to every other CNN node in the same layer. The k-th modulated CNN node in the l-th layer linearly integrates its inputs as a traditional ANN node does, v_{l,k} = w_{l,k}^T x, and modulates the integration to get u_{l,k} = s_l · v_{l,k} prior to its traditional activation step φ_{l,k}(·). The final output is o_{l,k} = φ_{l,k}(τ_l(w_l^T x) · w_{l,k}^T x). The above modulation mechanism is slightly modified to extend to Densely Connected CNNs (Iandola et al., 2014) (see Figure 2). A modulator is added to each dense block layer to modulate the outputs of its convolution nodes. Given a specific input, the modulator outputs a scalar modulation signal that is multiplied with the scalar outputs of the target convolution nodes in the same layer. In addition to the Cell gate, there are three modifying gates (Forget, Input, and Output) in a traditional LSTM cell. Each gate is a full layer of ANN nodes. Each ANN node in a gate uses a sigmoid to transform the integration of the input into regulation signals. The traditional LSTM cell transforms the input integration into an intermediate output (i.e., C̃_t in Figure 3). The Forget gate regulates what is removed from the old cell state (i.e., C_{t−1} in Figure 3), and the Input gate regulates what in C̃_t is added to obtain the new cell state (i.e., C_t).
The new cell state is transformed and then regulated by the Output gate to become part of the input at the next time point. In the modulated LSTM (see Figure 3), for ease of implementation we create a new "modulation gate" (the round dashed rectangle in Figure 3) for node-specific sensitivity adjustment, which is most analogous to neuronal facilitation and depression. Unlike a conventional LSTM, which calculates C̃_t = φ(W_c[x_t, h_{t−1}]), a modulated LSTM calculates C̃_t = φ(τ(W_M[x_t, h_{t−1}]) · (W_c[x_t, h_{t−1}])). In the above designs, both the multi-layer CNN and the single-layer LSTM have multiple modulator nodes within each model. A generalization of the above designs is to allow a modulator to take the outputs from other CNN layers, or those of the LSTM cell at other time points, as its inputs. 3 EXPERIMENTAL RESULTS 3.1 MODULATED CNNS In our experiments with CNNs, the activation function of the traditional CNN nodes was ReLU, while our modulator nodes used a sigmoid. We tested six settings in total: a vanilla CNN vs. a modulated vanilla CNN, a vanilla DenseNet vs. a modulated DenseNet, and a vanilla DenseNet-lite vs. a modulated DenseNet-lite. The vanilla CNN has 2 convolution blocks, each of which contains two sequential convolution layers, a pooling layer, and a dropout layer. A fully connected layer of 512 nodes is appended at the very end of the model. The convolution layers in the first block have 32 filters with a size of 3x3, while the convolution layers in the second block have 64 filters with a size of 3x3. We apply a dropout of 0.25 to each block. The vanilla DenseNet uses the structure (40 in depth and 12 in growth-rate) reported in the original DenseNet paper (Iandola et al., 2014), and a dropout of 0.5 was used in our experiment. The vanilla DenseNet-lite has a similar structure to the vanilla DenseNet but uses a smaller growth-rate of 10 instead of the 12 in the original configuration, which results in 28% fewer parameters. The modulators are added to the vanilla CNN, the vanilla DenseNet, and the vanilla DenseNet-lite in the way described in Figures 1 and 2 to obtain their modulated versions, respectively. Table 1 summarizes the numbers of parameters in the above models to indicate their complexities. The modulated networks have slightly more parameters than their vanilla versions. All the experiments were run for 150 epochs on 4 NVIDIA Titan Xp GPUs with a mini-batch size of 128. The CIFAR-10 dataset (Krizhevsky & Hinton, 2009) was used in this experiment. CIFAR-10 consists of colored images at a resolution of 32x32 pixels. The training and test sets contain 50000 and 10000 images, respectively. We held out 20% of the training data for validation and applied data augmentation (shifting and mirroring) to the training data. All the CNN models were trained using the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 1e-3 that shrinks by a factor of 10 at 50% and 80% of the training progress. As shown in Figure 4, the vanilla CNN model begins to overfit after 80 training epochs. Although the modulated CNN model is slightly more complex, it is less prone to overfitting and exceeds its vanilla counterpart by a large margin (see Table 2). Modulation also significantly helps DenseNets in training, validation, and test. The modulated DenseNet/DenseNet-lite models consistently outperform their vanilla counterparts by a noticeable margin during training (see Figures 5(a) and 5(b)).
The validation and test results of the modulated DenseNet/DenseNet-lite models are also better than those of their vanilla counterparts. It is not surprising that the vanilla DenseNet-lite model underperforms the vanilla DenseNet model. Interestingly, despite having 28% fewer parameters than the vanilla DenseNet model, the modulated DenseNet-lite model outperforms the vanilla DenseNet model (see the dashed orange curve vs. the solid blue curve in Figure 5(b) and Table 2). 3.2 MODULATED LSTM Two datasets were used in the LSTM experiments. The first one is the NAMES dataset (Sean, 2016), in which the goal is to take a name as a string and classify its ethnicity or country of origin. Approximately 10% of the data-set was reserved for testing. The second experiment used the SST2 data-set (Socher et al., 2013), which requires a trained model to classify whether a movie review is positive or negative based on the raw text of the review. The SST2 is identical to the SST1 with the exception of the neutral category removed (Socher et al., 2013), leaving only positive and negative reviews. About 20% of the data-set was reserved for testing. Since modulators noticeably increase the number of parameters in a modulated LSTM, to perform fair comparisons we create three versions of vanilla LSTMs (see Controls 1, 2, & 3 in Figure 6). Control 1 has an identical total LSTM cell size. Control 2 has an identical number of nodes per layer. Control 3 has an extra Input gate so that it has both an identical total number of nodes and identical nodes per layer. The numbers of parameters in the modulated LSTM and the control LSTMs are listed in Table 3 for comparison. The hyper-parameters for the first experiment were set as follows: the hidden dimension was set to 32, the batch size to 32, the embedding dimension to 128, the initial learning rate to 0.01, and the learning rate decay to 1e-4; an SGD optimizer was used, with a dropout of 0.2 applied to the last hidden state of the LSTM, and 100 epochs were collected. This condition was tested on the name categorization data-set. The number of parameters in this model ranged from 4.1 K to 6.4 K, depending on the condition. We repeated the experimental runs 30 times. Based on the simplicity of the data-set and the relative sparsity of parameters, this condition will be referred to as Simple-LSTM. As for the second experiment: the hidden dimension was set to 150, the batch size to 5, the embedding dimension to 300, and the initial learning rate to 1e-3; there was no learning rate decay, an Adam optimizer was used with no dropout, and 100 epochs were collected. The number of parameters in this model ranged from 57.6 K to 90 K, depending on the control setup. This experiment was repeated 100 times. Based on the complexity of the data-set and the relatively large number of parameters, this condition will be referred to as Advanced-LSTM. In all experiments, the models were trained for 100 epochs. We can observe from the results in Table 4 that both modulated LSTMs outperformed all three control groups in mean test performance and achieved the highest validation performance. Statistical significance varied between the two LSTM models. In the Simple-LSTM (n = 30), with τ_l(·) set to sigmoid, statistical significance ranged between p < .06 (Control 3) and p < .001 (Control 2). In the Advanced-LSTM (n = 100), with τ_l(·) set to tanhshrink, statistical significance was consistently p < .001 in all conditions. In all cases, variance was lowest in the modulated condition.
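For concreteness, the candidate computation of the modulated LSTM compared in these experiments, C̃_t = φ(τ(W_M[x_t, h_{t−1}]) · W_c[x_t, h_{t−1}]), can be sketched as follows. This is a minimal sketch assuming a PyTorch implementation; the class and gate names are ours, not the authors' code, and τ is set to tanhshrink here as in the Advanced-LSTM runs (the Simple-LSTM runs used a sigmoid).

```python
# Minimal sketch (assumed PyTorch) of an LSTM cell with the extra "modulation
# gate" applied to the candidate cell state, as described in the Methods
# section. Names and layout are illustrative, not the authors' code.
import torch
import torch.nn as nn


class ModulatedLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        cat = input_size + hidden_size
        self.forget_gate = nn.Linear(cat, hidden_size)
        self.input_gate = nn.Linear(cat, hidden_size)
        self.output_gate = nn.Linear(cat, hidden_size)
        self.cell_gate = nn.Linear(cat, hidden_size)  # W_c
        self.mod_gate = nn.Linear(cat, hidden_size)   # W_M, the new modulation gate
        self.tau = nn.Tanhshrink()                    # tau(.); sigmoid in the Simple-LSTM runs

    def forward(self, x, state):
        h, c = state
        z = torch.cat([x, h], dim=-1)                 # [x_t, h_{t-1}]
        f = torch.sigmoid(self.forget_gate(z))
        i = torch.sigmoid(self.input_gate(z))
        o = torch.sigmoid(self.output_gate(z))
        # Modulated candidate: C~_t = tanh(tau(W_M z) * (W_c z))
        c_tilde = torch.tanh(self.tau(self.mod_gate(z)) * self.cell_gate(z))
        c_new = f * c + i * c_tilde
        h_new = o * torch.tanh(c_new)
        return h_new, (h_new, c_new)


# Usage (shapes only): batch of 5, embedding size 300, hidden size 150.
# cell = ModulatedLSTMCell(300, 150)
# h0 = c0 = torch.zeros(5, 150)
# h1, (h1, c1) = cell(torch.randn(5, 300), (h0, c0))
```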
We further zoomed in on the activation data-flow and visualized the effect of our modulation in Table 3.2. The control condition and the modulated condition were compared side by side. On the left, we can observe the impact of the Input gate on the amplitude of the tanh activation function; on the right, we can observe that our modulation adjusts the slope as well. Each input generates a context-dependent activation shown as a continuous line, and specific activations are represented by the blue dots, each corresponding to a point on a specific line. Our modulation modification provides new aptitudes for the model to learn and generalize, and appears to add a stabilizing feature to the dynamic input-output relationship. 4 CONCLUSION We propose adding a modulation mechanism to traditional ANNs so that the shape of the activation function can be context dependent. Experimental results show that the modulated models consistently outperform their original versions. Our experiments also suggest that adding a modulator can reduce overfitting. We demonstrated that, even with fewer parameters, a modulated model can still perform on par with a bigger vanilla version. This modulation idea can also be extended to other settings, such as different modulator activation functions or different structures inside the modulator. 5 DISCUSSION It was frequently observed in preliminary testing that arbitrarily increasing model parameters actually hurt network performance, so future studies will be aimed at investigating the relationship between the number of model parameters and the performance of the network. Additionally, it will be important to determine the interaction between specific network implementations and the ideal activation function wrapping for slope-determining neurons. Lastly, it may be useful to investigate layer-wide single-node modulation on models with parallel LSTMs. Epigenetics refers to the activation and inactivation of genes (Weinhold, 2006), often as a result of environmental factors. These changes in gene expression result in modifications to the generation and regulation of cellular proteins, such as ion channels, that regulate how the cell controls the flow of current through the cell membrane (Meadows et al., 2016). The modulation of these proteins will strongly influence the tendency of a neuron to fire and hence affect the neuron's function as a single computational node. These proteins, in turn, can influence epigenetic expression in the form of dynamic control (Kawasaki et al., 2004). Regarding the effects of these signals, we can compare the output of neurons and nodes from a variety of perspectives. First and foremost, intrinsic excitability refers to the ease with which a neuron's electrical potential can increase, and this feature has been found to impact plasticity itself (Desai et al., 1999). From this view, the output of a node in an artificial neural network would correspond to a neuron's firing rate, to which intrinsic excitability is a large contributor, and our extra gate would be setting the node's intrinsic excitability. With the analogy of firing rate, another phenomenon can be considered. Neurons may experience various modes of information integration, typically labeled Type 1 and Type 2. Type 1 refers to continuous firing rate integration, while Type 2 refers to discontinuous integration (Tateno et al., 2004).
This is computationally explained as a function of interneuron communication resulting in neuron-activity nullclines with either heavy overlap or discontinuous saddle points (Miller, 2016). In biology, a neuron may switch between Type 1 and Type 2 depending on the presence of a neuromodulator (Stiefel & Gutkin, 2012). By controlling the degree to which the tanh function encodes to a binary space, our modification may be conceived of as determining the form of information integration. The final possible firing-rate equivalence refers to the ability of real neurons to switch between different firing modes. While the common firing mode, tonic firing, generally encodes information in rate frequency, neurons in a bursting mode (though there are many types of bursts) tend to encode information in a binary mode: either firing bursts or not (Tateno et al., 2004). Here too, our modification encompasses a biological phenomenon by enabling the switch between binary and continuous information. Another analogy to an ANN node's output would be the neurotransmitter released. With this view, our modification is best expressed as an analogy to activity-dependent facilitation and depression, phenomena which cause neurons to release either more or less neurotransmitter. Facilitation and depression occur in response to the same input: past activity (Reyes et al., 1998). Our modification enables a network to use previous activity to determine its current sensitivity to input, allowing for both facilitation and depression. On the topic of neurotransmitter release, neuromodulation is the most relevant topic to the previously shown experiments. Once again, Marblestone et al. (2016) explain the situation perfectly, expressing that research (Bargmann, 2012; Bargmann & Marder, 2013) has shown that "the same neuron or circuit can exhibit different input-output responses depending on a global circuit state, as reflected by the concentrations of various neuromodulators". Relating to our modification, the slope of the activation function may be conceptualized as the mechanism of neuromodulation, with the new gate acting analogously to a source of neuromodulator for all nodes in the network. Returning to a machine learning perspective, the ability to adjust the slope of an activation function has an immediate benefit in making the back-propagation gradient dynamic. For example, for activations near 0, where the tanh function gradient is largest, the effect of our modification on node output is minimal. However, at this point, our modification has the ability to decrease the gradient, perhaps acting as a pseudo-learning-rate. On the other hand, at activations near 1 and -1, where the tanh function gradient approaches 0, our modification causes the gradient to reappear, allowing information to be extracted from inputs outside of the standard range. Additionally, by implementing a slope that is conditional on node input, the node has the ability to generate a wide range of functional activation functions, including asymmetric ones. Lastly, injecting noise has been found to help deep neural networks with noisy datasets (Zheng et al., 2016), which is noteworthy since noise may act as a stabilizer for neuronal firing rates (Touboul et al., 2012). With this in mind, Table 3.2 demonstrates increased clustering in two-dimensional node-activation space when the activation function slope is made dynamic.
This indicates that noise may be a mediator of our modification, improving network performance through stabilization induced by increasing the variability of the input-output relationship. In summary, we have shown evidence that nodes in LSTMs and CNNs benefit from added complexity in their input-output dynamics. Specifically, having a node that adjusts the slope of the main layer's nodes' activation functions mimics the functionality of neuromodulators and is shown to benefit the network. The exact mechanism by which this modification improves network performance remains unknown, yet it is possible to support this approach from both a neuroscientific and a machine-learning perspective. We believe this demonstrates the need for further research into discovering novel, computationally undemanding methods of applying principles of neuroscience to artificial networks. 6 APPENDIX 6.1 SUPPLEMENTARY DATA METHODOLOGY Additionally, we tested our modulator gate, with τ_l(·) set to sigmoid, on a much more computationally demanding three-layer LSTM network with the weight-drop method, named awd-lstm-lm (Merity et al., 2017; 2018). This model was equipped to handle the Penn-Treebank dataset (Marcus et al., 1993) and was trained to minimize word perplexity. The network was trained for 500 epochs; however, the sample size was limited due to extremely long training times. 6.2 SUPPLEMENTARY DATA RESULTS On the Penn-Treebank dataset with the awd-lstm-lm implementation, the sample size was restricted to 2 per condition due to long training times and limited resources. However, on the data collected, our model outperformed the template perplexity, achieving an average of 58.4730 compared to the template average of 58.7115. Due to the lack of a control for model parameters, interpretation of these results rests on the assumption that the original author fine-tuned the network parameters such that the template parameters maximized performance. 7 SUPPLEMENTARY DATA FIGURES & TABLES 7.1 AWD-LSTM-LM ON PENN-TREEBANK Table 7: Comparison of mean test perplexities (lower = better). Model: awd-lstm-lm on Penn-Treebank; Epochs: 500; Modulated: 58.4730; Control: 58.7115; Statistical analysis: T = 1.842, DOF = 1.9, Hedges's g = 1.853. Figure 7: Validation perplexity progress (lower = better). 7.2 SUPPLEMENTAL LSTM DATA
1. What is the main contribution of the paper, and how does it relate to previous work in attention and gating mechanisms? 2. How does the proposed approach perform compared to baselines, and what are the limitations of the experimental design? 3. What are the strengths and weaknesses of the paper's writing and presentation? 4. Are there any typos or formatting issues in the review?
Review
Review Summary: This paper introduces an architectural change for basic neurons in a neural network. Assuming a "neuron" consists of a linear combination of the input, followed by a non-linear activation function, the idea is to multiply the output of the linear combination by a "modulator", prior to feeding it into the activation function. The modulator is itself a non-linear function of the input. Furthermore, in the paper's implementation, the modulator weights are shared across each layer. The idea is demonstrated on basic vision and NLP tasks, showing improvements over the baselines. I - On the substance: 1. Related concepts and biological inspirations The idea is analogous to attention and gating mechanisms, as the authors point out, with the clear distinction that the modulation happens _before_ the activation function. It would have been interesting to experiment with a combination of modulation and attention, since they do not act on the same levels. Also, the authors claim inspiration from biological neurons; however, they do not elaborate in depth on the connections to the neuronal concepts mentioned in the introduction. 2. The performance of the proposed approach In the first experiment, the modulated CNN at 150 epochs seems to have comparable performance with the vanilla CNN at 60 (the latter CNN starts overfitting afterwards). Why not extend the learning curve to more epochs, since the modulated CNN seems to be on a positive slope? The other experiments show some improvements over the baselines; however, more experiments are necessary to claim generality. In particular, the baselines remain too simple, and there are well-known, well-performing architectures for both image and text processing that the authors could compare to (cf. winning architectures for ImageNet, for instance). They could also take these same architectures and augment them with the modulation proposed in the paper. Furthermore, an ablation study is clearly missing: what about different activation functions, combinations with other optimization techniques, etc.? II - On the form: 1. The paper is sometimes unclear, even though the overall narrative is sound. 2. Wiggly red lines are still present in the caption of Figure 1 (right). 3. Figure 6 could be greatly simplified by putting its content in the form of a table; I don't find that the rectangles and shapes bring much benefit here. 4. Table 5 (should it not be a Figure?): it is not fully clear what the lines represent and based on which input. 5. Some typos: - abstract: a biological neuron change[s] - abstract: accordingly to -> according to - introduction > paragraph 2 > line 11: Each target node multipl[i]es III - Conclusion: The idea is interesting and some of the experiments show nice results (e.g., the modulated DenseNet-lite outperforming DenseNet), but the overall paper needs further improvements. In particular, the writing needs to be reworked, the experiments to be consolidated, and the link to neuronal modulation to be further investigated.
1. What is the main contribution of the paper, and how does it differ from existing neural network architectures? 2. How does the proposed approach modulate activation functions, and why is this beneficial for supervised learning tasks? 3. What are the strengths and weaknesses of the paper's evaluation and results? 4. How do the proposed modulators compare to other methods of modulating excitability in ANNs, such as changing the firing threshold or using multiplicative interactions? 5. Are there any limitations or potential drawbacks to the proposed approach, such as increased complexity or the need for specific training procedures? 6. How might the proposed approach be extended or adapted for use in other contexts, such as unsupervised learning or reinforcement learning?
Review
Review Summary: This submission proposes a modification of neural network architectures that allows the modulation of activation functions of a given layer as a function of the activations in the previous layer. The authors provide different versions of their approach adapted to CNNs, DenseNets, and LSTMs, and show that it outperforms a vanilla version of these algorithms. Evaluation: In the classical context of the supervised learning tasks investigated in this submission, it is unclear to me what the benefit of introducing such "modulators" could be, as vanilla ANNs already have the capability of modulating the excitability of their neurons. Although the results show significant, but quite limited, improvements with respect to the chosen baseline, more extensive baseline comparisons are needed. Detailed comments: 1. Underlying principles of the approach It is unclear to me why the proposed approach should bring a significant improvement to the existing architectures. First, from a neuroscientific perspective, neuromodulators allow the brain to go through different states, including arousal, sleep, and different levels of stress. While it is relatively clear that state modulation has some benefits for a living system, it is less so for an ANN focused on a supervised learning task. Why should the state change instead of focusing on the optimal way to perform the task? If the authors want to use a neuroscientific argument, I would suggest elaborating based on the precise context of the tasks they propose to solve. In addition, as mentioned several times in the paper, neuromodulation is frequently associated with changes in cell excitability. While excitability is a concept that can be associated with multiple mechanisms, a simple way to model changes in excitability is to modify the threshold that must be reached by the membrane potential of a given neuron in order for the cell to fire. Such a simple change in excitability can easily be implemented in ANN architectures by dedicating one afferent neuron in the previous layer to the modification of this firing threshold (simply adding a bias term). As a consequence, if there is any benefit to the proposed architecture, it is very likely to originate specifically from the multiplicative interactions used to implement modulation in this paper. However, approximations of such multiplicative interactions can also be implemented using multi-layer networks equipped with non-linear activations. Overall, it would be good to discuss these aspects in great detail in the introduction and/or discussion of the paper, and possibly find a more convincing justification for the approach. 2. Weak baseline comparison results In the CNN experiments, modulated networks are only compared with a single vanilla counterpart equipped with ReLU. There are at least two obvious additional baseline comparisons that would be useful: what if the ReLU activations are replaced with fixed sigmoids? And what if batch normalization is switched on/off (I could not find whether it was used at all)? Indeed, we should exclude benefits that are simply due to the non-linearity of the sigmoid, and batch normalization also implements a form of modulation during training that may provide benefits equivalent to modulation (or, on the contrary, batch norm could implement modulation in the wrong way). It would be better to look at all possible combinations of these architecture choices.
Due to the lack of details in the paper and my personal lack of expertise in LSTMs, I will not comment on baselines for that part, but I assume similar modifications can be made. Overall, given the weak improvements in performance, it is questionable whether this extra degree of complexity should be added to the architecture. Additionally, I could not find a precise description of the statistical tests performed. Ideally, the test, the number of samples, the exact p-value, and the method of correction for multiple comparisons (if any) should be included each time a p-value is mentioned.
ICLR
Title Context Dependent Modulation of Activation Function Abstract We propose a modification to traditional Artificial Neural Networks (ANNs), which provides the ANNs with new aptitudes motivated by biological neurons. Biological neurons work far beyond linearly summing up synaptic inputs and then transforming the integrated information. A biological neuron change firing modes accordingly to peripheral factors (e.g., neuromodulators) as well as intrinsic ones. Our modification connects a new type of ANN nodes, which mimic the function of biological neuromodulators and are termed modulators, to enable other traditional ANN nodes to adjust their activation sensitivities in run-time based on their input patterns. In this manner, we enable the slope of the activation function to be context dependent. This modification produces statistically significant improvements in comparison with traditional ANN nodes in the context of Convolutional Neural Networks and Long Short-Term Memory networks. 1 INTRODUCTION Artificial neural networks (ANNs), such as convolutional neural networks (CNNs) (LeCun et al., 1998) and long short-term memory (LSTM) cells (Hochreiter & Schmidhuber, 1997), have incredible capabilities and are applied in a variety of applications including computer vision, natural language analysis, and speech recognition among others. Historically, the development of ANNs (e.g., network architectures and learning algorithms) has benefited significantly from collaborations with Psych-Neuro communities (Churchland & Sejnowski, 1988; Hebb, 1949; Hinton et al., 1984; Hopfield, 1982; McCulloch & Pitts, 1943; Turing, 1950; Hassabis et al., 2017; Elman, 1990; Hopfield & Tank, 1986; Jordan, 1997; Hassabis et al., 2017). The information processing capabilities of traditional ANN nodes are rather rigid when compared to the plasticity of real neurons. A typical traditional ANN node linearly integrate its input signals and run the integration through a transformation called an activation function, which simply takes in a scalar value and outputs another. Of the most popular Activation Functions are sigmoid (Mikolov et al., 2010), tanh (Kalman & Kwasny, 1992) and ReLU (Nair & Hinton, 2010). Researchers have shown that it could be beneficial to deploy layer-/node- specific activation functions in a deep ANN (Chen & Chang, 1996; Solazzi & Uncini, 2000; Goh & Mandic, 2003; He et al., 2015; Agostinelli et al., 2014). However, each ANN node is traditionally stuck with a fixed activation function once trained. Therefore, the same input integration will always produce the same output. This fails to replicate the amazing capability of individual biological neurons to conduct complex nonlinear mappings from inputs to outputs (Antic et al., 2010; Hassabis et al., 2017; Marblestone et al., 2016). In this study, we propose one new modification to ANN architectures by adding a new type of node, termed modulators, to modulate the activation sensitivity of the ANN nodes targeted by modulators (see Figures 1-3 for examples). In one possible setting, a modulator and its target ANN nodes share the same inputs. The modulator maps the input into a modulation signal, which is fed into each target node. Each target node multiples its input integration by the modulator signal prior to transformation by its traditional activation function. 
Examples of neuronal principles that may be captured by our new modification include intrinsic excitability, diverse firing modes, type 1 and type 2 forms of firing rate integration, activity dependent facilitation and depression and, most notably, neuromodulation (Marder et al., 1996; Sherman, 2001; Ward, 2003; Ringrose & Paro, 2004). Our modulator is relevant to the attention mechanism (Larochelle & Hinton, 2010; Mnih et al., 2014), which dynamically restricts information pathways and has been found to be very useful in practice. Attention mechanisms apply the attention weights, which are calculated in run-time, to the outputs of ANN nodes or LSTM cells. Notably, the gating mechanism in a Simple LSTM cell can also be viewed as a dynamical information modifier. A gate takes the input of the LSTM cell and outputs gating signals for filtering the outputs of its target ANN nodes in the same LSTM cell. A similar gating mechanism was proposed in the Gated Linear Unit (Dauphin et al., 2016) for CNNs. Different from the attention and gating mechanisms, which are applied to the outputs of the target nodes, our modulation mechanism adjusts the sensitivities of the target ANN nodes in run-time by changing the slopes of the corresponding activation functions. Hence, the modulator can also be used as a complement to the attention and gate mechanisms. Below we will explain our modulator mechanism in detail. Experimentation shows that the modulation mechanism can help achieve better test stability and higher test performance using easy to implement and significantly simpler models. Finally, we conclude the paper with discussions on the relevance to the properties of actual neurons. 2 METHODS We designed two modulation mechanisms, one for CNNs and the other for LSTMs. In modulating CNNs, our modulator (see Figure 1) is a layer-specific one that is best compared to the biological phenomenon of neuromodulation. Each CNN layer before activation has one modulator, which shares the input ~x with other CNN nodes in the same layer (Figure 1Left). The modulator (Figure 1Right) of the lth CNN layer calculates a scalar modulation signal as sl = τl(~wTl ~x), where τl(·) is the activation function of the lth modulator, and feeds sl to every other CNN node in the same layer. The kth modulated CNN node in the lth layer linearly integrates its inputs as a traditional ANN nodes vl,k = ~wTl,k~x and modulates the integration to get ul,k = sl · vl,k prior to its traditional activation step ϕl,k(·). The final output is ol,k = ϕl,k(τl(~wTl ~x) · ~wTl,k~x). The above modulation mechanism is slightly modified to expand Densely Connected CNNs (Iandola et al., 2014)(see Figure 2). A modulator is added to each dense block layer to modulate the outputs of its convolution nodes. Given a specific input, the modulator outputs a scalar modulation signal that is multiplied to the scalar outputs of the target convolution nodes in the same layer. In addition to the Cellgate, there are three modifying gates (Forget, Input, and Output) in a traditional LSTM cell. Each gate is a full layer of ANN nodes. Each of ANN node in a gate uses sigmoid to transform the integration of the input into regulation signals. The traditional LSTM cell transforms the input integration to an intermediate output (i.e., C̃t in Figure 3). The Forget gate regulates what is removed from the old cell state (i.e., t−1 in Figure 3), and the Input gate what in C̃t is added to obtain the new cell state (i.e., t). 
The new cell state is transformed and then regulated by the output gate to become part of the input at the next time point. In the modulated LSTM (see Figure 3), for the purpose of easier implementation, we create a new "modulation gate" (the round dashed rectangle in Figure 3) for node-specific sensitivity adjustment, which is most analogous to neuronal facilitation and depression. Different from a conventional LSTM that calculates $\tilde{C}_t = \phi(W_c[\vec{x}_t, \vec{h}_{t-1}])$, a modulated LSTM calculates $\tilde{C}_t = \phi(\tau(W_M[\vec{x}_t, \vec{h}_{t-1}]) \cdot (W_c[\vec{x}_t, \vec{h}_{t-1}]))$. In the above designs, both the multi-layer CNN and the single-layer LSTM had multiple modulator nodes within each model. A generalization of the above designs is to allow a modulator to take the outputs from other CNN layers, or those of the LSTM cell at other time points, as its inputs. 3 EXPERIMENTAL RESULTS 3.1 MODULATED CNNS In our experiments with CNNs, the activation functions of the traditional CNN nodes were ReLU, with our modulator nodes using a sigmoid. We tested six total settings: a vanilla CNN vs. a modulated vanilla CNN, a vanilla DenseNet vs. a modulated DenseNet, and a vanilla DenseNet-lite vs. a modulated DenseNet-lite. The vanilla CNN has 2 convolution blocks, each of which contains two sequential convolution layers, a pooling layer, and a dropout layer. A fully connected layer of 512 nodes is appended at the very end of the model. The convolution layers in the first block have 32 filters with a size of 3x3 while the convolution layers in the second block have 64 filters with a size of 3x3. We apply a dropout of 0.25 to each block. The vanilla DenseNet used the structure (40 in depth and 12 in growth-rate) reported in the original DenseNet paper (Iandola et al., 2014), and a dropout of 0.5 is used in our experiment. The vanilla DenseNet-lite has a similar structure to the vanilla DenseNet; however, it uses a smaller growth rate of 10 instead of the 12 in the original configuration, which results in 28% fewer parameters. The modulators are added to the vanilla CNN, the vanilla DenseNet, and the vanilla DenseNet-lite in the way described in Figures 1 and 2 to obtain their modulated versions, respectively. Table 1 summarizes the numbers of parameters in the above models to indicate their complexities. The modulated networks have slightly more parameters than their vanilla versions do. All the experiments were run for 150 epochs on 4 NVIDIA Titan Xp GPUs with a mini-batch size of 128. The CIFAR-10 dataset (Krizhevsky & Hinton, 2009) was used in this experiment. CIFAR-10 consists of colored images at a resolution of 32x32 pixels. The training and test sets contain 50000 and 10000 images, respectively. We held out 20% of the training data for validation and applied data augmentation of shifting and mirroring to the training data. All the CNN models are trained using the Adam (Kingma & Ba, 2014) optimization method with a learning rate of 1e-3, which shrinks by a factor of 10 at 50% and 80% of the training progress. As shown in Figure 4, the vanilla CNN model begins to overfit after 80 training epochs. Although the modulated CNN model is slightly more complex, it is less prone to overfitting and outperforms its vanilla counterpart by a large margin (see Table 2). Modulation also significantly helps DenseNets in training, validation, and test. The modulated DenseNet/DenseNet-lite models consistently outperform their vanilla counterparts by a noticeable margin (see Figures 5(a) and 5(b)) during training.
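A minimal sketch of the modulated cell-candidate computation described in Section 2, written in PyTorch-style Python, is given below. This is not the authors' implementation; the choice of tanh for phi and sigmoid for tau, and all names, are illustrative assumptions, and the remaining LSTM gates are left unchanged.

import torch
import torch.nn as nn

class ModulatedCellCandidate(nn.Module):
    """Computes the modulated LSTM cell candidate C~_t from [x_t, h_{t-1}]
    (a sketch of the mechanism, not the authors' code)."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.W_c = nn.Linear(input_size + hidden_size, hidden_size)  # conventional candidate weights
        self.W_M = nn.Linear(input_size + hidden_size, hidden_size)  # modulation-gate weights

    def forward(self, x_t, h_prev):
        z = torch.cat([x_t, h_prev], dim=-1)
        modulation = torch.sigmoid(self.W_M(z))    # tau(W_M [x_t, h_{t-1}])
        candidate = self.W_c(z)                    # W_c [x_t, h_{t-1}]
        return torch.tanh(modulation * candidate)  # phi applied to the modulated integration

Only the slope of the candidate's nonlinearity is effectively made input-dependent; the Forget, Input, and Output gates of the cell are computed as in a conventional LSTM.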
The validation and test results of the modulated DenseNet/DenseNet-lite models are also better than those of their vanilla counterparts. It is not surprising that the vanilla DenseNet-lite model underperforms the vanilla DenseNet model. Interestingly, despite having 28% fewer parameters than the vanilla DenseNet model, the modulated DenseNet-lite model outperforms the vanilla DenseNet model (see the dashed orange curve vs. the solid blue curve in Figure 5(b) and Table 2). 3.2 MODULATED LSTM Two datasets were used in the LSTM experiments. The first one is the NAMES dataset (Sean, 2016), in which the goal is to take a name as a string and classify its ethnicity or country of origin. Approximately 10% of the dataset was reserved for testing. The second experiment used the SST2 dataset (Socher et al., 2013), which requires a trained model to classify whether a movie review is positive or negative based on the raw text in the review. SST2 is identical to SST1 with the exception of the neutral category removed (Socher et al., 2013), leaving only positive and negative reviews. About 20% of the dataset was reserved for testing. Since modulators noticeably increase the parameters in a modulated LSTM, to perform fair comparisons, we create three versions of vanilla LSTMs (see Controls 1, 2, & 3 in Figure 6). Control 1 has an identical total LSTM cell size. Control 2 has an identical number of nodes per layer. Control 3 has an extra Input gate so that it has both an identical total number of nodes and identical nodes per layer. The numbers of parameters in the modulated LSTM and control LSTMs are listed in Table 3 for comparison. The hyper-parameters for the first experiment were set as follows: the hidden dimension was set to 32, the batch size to 32, the embedding dimension to 128, the initial learning rate to 0.01, and the learning rate decay to 1e-4; an SGD optimizer was used, with dropout of 0.2 applied to the last hidden state of the LSTM, and 100 epochs were collected. This condition was tested on the name categorization dataset. The number of parameters in this model ranged from 4.1 K to 6.4 K, depending on the condition. We repeated the experimental runs 30 times. Based on the simplicity of the dataset and the relative sparsity of parameters, this condition will be referred to as Simple-LSTM. As for the second experiment: the hidden dimension was set to 150, the batch size was set to 5, the embedding dimension was set to 300, the initial learning rate was set to 1e-3, there was no learning rate decay, an Adam optimizer was used with no dropout, and 100 epochs were collected. The number of parameters in this model ranged from 57.6 K to 90 K, depending on the control setup. This experiment was repeated 100 times. Based on the complexity of the dataset and the relatively large number of parameters, this condition will be referred to as Advanced-LSTM. In all experiments, the models were trained for 100 epochs. We can observe from the results in Table 4 that the mean test performance of both modulated LSTMs exceeded that of all three control groups, and the modulated models achieved the highest validation performance. Statistical significance varied between the two LSTM models. In the Simple-LSTM (n = 30), with $\tau_l(\cdot)$ set to sigmoid, statistical significance ranged between p < .06 (Control 3) and p < .001 (Control 2). In the Advanced-LSTM (n = 100), with $\tau_l(\cdot)$ set to tanhshrink, statistical significance was consistently p < .001 in all conditions. In all cases, variance was lowest in the modulated condition.
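The paper does not state which statistical test produced the reported p-values, so the following is only a plausible sketch (Python with NumPy and SciPy) of how repeated-run test accuracies of a modulated model and a control could be compared, using a Welch t-test together with the Hedges's G effect size reported in the appendix; all names are assumptions.

import numpy as np
from scipy import stats

def compare_runs(modulated_acc, control_acc):
    """Compare repeated-run test accuracies of a modulated model and a control
    (a sketch; the exact test used in the paper is not specified)."""
    modulated_acc = np.asarray(modulated_acc, dtype=float)
    control_acc = np.asarray(control_acc, dtype=float)
    t_stat, p_value = stats.ttest_ind(modulated_acc, control_acc, equal_var=False)

    # Hedges's g: pooled-SD effect size with a small-sample bias correction.
    n1, n2 = len(modulated_acc), len(control_acc)
    pooled_sd = np.sqrt(((n1 - 1) * modulated_acc.var(ddof=1)
                         + (n2 - 1) * control_acc.var(ddof=1)) / (n1 + n2 - 2))
    g = (modulated_acc.mean() - control_acc.mean()) / pooled_sd
    g *= 1 - 3 / (4 * (n1 + n2) - 9)  # bias correction factor
    return t_stat, p_value, g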
We further zoom in on the activation data flow and visualize the effect of our modulation in Table 3.2. The control condition and the modulated condition are compared side by side. On the left, we can observe the impact of the Input gate on the amplitude of the tanh activation function; on the right, we can observe that our modulation adjusts the slope as well. Each input generates a context-dependent activation, shown as continuous lines, and specific activations are represented by the blue dots, each corresponding to a point on a specific line. Our modulation modification provides new aptitudes for the model to learn and generalize, and appears to add a stabilizing feature to the dynamic input-output relationship. 4 CONCLUSION We propose a modulation mechanism for traditional ANNs so that the shape of the activation function can be context dependent. Experimental results show that the modulated models consistently outperform their original versions. Our experiments also suggest that adding a modulator can reduce overfitting. We demonstrated that, even with fewer parameters, a modulated model can still perform on par with a vanilla version of a bigger size. This modulation idea can also be expanded to other settings, such as different modulator activations or different structures inside the modulator. 5 DISCUSSION It was frequently observed in preliminary testing that arbitrarily increasing model parameters actually hurt network performance, so future studies will be aimed at investigating the relationship between the number of model parameters and the performance of the network. Additionally, it will be important to determine the interaction between specific network implementations and the ideal activation function wrapping for slope-determining neurons. Lastly, it may be useful to investigate layer-wide single-node modulation on models with parallel LSTMs. Epigenetics refers to the activation and inactivation of genes (Weinhold, 2006), often as a result of environmental factors. These changes in gene expression result in modifications to the generation and regulation of cellular proteins, such as ion channels, that regulate how the cell controls the flow of current through the cell membrane (Meadows et al., 2016). The modulation of these proteins will strongly influence the tendency of a neuron to fire and hence affect the neuron's function as a single computational node. These proteins, in turn, can influence epigenetic expression in the form of dynamic control (Kawasaki et al., 2004). Regarding the effects of these signals, we can compare the output of neurons and nodes from a variety of perspectives. First and foremost, intrinsic excitability refers to the ease with which a neuron's electrical potential can increase, and this feature has been found to impact plasticity itself (Desai et al., 1999). From this view, the output of a node in an artificial neural network would correspond to a neuron's firing rate, to which intrinsic excitability is a large contributor, and our extra gate would be setting the node's intrinsic excitability. With the analogy of firing rate, another phenomenon can be considered. Neurons may experience various modes of information integration, typically labeled Type 1 and Type 2. Type 1 refers to continuous firing rate integration, while Type 2 refers to discontinuous information (Tateno et al., 2004).
This is computationally explained as a function of interneuron communication resulting in neuron-activity nullclines with either heavy overlap or discontinuous saddle points (Miller, 2016). In biology, a neuron may switch between Type 1 and Type 2 depending on the presence of a neuromodulator (Stiefel & Gutkin, 2012). By controlling the degree to which the tanh function encodes to a binary space, our modification may be conceived of as determining the form of information integration. The final possible firing rate equivalence refers to the ability of real neurons to switch between different firing modes. While the common mode of firing, tonic firing, generally encodes information in rate frequency, neurons in a bursting mode (though there are many types of bursts) tend to encode information in a binary mode - either firing bursts or not (Tateno et al., 2004). Here too, our modification encompasses a biological phenomenon by enabling the switch between binary and continuous information. Another analogy to an ANN node's output would be the neurotransmitter released. With this view, our modification is best expressed as an analogy to activity-dependent facilitation and depression, phenomena which cause neurons to release either more or less neurotransmitter. Facilitation and depression occur in response to the same input: past activity (Reyes et al., 1998). Our modification enables a network to use previous activity to determine its current sensitivity to input, allowing for both facilitation and depression. On the topic of neurotransmitter release, neuromodulation is the most relevant topic to the previously shown experiments. Once again, Marblestone et al. (2016) explain the situation perfectly, expressing that research (Bargmann, 2012; Bargmann & Marder, 2013) has shown that "the same neuron or circuit can exhibit different input-output responses depending on a global circuit state, as reflected by the concentrations of various neuromodulators". Relating to our modification, the slope of the activation function may be conceptualized as the mechanism of neuromodulation, with the new gate acting analogously to a source of neuromodulator for all nodes in the network. Returning to a machine learning approach, the ability to adjust the slope of an activation function has an immediate benefit in making the back-propagation gradient dynamic. For example, for activations near 0, where the tanh gradient is largest, the effect of our modification on node output is minimal. However, at this point, our modification has the ability to decrease the gradient, perhaps acting as a pseudo-learning-rate. On the other hand, at activations near 1 and -1, where the tanh gradient approaches 0, our modification causes the gradient to reappear, allowing information to be extracted from inputs outside of the standard range. Additionally, by implementing a slope that is conditional on node input, the node has the ability to generate a wide range of functional activation functions, including asymmetric functions. Lastly, injecting noise has been found to help deep neural networks with noisy datasets (Zheng et al., 2016), which is noteworthy since noise may act as a stabilizer for neuronal firing rates (Touboul et al., 2012). With this in mind, Table 3.2 demonstrates increased clustering in two-dimensional node-activation space when the activation function slope is made dynamic.
This indicates that noise may be a mediator of our modification, improving network performance through stabilization induced by increasing the variability of the input-output relationship. In summary, we have shown evidence that nodes in LSTMs and CNNs benefit from added complexity to their input-output dynamic. Specifically, having a node that adjusts the slope of the main layer's nodes' activation functions mimics the functionality of neuromodulators and is shown to benefit the network. The exact mechanism by which this modification improves network performance remains unknown, yet it is possible to support this approach from both a neuroscientific and a machine-learning perspective. We believe this demonstrates the need for further research into discovering novel, non-computationally-demanding methods of applying principles of neuroscience to artificial networks. 6 APPENDIX 6.1 SUPPLEMENTARY DATA METHODOLOGY Additionally, we tested our modulator gate, with $\tau_l(\cdot)$ set to sigmoid, on a much more computationally demanding three-layered LSTM network with the weight-drop method, named awd-lstm-lm (Merity et al., 2017; 2018). This model was equipped to handle the Penn-Treebank dataset (Marcus et al., 1993) and was trained to minimize word perplexity. The network was trained for 500 epochs; however, the sample size was limited due to extremely long training times. 6.2 SUPPLEMENTARY DATA RESULTS On the Penn-Treebank dataset with the awd-lstm-lm implementation, the sample size was restricted to 2 per condition due to long training times and limited resources. However, on the data collected, our model outperformed the template perplexity, achieving an average of 58.4730 compared to the template average of 58.7115. Due to the lack of a control for model parameters, interpretation of these results rests on the assumption that the author fine-tuned network parameters such that the template parameters maximized performance. 7 SUPPLEMENTARY DATA FIGURES & TABLES 7.1 AWD-LSTM-LM ON PENN-TREEBANK
Table 7: Comparison of mean test perplexities (lower = better).
Model: awd-lstm-lm on Penn-Treebank | Epochs: 500 | Modulated: 58.4730 | Control: 58.7115 | Statistical analysis: T = 1.842, DOF = 1.9, Hedges's G = 1.853
Figure 7: Validation perplexity progress (lower = better). 7.2 SUPPLEMENTAL LSTM DATA
1. What is the focus and contribution of the paper regarding scalar modulators? 2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to control the sensitivity of hidden nodes? 3. Do you have any concerns or suggestions regarding the experimental results and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review This paper proposes a scalar modulator added to hidden nodes before an activation function. The authors claim that it controls the sensitivity of the hidden nodes by changing the slope of the activation function. The modulator is combined with a simple CNN, DenseNet, and an LSTM model, and the authors report performance improvements over the classic models. The paper is clear and easy to understand. The idea is interesting. However, the experimental results are not sufficient or convincing enough to justify it. 1) The authors cited the relevant literature, but there is no comparison with any of these related works. 2) Does this modulator actually help CNN and LSTM architectures, and how? Recently, there have been many advanced CNN and LSTM architectures. The experiments the authors showed were with only 2-layer CNNs and a 1-layer LSTM. There should be at least some comparison with an architecture that contains more layers/units and performs well. There is a DenseNet comparison, but it seems to have an error. See 4) for more details. 3) The authors mentioned that the modulator can be used as a complement to the attention and gate mechanisms. Indeed, they are very similar. However, the benefit is unclear. More experiments should be presented comparing models with the proposed modulator, attention, and gates, especially their learning behavior and performance differences. 4) The comparison in Table 2 is not convincing. - The baseline is too simple. For instance, on CIFAR-10, a simple CNN architecture introduced much earlier (like LeNet-5 or AlexNet) performs better than the vanilla CNNs or modulated CNNs. - The DenseNet accuracy reported in Table 2 is different from the original paper: DenseNet (Huang et al. 2017) on CIFAR-10 has 1.0M parameters and 93% accuracy, but in this paper it is 88.9%. Even the accuracy of the modulated DenseNet is 90.2%, which is still far from the original DenseNet. Furthermore, there are many recent variations of DenseNet, e.g., SparseNet, a sparsified DenseNet with an attention layer (Liu et al. 2018), with 0.86M parameters and 95.75% accuracy. The authors should check their experiments and related papers more carefully. Side note: page 4, Section 3.1, "The vanilla DenseNet used the structure (40 in depth and 12 in growth-rate) reported in the original DenseNet paper (Iandola et al., 2014)". This DenseNet structure is from Huang et al. 2017, not from Iandola et al. 2014.
ICLR
1. What is the focus of the paper regarding neural networks? 2. What are the strengths of the proposed approach, particularly in its simplicity and performance? 3. What are the weaknesses of the paper, especially regarding the evaluation of the modulator value and the lack of clarity on the separation of modulator weights? 4. How does the reviewer assess the novelty and significance of the introduced twist in activation functions? 5. Are there any concerns regarding the experimental results and their interpretation?
Review
Review The paper introduces a new twist to the activation of a particular neuron. They use a modulator which looks at the input and performs a matrix multiplication to produce a vector. That vector is then used to scale the original input before passing it through an activation function. Since this modulating scalar can look across neurons to apply a per-neuron scalar, it overcomes the problem that otherwise neurons cannot incorporate their relative activation within a layer. They apply this new addition to several different kinds of neural network architectures and several different applications and show that it can achieve better performance than some models with more parameters. Strengths: - This is a simple, easy-to-implement idea that could easily be incorporated into existing models and frameworks. - As the authors state, adding more width to a vanilla layer stops increasing performance at a certain point. Adding more complex connections to a given layer, like this, is a good way forward to increase capacity of layers. - They achieve better performance than existing baselines in a wide variety of applications. - The reasons this should perform better are intuitive and the introduction is well written. Weaknesses: - After identifying the problem with just summing inputs to a neuron, they evaluate the modulator value by just summing inputs in a layer. So while doing it twice computes a more complicated function, it is still a fundamentally simple computation. - It is not clear from reading this whether the modulator weights are tied to the normal layer weights or not. The modulator nets have more parameters than their counterparts, so they would have to be separate, I imagine. - The authors repeatedly emphasize that this is incorporating "run-time" information into the activation. This is true only in the sense that feedforward nets compute their output from their input, by definition at run-time. This information is no different from the tradition input to a network in any other regard, though. - The p-values in the experiment section add no value to the conclusions drawn there and are not convincing. Suggested Revisions: - In the abstract: "A biological neuron change[s]" - The conclusion is too long and adds little to the paper
ICLR
Title Exploring Target Driven Image Classification Abstract For a given image, traditional supervised image classification using deep neural networks is akin to answering the question 'what object category does this image belong to?'. The model takes in an image as input and produces the most likely label for it. However, there is an alternate approach to arrive at the final answer, which we investigate in this paper. We argue that, for any arbitrary category $\tilde{y}$, the composed question 'Is this image of an object category $\tilde{y}$?' serves as a viable approach for image classification via deep neural networks. The difference lies in the additional information supplied in the form of the target along with the image. Motivated by the curiosity to unravel the advantages and limitations of the addressed approach, we propose Indicator Neural Networks (INN). It utilizes a pair of image and label as input and produces an image-label compatibility response. INN consists of 2 encoding components, namely a label encoder and an image encoder, which learn latent representations for labels and images, respectively. The predictor, the third component, combines the learnt individual label and image representations to make the final yes/no prediction. The network is trained end-to-end. We perform evaluations on image classification and fine-grained image classification datasets against strong baselines. We also investigate various components of INNs to understand their contribution to the final prediction of the model. Our probing of the modules reveals that, as opposed to its traditionally trained deep counterpart, INN attends to much larger regions of the input image for generating the image features. The generated image feature is further refined by the generated label encoding prior to the final prediction. 1 INTRODUCTION Deep neural networks achieve state of the art in supervised classification across different tasks (Rawat & Wang, 2017; Girdhar et al., 2017; Yang et al., 2016). Our work focuses on supervised image classification. Conventionally, while training, the network $f_\theta$ is provided as input a set of training images $X$ and corresponding labels $Y$. It learns by predicting the class labels $\hat{Y} = f_\theta(X)$ and minimising a predefined loss function $L(\hat{Y}, Y)$. During inference, the network predicts the most likely category for the input image. This approach is analogous to asking a person to name the object present in an image. An alternate approach is to present an image and a class category, say cat, and ask if the image is of a cat. However, under this scheme, one has to exhaustively query every known category to arrive at a final answer. Figure 1 illustrates these scenarios in a natural setting. Prior to the dominance of deep learning based approaches, many methods relied on one-vs-rest SVMs (Cortes & Vapnik, 1995) trained on handcrafted image features (Sánchez et al., 2013). The direction sought in this work has a large overlap with the idea of one-vs-rest classification. As we will see in the subsequent sections, we intend to perform one-vs-rest classification with a single model. To the best of our knowledge, this alternate approach for supervised image classification has not yet been explored in the setting of deep neural networks. This paper is driven by the curiosity to understand the implications of adopting this plausible alternate strategy of framing the supervised classification task.
Our core contributions are as follows: • We explore an alternate strategy of performing supervised image classification using labels as additional cues for inference. To the best of our knowledge, this is the first work which provides a unique re-interpretation of the multi-class classification problem. • To model such a strategy with deep neural networks, we propose a novel architecture termed the Indicator Neural Network (INN). INN produces a binary response conditioned jointly on the input image and the query label. It performs multiple one-vs-rest classifications to arrive at the final label assignment. • Our experiments show that INNs outperform strong baselines on various image classification datasets. These baselines depict the 'traditional' route of training an image classifier. • We qualitatively and quantitatively investigate the various components of INN and highlight the differences arising due to our pursued structure of the problem. We have structured the paper as follows: we dive deeper into the motivation behind proposing a new architecture for supervised image classification in section 2. In section 3, we describe the said architecture and its training and test-time methodology. We visit related work w.r.t. the proposed architecture in section 4. Section 5 briefly covers the implementation details of the proposed model, selected baselines, and chosen datasets. Through sections 6 – 9 we perform various experiments to obtain insights into the strengths and weaknesses of the proposed model. We conclude in section 10 by summarising our efforts and discussing the research directions emanating from our work. 2 MOTIVATION FOR A NOVEL ARCHITECTURE The literature on supervised image classification is vast; as a result, we restrict the discussion to deep learning approaches. The existing solutions for image classification, ranging from AlexNet (Krizhevsky et al., 2012) to EfficientNets (Tan & Le, 2019), take the 'traditional' direction for image classification. The traditional direction is depicted in figure 1 (left) as a person predicting the category solely based on the input image. These deep learning solutions generate a probability distribution over all known categories as a response and ultimately select the category corresponding to the highest response. The learning of such solutions is backed by the categorical cross-entropy loss (Baum & Wilczek, 1988; Solla et al., 1988), which provides a well-established framework for training and inference. Other than changing the base architecture, approaches have also been proposed which utilize target transformations (Szegedy et al., 2016; Jarrett & van der Schaar, 2020; Sun et al., 2017) and data augmentations (Hongyi Zhang, 2018; Yun et al., 2019) to aid supervised classification. However, these approaches also do not modify the query-response structure of the classifier. Arguably, the predictions of a k-way classification model can be interpreted as answering a multi-cue query. This can be achieved by focusing on a single output unit. However, we have to understand that this response is still conditioned only on the input image. Moreover, the learning process ignores the supplied target label. A recently proposed approach (Khosla et al., 2020) tries to diverge from the norm by utilizing contrastive estimation (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013) to perform the task of supervised image classification.
In a two-step process, it first computes an ideal embedding space using positive (images of the same category) and negative (images from other categories) samples. After learning the embedding function, it then trains a traditional classifier (based on the cross-entropy loss) on the computed embeddings. The final response, however, is yet again an answer to the query 'Which category does this image belong to?' conditioned only on the input image. As we noted from the above discussion, the existing methods do not provide us with an appropriate way to model supervised predictions conditioned on images and labels; specifically, one allowing us to model the query 'Is this image of a cat?'. As a result, we propose a novel architecture termed Indicator Neural Networks (INN), which we introduce in the subsequent section. 3 METHOD We consider a random image-label pair $(x, \tilde{y})$. We represent a deep neural network as $f_\theta$, with learnable parameters $\theta$. Let $\tilde{y}$ represent a one-hot encoded vector of a randomly sampled category. To infer the ground-truth category for an input image, all pairings of the image and class categories are required to be queried. The class label for which the largest response is recorded can be assigned as the predicted category for the displayed image. Assuming there are $Y'$ unique labels in the data, this implies $Y'$ queries for obtaining the predicted category of one image. We model this approach using INNs, $f_\theta(x, \tilde{y})$. The naming is motivated by indicator functions ($\mathbb{1}_{\tilde{y}=y}$), as for a single input of image and label, the aim of the model is to predict
$$f_\theta(x, \tilde{y} \mid y) = \hat{y} = \begin{cases} 1, & \text{if } \tilde{y} = y \\ 0, & \text{otherwise,} \end{cases} \quad (1)$$
where $y$ is the correct label corresponding to $x$. Realistically, an INN will output $\hat{y} \in [0, 1]$. 3.1 INN ARCHITECTURE We break down $f_\theta$ into its components, which comprise an image encoder, a label encoder, and a predictor, denoted respectively as:
$$f_{\theta_1}(x) = z \in \mathbb{R}^d, \quad f_{\theta_2}(\tilde{y}) = \psi \in \mathbb{R}^d, \quad f_{\theta_3}(z, \psi) = \hat{y} \in [0, 1]. \quad (2)$$
Here, $d$ represents the dimension of the embedded features; $z$ and $\psi$ are the image and label encodings, respectively. Note that for generating $z$ the input $\tilde{y}$ is irrelevant, and similarly, for $\psi$ the input image does not matter. The predictor utilises $z$ and $\psi$ to generate the joint image-label representation $h = z \circ \psi \in \mathbb{R}^d$, where $\circ$ is element-wise multiplication. It then utilizes $h$ to make the final linear classification decision. Figure 2 shows the pipeline described above alongside the last layers of a traditionally trained model for a visual comparison. Hypothesis: To have a better understanding of what the model is performing under the hood, we can consider $\psi$ comparable to a 1-d attention map. As a result, $\psi$ will magnify or diminish certain features in $z$ to produce a refined $h$. We suspect that this reduces the burden on the image encoder to produce strongly category-discriminative features and allows the network to attend to larger regions of the input image. But what stops the image encoder from focusing on irrelevant regions of the input image? To answer this, we have to change the perspective with which we observe $h$. We can also view $h$ as a non-uniformly scaled label embedding ($\psi$ scaled by $z$). The predictor is necessarily a linear classification head, and for it to function appropriately, the $z$ extracted from different images of the same category should be similar, as this will allow the predictor to learn meaningful classification boundaries. As an example, the image encoder will seek common characteristics in all the images of the category dog.
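To make the architecture concrete, here is a minimal sketch of the INN forward pass and the query-every-label inference rule described above, written in PyTorch-style Python. This is not the authors' code; the sigmoid output, encoder backbones, layer sizes, and all names are illustrative assumptions.

import torch
import torch.nn as nn

class INN(nn.Module):
    """Indicator Neural Network sketch: image encoder, label encoder, predictor."""

    def __init__(self, image_encoder, num_classes, d):
        super().__init__()
        self.image_encoder = image_encoder               # backbone mapping images to R^d
        self.label_encoder = nn.Sequential(              # 2-layer MLP on one-hot labels
            nn.Linear(num_classes, d // 2), nn.Linear(d // 2, d))
        self.predictor = nn.Linear(d, 1)                 # linear head on h = z * psi
        self.num_classes = num_classes

    def forward(self, x, y_onehot):
        z = self.image_encoder(x)                        # image encoding z
        psi = self.label_encoder(y_onehot)               # label encoding psi
        h = z * psi                                      # element-wise product
        return torch.sigmoid(self.predictor(h)).squeeze(-1)  # compatibility in [0, 1]

    @torch.no_grad()
    def classify(self, x):
        # Query every label and keep the one with the largest response.
        eye = torch.eye(self.num_classes, device=x.device)
        scores = torch.stack(
            [self.forward(x, eye[c].expand(x.size(0), -1)) for c in range(self.num_classes)],
            dim=1)
        return scores.argmax(dim=1)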
3.2 INN TRAINING To train the INN, we utilise positive and negative pairings of images and labels. The target of the model is to predict no (0) for an incorrect pairing and yes (1) for a correct one. For a batch of correctly paired input data (of size $b$), we first extend the batch by concatenating randomly generated incorrect pairings to it. If $N$ is the desired number of incorrect pairings per image per batch, then the resulting size of the input batch after the concatenation operation will be $(N + 1) \times b$. By applying the i.i.d. assumption for image-label pairs, we can write the empirical log-likelihood which the network aims to maximize as:
$$\log P(\hat{Y} \mid X, \tilde{Y}; \theta) = \log \prod_{i=0}^{b \times (N+1)} P(\hat{y}_i \mid x_i, \tilde{y}_i; \theta) = \sum_{i=0}^{b \times (N+1)} \log P(\hat{y}_i \mid x_i, \tilde{y}_i; \theta) \quad (3)$$
Alternatively, in terms of loss, for a single image ($x$), input query label ($\tilde{y}$), and ground-truth class label ($y$), the corresponding loss is denoted as $L(f_\theta(x, \tilde{y}), \mathbb{1}_{\tilde{y}=y})$. We employ binary cross-entropy for the implementation of the loss. We extend the loss for a single image to the entire dataset as
$$L(X, Y) = \frac{1}{|X|} \sum_{(x, y) \in (X, Y)} \left\{ \frac{1}{K_1} L(f_\theta(x, y), 1) + \frac{1}{K_2} \sum_{\tilde{y}_i \in Y' - \{y\}}^{i < N} L(f_\theta(x, \tilde{y}_i), 0) \right\} \quad (4)$$
3.2.1 COMPARISON TO TRADITIONAL TRAINING It is relevant to point out the differences between an INN and the traditional mode of training. 1. Traditionally, the networks designed for supervised classification maximise the likelihood $P(Y \mid X; \theta)$. In our case, the predictions are conditioned both on the input image and the randomly supplied target. 2. Negative labels are involved indirectly in the loss computation (cross-entropy) due to the softmax operation (Goodfellow et al., 2016, Chapter 6.2.2.3). The supplied target corresponds to the correct label, and the resulting contribution to the loss is from the output unit corresponding to this target label. In our framework, the negative classes (stemming from incorrect pairings) are directly involved in the loss computation, as we explicitly provide a dedicated target for them. 3. The backpropagated gradient $\frac{\partial L}{\partial h}\frac{\partial h}{\partial z}$ for the image encoder branch is scaled by $\psi$ due to the nature of the bilinear operation. Similarly, for the label encoder, the gradients are scaled by $z$. This aspect allows the model to eventually learn compatible representations to make the final prediction. 3.3 INN INFERENCE For inferring the class label of an input image $x$, we select the input label which yields the largest response. Formally,
$$\hat{y} = \arg\max_{\tilde{y}} f_\theta(x, \tilde{y}), \quad \forall \tilde{y} \in Y' \quad (5)$$
4 RELATED WORK Two-stream models have been deployed successfully for the tasks of action recognition (Simonyan & Zisserman, 2014; Feichtenhofer et al., 2016), video classification (Wang et al., 2018), fine-grained image classification (Lin et al., 2015), multi-label image classification (Yu et al., 2019), and aerial scene classification (Yu & Liu, 2018), to name a few. Apart from the evident difference in the application of these models, the differences lie in the choice of inputs and the function for fusing the two stream outputs. Many approaches have been proposed which utilize labels as auxiliary inputs in image classification (Weston et al., 2010; Frome et al., 2013; Akata et al., 2016; Sun et al., 2017), text classification (Weinberger & Chapelle, 2009; Guoyin Wang, 2018; Dong et al., 2020), and text recognition (Rodriguez-Serrano et al., 2015). In computer vision, these approaches rely on a language model (Mikolov et al., 2013) trained on external data to obtain label embeddings.
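Returning to the training procedure of Section 3.2, the following is a minimal sketch (PyTorch-style Python, not the authors' code) of how a batch of correct pairings can be extended with N random incorrect pairings per image and scored with binary cross-entropy. Uniform sampling of the negative labels and a simple average in place of the 1/K1 and 1/K2 weights of Eq. (4) are simplifying assumptions, as are all names.

import torch
import torch.nn.functional as F

def inn_training_step(model, images, labels, num_classes, N=1):
    """One INN loss computation: correct pairings get target 1, randomly
    generated incorrect pairings get target 0 (a sketch of Section 3.2)."""
    b = images.size(0)
    one_hot = F.one_hot(labels, num_classes).float()
    pos_scores = model(images, one_hot)                           # targets = 1
    loss = F.binary_cross_entropy(pos_scores, torch.ones(b, device=images.device))

    for _ in range(N):
        # Sample a wrong label for each image by shifting the true label.
        offsets = torch.randint(1, num_classes, (b,), device=images.device)
        neg_labels = (labels + offsets) % num_classes
        neg_one_hot = F.one_hot(neg_labels, num_classes).float()
        neg_scores = model(images, neg_one_hot)                   # targets = 0
        loss = loss + F.binary_cross_entropy(neg_scores, torch.zeros(b, device=images.device))

    return loss / (N + 1)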
The main focus of the label-embedding approaches mentioned above (Frome et al., 2013; Gang Wang & Forsyth, 2009; Wang & Mori, 2010; Akata et al., 2016) is to use the pre-learnt embeddings to enforce high similarity between image representations of contextually similar categories. These methods are targeted towards zero-shot learning, as they rely on the enforced similarities to detect novel image categories. As opposed to this existing line of work, we use one-hot encodings as input to our classifier, which removes the requirement to utilize any external data. Also, we work without explicitly enforcing similarity constraints on the learnt embeddings. In our training we utilize negative pairings of images and labels. This idea is based on the principle of noise contrastive estimation (Gutmann & Hyvärinen, 2010). SCL (Khosla et al., 2020) also follows this direction to learn meaningful embeddings in their classification approach. Their positive and negative samples consist of images from the same and from different categories respectively. In contrast, we consider correctly paired image-label combinations as positives and incorrectly paired image-labels as negatives. Also, ours is a single-stage, end-to-end differentiable training routine. In INNs, we can assign to label encodings the role of a 1-d attention map (Xu et al., 2015). For image classification, existing attention-based approaches (Wang et al., 2017; Woo et al., 2018; Hu et al., 2018; Bello et al., 2019; Jetley et al., 2018) introduce spatial or channel-wise attention at different depths of a traditional neural network. In contrast to our proposed model, this modification is made to the image encoder. We can easily replace INN's image encoder with one equipped with such an attention mechanism; this would incorporate a dual attention mechanism at the level of label fusion and image embedding. However, INNs depict one of the simplest ways of modelling the pursued query structure, and it is this formulation which gives rise to the attention. The attention-based approaches mentioned above focus on answering the query ‘What category does the image belong to?’. Moreover, we focus our work on comparing different approaches for modelling the classification task rather than different mechanisms for performing a traditional classification task.

5 IMPLEMENTATION DETAILS

Datasets: Throughout the paper, we refer to the size of a dataset as the number of unique categories it contains. For small datasets we use CIFAR-10, STL-10, BMW-10 (an ultra-fine-grain cars dataset), CUB-20 (formed using 20 categories of CUB-200-2011), and Oxford-IIIT Pets. The study involving a larger dataset uses CUB-200. The table provided in appendix A.2 shows the common statistics of the utilized datasets.

Architectures: Here we provide brief details of the selected baselines and of INN. All models are trained from scratch to provide an even ground for comparison. Detailed hyper-parameters are provided in appendix D.

• Baseline-Traditional (B-T): We select Resnet-18 (He et al., 2015) trained with categorical cross-entropy loss as our traditional baseline. It is a widely popular architecture and portrays the standard manner of training an image classifier (Khosla et al., 2020; Tan & Le, 2019). An evaluation with a VGG-11 (Simonyan & Zisserman, 2015) model is shared in appendix A.4.
• Baseline-Multi-Label (B-ML): We train the Resnet-18 as a multi-label classifier (Nam et al., 2014). Each of the Y′ output units is treated independently with its own binary cross-entropy computation.
This allows us to use Y′ − 1 output units as negative targets in training.
• Supervised Contrastive Learning (SCL) (Khosla et al., 2020): In this recently proposed approach, the authors make use of contrastive-loss-based supervised representation learning. In a second step, a linear classifier is trained on top of the learnt representations using standard cross-entropy loss. We train Resnet-18 using the official code (https://github.com/HobbitLong/SupContrast).
• INN: We describe the implementation details of the different components of an INN below.
– Image Encoder: We use a Resnet-18 without the fully connected final layer.
– Label Encoder: We use a 2-layer MLP with no activation (see appendix C.1 for an ablation with activations). The numbers of units per layer are d/2 and d. Overall, INN introduces approximately d × d/2 additional parameters; for Resnet-18, d = 512.
– Predictor: z and ψ are combined to form h using an element-wise product. h is then connected to the output units, which form the fully connected final layer for prediction.

6 EXPERIMENT: WHAT DOES THE NETWORK SEE?

Grad-CAM (Selvaraju et al., 2017) is an approach for interpreting the predictions of a network by qualitatively assessing the identified salient regions in the input image. It utilises the gradient of the classification output w.r.t. a feature map to generate coarse heatmaps highlighting important spatial locations in the input image. Recently, Adebayo et al. (2018) assessed different approaches for interpreting a network's predictions; as per their findings, Grad-CAM generates meaningful heat maps and passed their meticulously constructed sanity tests. Grad-CAM has been utilised by many approaches (Yun et al., 2019; Woo et al., 2018) to emphasise the regions attended to by a network. We use Grad-CAM for a similar purpose and perform a qualitative and quantitative comparison w.r.t. the baselines.

Qualitative analysis: Figure 3 shows the heatmaps produced for sample input images by the baseline and INN models. We can notice a significant difference in the spatial spread of the salient regions. Comparing the baselines, we observe a larger spread of the heatmap for B-ML than for B-T and SCL. The heatmaps generated for SCL and B-T appear to be localized to highly distinguishable regions. In contrast, the visuals indicate that INN looks at a wider region for making a label-specific prediction.

Quantitative analysis: To quantify the salient regions, we scale the heatmaps between 0 and 1 and consider pixels with values greater than t = 0.5 as salient. We use the training set for this comparison. Since we are focused on assessing how the attended regions vary across methods, the use of training data does not restrict us from this goal and, moreover, provides a larger overlap of accurately predicted samples for computing the salient regions. Table 1 contains the proportion of an image, on average, considered salient as per Grad-CAM. The results are in line with the qualitative assessments we made: for the majority of the datasets, B-ML and INN produce larger salient regions of the input image. We do not claim that focusing on larger regions is beneficial compared to more focused, distinguishable features; we only aim to support our hypothesis about the working of an INN. As per our assumption, we hypothesized that the production of disjoint representations z and ψ allows for less discriminative image features z.
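The salient-area measurement just described can be sketched in a few lines. The array shape and the source of the Grad-CAM maps are assumptions; any Grad-CAM implementation producing per-image heatmaps could be plugged in.

```python
import numpy as np

def salient_fraction(heatmaps: np.ndarray, threshold: float = 0.5) -> float:
    """Average proportion of pixels considered salient (as reported in Table 1).

    heatmaps: array of shape (num_images, H, W) containing raw Grad-CAM maps.
    """
    fractions = []
    for hm in heatmaps:
        # Scale each heatmap to [0, 1] before thresholding at t = 0.5.
        hm = (hm - hm.min()) / (hm.max() - hm.min() + 1e-8)
        fractions.append((hm > threshold).mean())
    return float(np.mean(fractions))
```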
Here, we interpret the increase in the spatial spread of saliency as evidence of less discriminative image features, thereby supporting our hypothesis.

7 EXPERIMENT: IMAGE CLASSIFICATION

We evaluate the performance of INNs on small datasets (Y′ < 50). To train INNs, we use K1 = K2 = 1 as the values of the scaling constants in equation 4.

Results: The results reported in table 2 highlight the effectiveness of INNs. There are four key observations to be made. Firstly, B-T and B-ML show a peculiar trend across datasets. On STL-10, B-ML outperforms B-T; we hypothesize that this is because its predictions are based on a larger input image region, which proves beneficial where categories are visually dissimilar. Conversely, for fine-grained visual classification datasets, where the categories are highly similar, B-T performs better. Secondly, there is a significant difference in performance between the baselines and INN(N=9) for the majority of the datasets. For CIFAR-10, the results are comparable; we believe that the small size of the input image does not provide much room for improvement. To verify this, we conduct an experiment in appendix A.5 with the images of STL-10 resized to 32 × 32, and we observe the same trend of limited improvement for resized STL-10 as for CIFAR-10, which supports our theory. Thirdly, as the value of N increases, the performance of INN increases. We believe this is a direct consequence of providing more negative label examples for a given input image during training: by providing many more samples, the network can learn better (more compatible) representations. Lastly, INN outperforms the contrastive-learning-based approach, SCL. For CUB-20 and Pets, we expect further improvement in the performance of INN, as the value of N used is smaller than the maximum allowed for these datasets.

8 EXPERIMENT: IMPORTANCE OF z AND ψ

To understand the relevance of z and ψ, we train a linear classifier on top of z in the traditional manner using multi-class cross-entropy loss. We compare the accuracy of the model obtained with that of INN. This helps us understand the nature of z as well as the improvements made by ψ.

Implementation details: Using the train split of the data, we gather ztrain from fθ1. Note that the choice of input ỹ is irrelevant for producing z. Next, we train a multi-class logistic regression classifier using stochastic gradient descent on (ztrain, ytrain). Additional training details are shared in appendix D.8. For inference, we pass ztest to the learnt classifier and record the predicted class. The INN models selected for extracting z correspond to INN(N = 9) in table 2.

Results: Table 3 shows the performance of a classifier trained on top of z in comparison to INN(N = 9). We observe that for the image classification datasets CIFAR-10 and STL-10, the classification performance of the two approaches is highly comparable. However, we observe significant differences for the fine-grained visual classification datasets. We believe that, due to the high visual dissimilarity between categories in CIFAR-10 and STL-10, the obtained z is sufficient to perform the classification task, whereas in the fine-grained datasets, since the categories are visually quite similar, ψ plays an important role in further refining the representations. These observations are in line with our hypothesis about the working of the model. To further highlight the nature of z and ψ, we perform additional experiments in appendix C.2.
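A sketch of this linear probe, following the SGDClassifier settings given in appendix D.8, is shown below. The frozen-model handle (inn_model), the data-loader variables, and the feature-extraction helper are illustrative assumptions; the image_encoder attribute refers to the INN module sketched in Section 3.1's example.

```python
import numpy as np
import torch
from sklearn.linear_model import SGDClassifier

@torch.no_grad()
def extract_features(model, loader):
    """Collect image encodings z = f_theta1(x) for every example in a data loader."""
    feats, labels = [], []
    for x, y in loader:
        z = model.image_encoder(x).flatten(1)   # the label input is irrelevant for producing z
        feats.append(z.cpu().numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# Multi-class logistic regression on z_train with the appendix D.8 settings.
z_train, y_train = extract_features(inn_model, train_loader)
z_test, y_test = extract_features(inn_model, test_loader)
clf = SGDClassifier(loss="log", tol=1e-5)   # 'log' per D.8; newer scikit-learn versions name this 'log_loss'
clf.fit(z_train, y_train)
print("linear-probe accuracy:", clf.score(z_test, y_test))
```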
9 EXTENSION TO LARGER DATASETS

So far, we have observed that the approach of utilizing labels as an additional cue allows us to perform the task of multi-class classification. However, the datasets considered only included a few unique categories. In this section, we reflect upon the shortcomings of adopting our pursued approach and, subsequently, the failure modes of INNs.

• For smaller datasets, the larger the value of N, the higher the classification accuracy. If we extend this logic to larger datasets such as ImageNet (Deng et al., 2009), the best value of N will be close to 1000. Using a traditional batch size (b) of 128 would push the effective batch size to 128,000, larger than the largest considered for large mini-batch training methods (Goyal et al., 2017). To counter such large values of N, one can significantly reduce b, which in turn extends the training time from days to months. In order to draw relevant conclusions in a reasonable time frame, we limit the discussion in this section to CUB-200, which contains 200 unique categories.
• The latent dimension plays an important role in the predictive performance. We conducted experiments on CUB-200 and CUB-20 by varying the latent dimension of the model over 64, 128, 512, and 1024, and observe that the impact is larger for CUB-200 than for CUB-20. The details of the corresponding experiment are described in appendix C.3.
• The large imbalance of positive and negative samples arising from increasing N can destabilize INN training. For similar reasons, we observe B-ML training to collapse as well. We can balance the weights for positive and negative targets by adjusting their contributions to the loss; however, we find that this approach impedes INN performance. As an alternative, instead of training an INN from scratch with larger values of N, one can initialize the weights from an INN trained on a smaller value N′, where N′ < N. By doing this, we find that INN(N) not only surpasses the accuracy of INN(N′) but also performs comparably to the baseline. The corresponding experimentation details and results are provided in appendix C.4.

10 DISCUSSION & CONCLUSION

As opposed to the traditional approach, we explored the applicability of a target-driven method. Specifically, we modelled the question ‘Does the given image belong to category ỹ?’. We showed that it is possible to tackle the multi-class classification problem from a non-traditional perspective. Our aim was not to show that the pursued approach is better; rather, we aimed to explore and highlight the pros and cons of this unexplored paradigm. Our approach adapts the classical one-vs-rest approach in a modern deep learning setting. To achieve this goal, we introduced INNs, which rely on a pair of input image and target label to produce a response. By inferring exhaustively with all the target categories we arrive at the final decision. Our study involving class activation maps revealed that INNs utilize much larger regions of the input image to generate features. We hypothesize that the imposed independence of image and label embeddings allows the image encoder to attend to larger regions rather than to the highly discriminative features of traditional approaches. We also explored the scenarios where the learned image features are adequate to learn a traditional classifier on top; this observation was made for cases where the categories are visually dissimilar. Label embeddings refine the coarse image representations immensely for fine-grained tasks.
By pitting INNs against strong baselines, we were able to highlight the strengths of our adopted approach in comparison. The INNs outperformed the baselines on all the datasets (Y′ < 50) considered for image classification and fine-grained image classification. Additional experiments on out-of-distribution detection (OOD, appendix C) and label-embedding analysis (appendix B) help broaden our understanding of the one-vs-rest setting. The OOD analysis shows that INN performs comparably to the contrastive-learning-based SCL. An indicative qualitative result on the learnt label embeddings shows that similar categories often have nearby label embeddings. On the downside, we witnessed the difficulties of extending the method to larger datasets; we consider the dependency on the latent dimension and on N the main reasons for this limitation. To make the approach scalable, we believe constructing a smarter negative-sampling approach will be the direction moving forward. We see numerous avenues for future research. Our proposed direction of training a neural network is comparable to classical one-vs-rest approaches (Sánchez et al., 2013). With the sudden outburst and adoption of deep learning approaches, the classical one-vs-rest direction has been phased out, and covering and comparing all the aspects of a traditionally trained neural network that have evolved over the past years in a single work is not feasible. As a result, there is a multitude of directions for adopting a one-vs-rest approach as devised in this work. Some directions include, but are not limited to, object detection (Ren et al., 2015), image segmentation (Chen et al., 2018), and anomaly detection (Chandola et al., 2009). Our main focus will be to extend our experimentation theme (and not just the INN) to these problems and analyse its subsequent impact. We will publicly share the source code supplied in the supplementary material to facilitate brisk research.

A APPENDIX

A.1 NOTATIONS

A.2 DATASET STATISTICS

A.3 GRAD-CAM VISUALIZATIONS

We provide more visualisations to compare the recognised salient regions across baselines in figure 4.

A.4 EXPERIMENT: VGG IMAGE ENCODER

In this section we replace the image encoder of the INN with a VGG-11 (with batch normalisation) model. For the INN, we use the features from the last convolutional block after an adaptive average pooling. Results: Table 6 shows that the VGG-based INN outperforms the baselines by a large margin. For CIFAR-10, we suspect that, as with the Resnet-based INN, the small size of the input image restricts the added advantage of the target-driven approach.

A.5 EXPERIMENT: RESCALED STL-10

For this experiment, we downscale the STL-10 images to 32×32 to bring them down to the same size as CIFAR-10. For training, we use hyper-parameters identical to those used for training the model on the unaltered STL-10 dataset. Results: We notice in table 7 that the INN performance is quite similar to that of the baseline when the image size is small; a similar trend was observed for CIFAR-10 as well. We believe that the INN and the baseline both utilize an equal portion of the input image to generate representations, which leads to similar accuracy.

B EXPERIMENT: LABEL EMBEDDINGS, ψ

We have witnessed that INNs rely on ψ and z to make a correct prediction. Also, depending on the content of the dataset, ψ can play a vital role in further improving the performance. In this experimental setup, we aim to explore more about ψ, specifically how different encoded labels relate to each other.
We believe that the visual content of the images drives the learning of the label embeddings, i.e. similar visual categories have nearby label representations. Though the results presented here are qualitative in nature, we believe they provide adequate evidence to back our claim.

Implementation details: We select INN(N = 9) for CIFAR-10 in this study. We generate ψY′ = {fθ2(ỹ) | ỹ ∈ Y′}. Next, we compute the L2 distance between every pair of entries in ψY′ as a measure of similarity. In table 8 we report the nearest matching label (smallest distance) for every category in the dataset.

Results: Though not perfect, for many source categories the nearest matching categories tend to be visually similar, for example the pairs truck-car and bird-airplane. However, we also see some non-apparent pairings such as deer-car and frog-car.

C EXPERIMENT: OUT-OF-DISTRIBUTION DETECTION

In this section, we examine the robustness of the learnt classifiers for detecting out-of-distribution (OOD) images. The standard approach is to utilise the predicted confidence to distinguish in- and out-of-distribution data (Hendrycks & Gimpel, 2017). Following this framework, we report the AU-ROC for models trained on the chosen datasets while tested on the out-of-distribution datasets LSUN (Yu et al., 2015), Tiny ImageNet (Le & Yang, 2015), and Fashion-MNIST (Xiao et al., 2017). The out-of-distribution datasets are standardised using the mean and standard deviation of the in-distribution datasets. The INN models chosen correspond to INN(N = 9) in table 2.

Results: The results reported in table 9 show that SCL and INN outperform the traditional baselines by a large margin for the majority of the datasets. The comparatively lower performance of INN for CUB-20 and Pets can be attributed to its limited training: the corresponding INNs were trained with N = 9, and we expect OOD performance to improve as the value of N used in training is increased.

[Table values: 90.81%, 90.53%, 86.5%, 90.02%, 90.76%]

C.1 EXPERIMENT: DIFFERENT ACTIVATIONS FOR LABEL ENCODER

In the main paper, the label encoder branch consisted of a 2-layer MLP with no activation. In this experiment, we apply the following four activations to the label encoder units and train INN(N = 9, b = 32) on the STL-10 dataset: 1. RELU (Glorot et al., 2011), 2. Leaky-RELU (Maas et al., 2013), 3. Sigmoid, 4. Tanh.

Results: The results indicate marginally better accuracy for RELU and Leaky-RELU; the Tanh and no-activation models closely follow in accuracy. For Sigmoid, the performance is low. Our hypothesis is that, due to the limited scaling nature of the logistic function, the features of z are under-refined. However, more extensive research is required to arrive at a stronger conclusion; we hope that our experiment provides an apt working ground for future research in this direction. To qualitatively assess the contributing regions of the image across activations, we provide Grad-CAM visualisations in figure 5. RELU, Leaky-RELU, Tanh, and no activation are able to rely on relevant regions of the input image while making the prediction; in the case of Sigmoid, we notice disorganised regions of attention.

C.2 EXPERIMENT: COMPATIBILITY OF ψ & z

To further highlight the fact that INNs do learn compatible representations and rely on both ψ and z to make an accurate prediction, we utilise the following four variations of ỹ for evaluating test accuracy on STL-10: 1. ỹ = y: we provide the correct class label as input. 2. ỹ ∈ Y′ − {y}: we provide a random incorrect class label as input.
3. ỹ = 1Y′: all the values in the input label vector are set to 1. 4. ỹ = 0Y′: all the values in the input label vector are set to 0. For evaluation, we record, for each individual query, the arg max over the yes/no response. If the representations are compatible, we should see a higher number of yes responses for case 1 than for all the other variations.

[Table 11 values: 85.2%, 0.004%, 0.0%, 0.0%]

Results: Table 11 shows that the label encoding ψ plays a vital role in the classification of the input images. Only when the image is paired with its corresponding ground-truth ỹ does the INN predict yes the majority of the time. For ỹ corresponding to an incorrect class, the number of samples predicted as yes is insignificant, and for the other two cases the INN never makes a yes prediction. This shows that INNs do rely on compatible z and ψ to generate a correct class prediction.

Visualisation: To further highlight the compatibility of ψ and z, we generate a UMAP (McInnes et al., 2018) plot. UMAP is a non-linear dimension reduction technique which has been utilised for visualising high-dimensional data. Figure 6 corresponds to the joint representations generated for training images (drawn as blobs) and a single test image of the STL-10 dataset (shown as a star). For generating the joint representations corresponding to the training set, htrain, the ground-truth ytrain are utilised, whereas for generating the test htest we provide every ỹ ∈ Y′; consequently, 10 points are generated for the single test image. The ground-truth label of the test image corresponds to airplane (integer label 0). The figure shows that only when the input label is the one-hot encoded vector corresponding to the ground-truth label airplane does h for the test image overlap with the training cluster (red dashed box). For other input labels, the test sample lies further away from its corresponding ỹ cluster.

C.3 EXPERIMENT: VARYING HIDDEN DIMENSION, d

In this experiment we aim to determine the impact of the latent dimension on the training of an INN. We conduct this experiment on the CUB-200 and CUB-20 datasets with N = 1. The latent dimension is selected from the values {64, 128, 512, 1024} for a Resnet-18 based INN.

Results: The results in figure 7 indicate the relevance of the dimensionality of the latent representations. The impact of the latent dimension is larger for CUB-200 than for CUB-20: for CUB-200 the accuracy increases with increasing dimensionality, whereas for CUB-20 the performance saturates roughly around d/Y′ = 10 and decreases later on. The results indicate that training on larger datasets requires networks with comparatively larger latent dimensions.

C.4 EXPERIMENT: CLASSIFICATION WITH CUB-200

In order to apply INN to CUB-200, we replace the Resnet-18 image encoder with Resnet-50; the latent dimension for Resnet-50 is 2048. The baseline for this study is B-T. For B-ML, we found that the network does not train and obtains an accuracy of 0.5%, which is random chance. Although the INN trains for small values of N, it fails to match this performance when trained from scratch with larger values. In order to enable training for an INN when N is large, we initialise the weights from INN(N′), where N′ < N. For example, we first train the model with N′ = 9 from scratch, and for the subsequent fine-tuning we select the value N = 15; if we wish to train on a larger value of N such as 24, we initialize the weights from the previously obtained INN(N = 15). In this study, we select N ∈ {15, 24, 31, 41, 51} and N′ = 9.
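A minimal sketch of this warm-starting procedure is given below. Only the idea of initialising INN(N) from a trained INN(N′ < N) is taken from the text above; the function names, checkpoint-path handling, and the training-loop callback are illustrative assumptions.

```python
import torch

def warm_start_inn(make_inn, train_fn, checkpoint_path, n_schedule=(9, 15, 24, 31, 41, 51)):
    """Train an INN with progressively larger N, initialising each stage from the previous one (appendix C.4).

    make_inn: factory returning a fresh INN module.
    train_fn: callable running the full training loop for a given N (e.g. repeated training_step calls).
    checkpoint_path: format string such as "inn_N{}.pt" used to save/load stage checkpoints.
    """
    model = make_inn()
    for stage, n in enumerate(n_schedule):
        if stage > 0:
            # Initialise from the INN trained with the previous, smaller N'.
            state = torch.load(checkpoint_path.format(n_schedule[stage - 1]))
            model.load_state_dict(state)
        train_fn(model, N=n)
        torch.save(model.state_dict(), checkpoint_path.format(n))
    return model
```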
Results: Figure 8 shows the increase in accuracy for an INN with increasing N obtained by applying iterative fine-tuning. The small increment in accuracy at each step is due to the proportionally smaller increment of N. N = 41 is roughly 20% of the categories of CUB-200, and we expect the INN to match and even surpass the baseline with higher values of N. However, we did observe a large jump in training time due to lowering b to accommodate the increasing N: the per-epoch time increases from 32 seconds for INN(N = 9) to 300 seconds for INN(N = 41).

D TRAINING DETAILS

We first cover the B-T, B-ML, and INN training hyper-parameters, and then move on to the SCL training hyper-parameters. The baselines (B-T, B-ML) are referred to as N=0 in this section. The deep learning framework used is PyTorch (Paszke et al., 2017), version 1.2.

D.1 CIFAR-10
• Training pre-processing: Random(cropping(32×32, padding=4), rotation(±15), horizontal flipping), normalisation (train mean, std. dev).
• Test pre-processing: Normalisation (train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 75, 150, 225, 275
• Batch sizes: (N=0, b=256), (N=1, b=128), (N={3, 7, 9}, b=64)

D.2 STL-10
• Training pre-processing: Random(cropping(96×96, padding=4), rotation(±15), horizontal flipping), normalisation (train mean, std. dev).
• Test pre-processing: Normalisation (train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 150, 200, 250, 300
• Batch sizes: (N=0, b=128), (N=1, b=128), (N=3, b=64), (N={7, 9}, b=32)

D.3 BMW-10
• Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation (train mean and std. dev).
• Test pre-processing: Center cropping (224×224), normalisation (train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 150, 225, 300
• Batch sizes: (N={0, 1, 3, 7}, b=32), (N=9, b=16)

D.4 CUB-20
• Categories: Black footed Albatross, Laysan Albatross, Sooty Albatross, Groove billed Ani, Crested Auklet, Least Auklet, Parakeet Auklet, Rhinoceros Auklet, Brewer Blackbird, Red winged Blackbird, Rusty Blackbird, Yellow headed Blackbird, Bobolink, Indigo Bunting, Lazuli Bunting, Painted Bunting, Cardinal, Spotted Catbird, Gray Catbird, Yellow breasted Chat. These are the first 20 categories as they appear in torchvision's (Marcel & Rodriguez, 2010) implementation of CUB-200.
• Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation (train mean and std. dev).
• Test pre-processing: Center cropping (224×224), normalisation (train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 150, 250, 300
• Batch sizes: (N={0, 1, 3, 7, 9}, b=32)

D.5 PETS
• Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation (train mean and std. dev).
• Test pre-processing: Center cropping (224×224), normalisation (train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 150, 225, 300
• Batch sizes: (N=0, b=128), (N=1, b=128), (N={3, 7}, b=64), (N=9, b=32)

D.6 CUB-200
• Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation (train mean and std. dev).
• Test pre-processing: Center cropping (224×224), normalisation (train mean and std. dev).
• N=0: Epochs=350, start learning rate=0.1, drop factor=0.2, drop epochs=[125, 200, 250, 300], batch size=128
• N=9: Epochs=500, start learning rate=0.1, drop factor=0.2, drop epochs=[100, 200, 300, 400, 450], batch size=64
• N=[15, 24, 31]: Epochs=300, start learning rate=0.005, drop factor=0.2, drop epochs=[100, 200, 250], batch size=[32, 20, 16]
• N=41: Epochs=300, start learning rate=0.0025, drop factor=0.2, drop epochs=[100, 200, 250], batch size=12
• N=51: Epochs=300, start learning rate=0.001, drop factor=0.2, drop epochs=[100, 200, 250], batch size=10

D.7 SCL TRAINING

Image pre-processing steps are identical to those mentioned in the corresponding previous subsections. Common parameters: temperature=0.1, decay=0.0001, cosine=True, and epochs=500.
• CIFAR-10 – Learning rate: 0.05 – Batch size: 256
• STL-10 – Learning rate: 0.5 – Batch size: 256
• BMW-10 – Learning rate: 0.1 – Batch size: 128
• CUB-20 – Learning rate: 0.5 – Batch size: 128
• Pets – Learning rate: 0.1 – Batch size: 128

D.8 LINEAR CLASSIFICATION USING z

We use the SGDClassifier provided by the sklearn (Pedregosa et al., 2011) library. Apart from the loss (loss='log') and tolerance (tol=1e-5), we use the default values to train the model.
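To show how the recipes listed in this appendix translate into code, the sketch below instantiates the D.1 CIFAR-10 training pre-processing and learning-rate schedule with torchvision and PyTorch. The normalisation statistics and the optimiser's momentum and weight decay are not stated above and are assumptions.

```python
import torch
import torchvision.transforms as T

# D.1 CIFAR-10 training pre-processing: random crop, rotation, horizontal flip, then normalisation.
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomRotation(15),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    # The exact train-set statistics are not listed in D.1; these commonly used CIFAR-10 values are assumed.
    T.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(0.2470, 0.2435, 0.2616)),
])

def make_optimizer_and_schedule(model: torch.nn.Module):
    """Start LR 0.1, dropped by a factor of 0.2 at epochs 75/150/225/275, for 350 epochs (D.1)."""
    # Momentum and weight decay are not stated in the appendix and are assumptions.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[75, 150, 225, 275], gamma=0.2)
    return optimizer, scheduler
```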
1. What is the main contribution of the paper regarding neural network architecture for classification? 2. What are the strengths and weaknesses of the proposed approach compared to previous works? 3. How does the reviewer assess the novelty and significance of the research question addressed by the paper? 4. What are some clarifying questions that could be asked regarding the training scheme, computational complexity, and comparison with other methods? 5. How does this work compare to traditional generative modeling approaches?
Review
Review Summary This paper proposes a neural network architecture to classify a combination of label and images. The network is target driven, because the prediction is conditioned on a query label. Results show improved accuracies for image classification (CIFAR10, STL10) and out-of-distribution detection Strong and Weak points Strong points Section 2.1 provides both background and intuition for the design choices of the model and training method. Experiment 8 provides deeper investigation into similarity of labels and how neural networks represent similar labels. Weak points In terms of novelty, it seems the idea stems back to 2006 with Chapter 2 of [3] The related work mentions many previous works from the past decade, but misses context in the earlier works on neural networks. The results miss a comparison against published works. Table 1 notes 95.10% accuracy on CIFAR10, which is incomparable to the 96.0% published in [4], possibly due to the use of a different architecture of the neural network. Moreover, the re-implemented baseline achieves 95.10% accuracy, while 33 models reported on paperswithcode.com [5] achieve higher accuracy on the same dataset (even without using additional training data). Likewise, 15 models achieve higher accuracy on the STL10 dataset [6], compared to the best proposed INN model, INN(N=9). Finally, Table 4 misses comparison against published works. For example, also published literature [9] and [10] evaluate Out-of-Distribution detection using AU-ROC, but their results are not compared against. I would want either a) a proper comparison, or b) an explanation why a comparison is not possible. For a dataset with 10 labels, the proposed method increases test time compute with a factor 10. Traditional methods like PixelCNN [7] or Normalizing Flows [8] have the same computational complexity. I miss the comparison with such models to evidence the choice of the proposed INN architecture. Statement Recommendation: Reject Reasons: The results miss comparison against published works [4, 5, 6] The motivation misses grounding in existing theory relating to Noise Contrastive Estimation, even though this was published in 2010 and received 600+ citations. Questions Clarifying questions: The training scheme is set up as an exhaustive K-way binary classification problem. Might there be a speed up using loss functions like InfoNCE [1], triplet losses [2] or max-margin [14]. These works are mentioned in the related work, but not compared in terms of required compute. How could one use this model for semi supervised learning? Chapter 2.2 in Chapelle 2013 [3] also employs a similar semi supervised setup. “We use a 2-layered MLP with no activation”: when there’s no activation function, why not collapse the 2 layers into 1 layer? What is the justification for constants K_1 and K_2, and how was their value determined? What is exactly the motivation for the research question? To quote from the introduction “Though seemingly tedious, to the best of our knowledge, this alternate approach has not yet been modelled by deep neural networks”. If the advantage would be in terms of adversarial attacks [12] or out-of-distribution robustness [13], then the evaluation section misses those numbers. How does this compare to traditional generative modelling? I.e. calculate p(x|y), instead of p(y|x)? 
Minor feedback These points are not part of my assessment Minor feedback Caption, figure 1: “represents” -> “represent”… Equation (1): there’s a definition for \tilde{y} and \hat{y}, but I miss the definition of normal y. May I assume it’s the correct label for the corresponding x? “It is widely popular architecture”. When using such adjectives I would prefer to see one or two references. Equation (3): the end condition of the summation is incorrect. There’s a sum over N-1 elements of the set Y ′ y . However, that iteration has no end condition i < N which is noted on top of the \Sigma sign. “However, in fgvc datasets since”. Could “fgvc” be either a typo, or an undefined abbreviation? [1] Oord, A. V. D., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. [2] Schroff, F., Kalenichenko, D., & Philbin, J. (2015). Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 815-823). [3] Olivier Chapelle, Bernhard Schölkopf, Alexander Zien: Introduction to Semi-Supervised Learning. Semi-Supervised Learning 2006: 1-12 [4] Khosla, Prannay, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. "Supervised contrastive learning." arXiv preprint arXiv:2004.11362 (2020). [5] https://paperswithcode.com/sota/image-classification-on-cifar-10 [6] https://paperswithcode.com/sota/image-classification-on-stl-10 [7] Van den Oord, Aaron, et al. "Conditional image generation with pixelcnn decoders." Advances in neural information processing systems. 2016. [8] Kingma, Durk P., and Prafulla Dhariwal. "Glow: Generative flow with invertible 1x1 convolutions." Advances in neural information processing systems. 2018. [9] van Amersfoort, J., Smith, L., Teh, Y. W., & Gal, Y. (2020). Simple and Scalable Epistemic Uncertainty Estimation Using a Single Deep Deterministic Neural Network. arXiv preprint arXiv:2003.02037. [11] Gutmann, M., & Hyvärinen, A. (2010, March). Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (pp. 297-304). [12] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. [13] Hendrycks, D., & Dietterich, T. (2019). Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261. [14] Taskar, B., Klein, D., Collins, M., Koller, D., & Manning, C. D. (2004, July). Max-margin parsing. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (pp. 1-8).
ICLR
Title Exploring Target Driven Image Classification Abstract For a given image, traditional supervised image classification using deep neural networks is akin to answering the question ‘what object category does this image belong to?’. The model takes in an image as input and produces the most likely label for it. However, there is an alternate approach to arrive at the final answer which we investigate in this paper. We argue that, for any arbitrary category ỹ , the composed question ‘Is this image of an object category ỹ’ serves as a viable approach for image classification via. deep neural networks. The difference lies in the supplied additional information in form of the target along with the image. Motivated by the curiosity to unravel the advantages and limitations of the addressed approach, we propose Indicator Neural Networks(INN). It utilizes a pair of image and label as input and produces a image-label compatibility response. INN consists of 2 encoding components namely: label encoder and image encoder which learns latent representations for labels and images respectively. Predictor, the third component, combines the learnt individual label and image representations to make the final yes/no prediction. The network is trained end-to-end. We perform evaluations on image classification and fine-grained image classification datasets against strong baselines. We also investigate various components of INNs to understand their contribution in the final prediction of the model. Our probing of the modules reveals that, as opposed to traditionally trained deep counterpart, INN tends to much larger regions of the input image for generating the image features. The generated image feature is further refined by the generated label encoding prior to the final prediction. 1 INTRODUCTION Deep neural networks achieve state of the art in supervised classification across different tasks (Rawat & Wang, 2017; Girdhar et al., 2017; Yang et al., 2016). Our work focuses on supervised image classification. Conventionally, while training, the network fθ is provided as input a set of training images X and corresponding labels Y . It learns by predicting the class labels Ŷ = fθ(X) and minimising a predefined loss function L(Ŷ , Y ). During inference, the network predicts the most likely category for the input image. This approach is analogous to asking a person to name the object present in an image. An alternate approach is to present an image and a class category say cat and ask if the image is of a cat. However, under this scheme one has to exhaustively query every known category to arrive at a final answer. Figure 1 illustrates these scenarios in a natural setting. Prior to the dominance of deep learning based approaches, many methods relied on one-vs-rest SVM(Cortes & Vapnik, 1995) trained on handcrafted image features(Sánchez et al., 2013). The direction saught in this work has a big overlap with the idea of one-vs-rest classification. As we will see in the subsequent sections, we intend to perform a one-vs-rest classification with a single model. To the best of our knowledge, this alternate approach for supervised image classification has not yet been explored in the setting of deep neural networks. This paper is driven by the curiosity to understand the implications of adopting the plausible alternate strategy of framing the supervised classification task. 
Our core contributions are as follows: • We explore an alternate strategy of performing supervised image classification using labels as additional cues for inference. To the best of our knowledge this the first work which provides a unique re-interpretation of the multi-class classification problem. • To model such a strategy with deep neural networks, we propose a novel architecture termed as Indicator Neural Network(INN). INN produces a binary response conditioned jointly on the input image and query label. It performs multiple one-vs-rest classifications to arrive at the final label assignment. • Our experiments show that the INNs outperform strong baselines on various image classification datasets. These baselines depict ‘traditional’ route of training an image classifier. • We qualitatively and quantitatively investigate the various components of INN and highlight the differences arising due to our pursued structure of the problem. We have structured the paper as follows: we dive deeper into the motivation behind proposing a new architecture for supervised image classification in section 2. In section 3, we describe the said architecture and it’s train and test time methodology. We visit related work w.r.t the proposed architecture in section 4. Section 5 briefly covers the implementation details of the proposed model, selected baselines and chosen datasets. Through sections 6 – 9 we perform various experiments to obtain insights into strengths and weaknesses of the proposed model. We conclude in section 10 by summarising our efforts and discussing the research directions emanating from our work. 2 MOTIVATION FOR A NOVEL ARCHITECTURE The literature for supervised image classification is vast, as a result, we restrict the discussion to deep learning approaches. The existing solutions for image classification ranging from AlexNet(Krizhevsky et al., 2012) to EfficientNets(Tan & Le, 2019) take the ‘traditional’ direction for image classification. The traditional direction is depicted in figure 1(left) as a person predicting the category solely based on the input image. These deep learning solutions generate a probability distribution over all known categories as a response and ultimately select the category corresponding to the highest response. The learning of such solutions is backed by categorical cross-entropy loss(Baum & Wilczek, 1988; Solla et al., 1988) which allows a well established framework for training and inference. Other than changing the base architecture, approaches have also been proposed which utilize target transformations(Szegedy et al., 2016; Jarrett & van der Schaar, 2020; Sun et al., 2017), data augmentations(Hongyi Zhang, 2018; Yun et al., 2019) to aid supervised classification. However, these approaches also do not modify the query-response structure of the classifier. Arguably, predictions of a k-way classification model can be interpreted as answering a multi-cue query. This can be achieved by focusing on a single output unit. However, we have to understand that this response is still conditioned only on the input image. Moreover, the learning process ignores the supplied target label. A recently proposed approach(Khosla et al., 2020) tries to diverge from the norm by utilizing contrastive estimation(Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013) to perform the task of supervised image classification. 
In a two step process, it first computes an ideal embedding space using positive(images of the same category) and negative(images from other categories) samples. After learning the embedding function, it then trains a traditional classifier(based on cross-entropy loss) on the computed embeddings. The final response however, is yet again an answer to the query ‘Which category does this image belong to?’ conditioned only on the input image. As we noted from the above discussion, the existing methods do not provide us with an appropriate way to model supervised predictions conditioned on images and labels. Specifically, allowing us model the query ‘Is this image of a cat?’. As a result, we propose a novel architecture termed Indicator Neural Networks(INN), which we introduce in the subsequent section. 3 METHOD We consider a random image-label pair as (x, ỹ). We represent a deep neural network, fθ with learnable parameters θ. Let ỹ represent a one-hot encoded vector of a randomly sampled category. To infer the ground-truth category for an input image, all pairings of image and class categories are required to be queried. The class label corresponding to which a largest response is recorded, it can be assigned as the predicted category for the displayed image. Assuming there are Y ′ unique labels in the data, this would imply Y ′ queries for obtaining the predicted category for one image. We model this approach using INNs, fθ(x, ỹ). The naming is motivated by indicator functions (1ỹ=y) as for a single input of image and label, the aim of the model is to predict fθ(x, ỹ| y) = ŷ = { 1, if ỹ = y 0, otherwise. (1) where, y is the correct label corresponding to x. Realistically an INN will output ŷ ∈ [0, 1]. 3.1 INN ARCHITECTURE We break down fθ into its components which comprises of an image encoder, label encoder and predictor denoted respectively as: fθ1(x) = z ∈ Rd, fθ2(ỹ) = ψ ∈ Rd, fθ3(z, ψ) = ŷ ∈ [0, 1]1. (2) Here, d represents the dimensions of the embedded features. z and ψ are image and label encodings respectively. Note, that for generating z the input ỹ is irrelevant, and similarly for ψ the input image doesn’t matter. The predictor utilises z and ψ to generate the joint image-label representation h = z ◦ ψ ∈ Rd. ◦ is the element-wise multiplication. It then utilizes h to make the final linear classification decision. Figure 2 shows the pipeline as described above alongside last layers of a traditionally trained model for a visual comparison. Hypothesis: To have a better understanding of what the model is performing under the hood, we can consider ψ comparable to a 1d attention map. As a result, ψ will magnify or diminish certain features in z to produce a refined h. We suspect that this reduces the burden of image encoder to produce strong category discriminative features and allows the network to attend to larger regions of the input image. But what stops image encoder from focusing on irrelevant regions in the input image? To answer it, we have to change the perspective with which we observe h. We can also view h as a non-uniformly scaled label embedding(ψ scaled by z). Predictor is necessarily a linear classification head and for it to function appropriately, z extracted from different images of the same category should be similar. As, this will allow the predictor to learn meaningful classification boundaries. As an example, the image encoder will seek common characteristics in all the images of the category dog. 
3.2 INN TRAINING To train the INN, we utilise positive and negative pairings of images and labels. The target of the model is to predict no(0) for an incorrect pairing whereas, yes(1) for a correct one. For a batch of correctly paired input data(sized b), we first extend the batch by concatenating randomly generated incorrect pairings to it. If N is the desired number of incorrect pairings per image per batch, the the resulting size of the input batch after the concatenation operation will be (N + 1) × b. By applying the i.i.d assumption for image-label pairs, we can write the empirical log-likelihood which the network aims to maximize as: log(P (Ŷ |X, Ỹ ; θ)) =⇒ log(Πb×(N+1)i=0 P (ŷi|xi, ỹi; θ)) =⇒ b×(N+1)∑ i=0 log(P (ŷi|xi, ỹi; θ)) (3) Alternatively, in terms of loss, for a single image(x), input query label(ỹ) and ground-truth class label(y) the corresponding loss is denoted as L(fθ(x, ỹ), 1ỹ=y). We employ binary cross-entropy for the implementation of loss. We extend the loss for a single image to the entire dataset as, L(X, Y ) = 1 |X| ∑ (x, y)∈(X, Y ) { 1 K1 L(fθ(x, y), 1) + 1 K2 i<N∑ ỹi∈Y ′−{y} L(fθ(x, ỹi), 0) } (4) 3.2.1 COMPARISON TO TRADITIONAL TRAINING It is relevant to point out the differences between an INN and traditional mode of training. 1. Traditionally, the networks designed for supervised classification maximise the likelihood P (Y |X; θ). In our case, the predictions are conditioned both on the input image and the randomly supplied target. 2. Negative labels are involved indirectly in the loss computation(cross-entropy) due to the softmax operation (Goodfellow et al., 2016, Chapter 6.2.2.3). The supplied target corresponds to the correct label and the resulting contribution to the loss is from the output unit corresponding to this target label. In our framework, the negative classes(stemming from incorrect pairings) are directly involved into the loss computation as we explicitly provide a dedicated target for them. 3. Backpropagating gradient ∂L∂h ∂h ∂z for the image encoder branch is scaled by ψ due the nature of bi-linear operation. Similarly for label encoder, the gradients are scaled by z. This aspect allows the model to eventually learn compatible representations to make the final prediction. 3.3 INN INFERENCE For inferring the class label of an input image x, we select the input label which yields the largest response. Formally, ŷ = arg max ỹ fθ(x, ỹ) ∀ỹ ∈ Y ′ (5) 4 RELATED WORK Two-stream models have been deployed successfully for the tasks of action recognition(Simonyan & Zisserman, 2014; Feichtenhofer et al., 2016), video classification(Wang et al., 2018), fine-grained image classification(Lin et al., 2015), multi-label image classification(Yu et al., 2019) and aerial scene classification(Yu & Liu, 2018) to name a few. Apart from the evident difference in the application of these models, the differences lie in the choice of inputs and the function for fusing the 2 stream outputs. Many approaches have been proposed which utilize labels as auxiliary inputs in image classification (Weston et al., 2010; Frome et al., 2013; Akata et al., 2016; Sun et al., 2017), text classification (Weinberger & Chapelle, 2009; Guoyin Wang, 2018; Dong et al., 2020), and text recognition(Rodriguez-Serrano et al., 2015). In computer vision, these approaches rely on a language model(Mikolov et al., 2013) trained on external data to obtain label embeddings. 
The main focus of these approaches (Frome et al., 2013; Gang Wang & Forsyth, 2009; Wang & Mori, 2010; Akata et al., 2016) is to use the pre-learnt embeddings to enforce high similarity between image representations of contextually similar categories. These methods are targeted towards zero-shot learning as they rely on enforced similarities to detect novel image categories. As opposed to the existing line of work, we use one-hot encodings as input to our classifier which removes the requirement to utilize any external data. Also, we work without explicitly enforcing similarity constraints on learnt embeddings. In our training we utilize negative pairing of images and labels. This idea is based on the principle of noise contrastive estimation(Gutmann & Hyvärinen, 2010). SCL(Khosla et al., 2020) also follows this direction to learn meaningful embeddings in their classification approach. Their positive and negative samples consists of images from same and different categories respectively. In contrast, we consider the correctly paired image-label combinations as positives and incorrectly paired image-labels as negatives. Also, ours is a single stage end-to-end differentiable training routine. In INNs, we can assign to label encodings the role of a 1d attention map(Xu et al., 2015). For image classification, the existing approaches based on attention(Wang et al., 2017; Woo et al., 2018; Hu et al., 2018; Bello et al., 2019; Jetley et al., 2018) introduce spatial or channel-wise attention at different depth of a traditional neural network. In contrast to our proposed model, this modification is made to the image encoder. We can easily replace INN’s image encoder with the one equipped with such an attention mechanism. This will incorporate a dual attention mechanism at the level of label fusion and image embedding. However, INN depict one of the simplest ways of modelling the pursued query structure and it is this formulation which gives rise to attention. Attention based approaches as mentioned above focus on answering the query ’What category does the image belong to?’. Moreover, we focus our work to compare different approaches for modelling the classification task rather than different mechanism of performing a traditional classification task. 5 IMPLEMENTATION DETAILS Datasets: Throughout our paper, we refer to a size of a dataset for the number of unique categories it contains. For small datasets we use CIFAR-10, STL-10, BMW-10(Ultra fine-grain cars dataset), CUB-20(formed using 20 categories of CUB-200-2011), and Oxford-IIIT Pets. Study involving larger dataset utilizes CUB-200. Table provided in appendix A.2 shows the common statistics of the utilized datasets. Architectures: Here we provide brief details of selected baselines and INN. All the models are trained from scratch to provide an even ground for comparison. Detailed hyper-parameters are provided in appendix D. • Baseline-Traditional(B-T): We’ve selected Resnet-18(He et al., 2015) trained with categorical cross-entropy loss as our traditional baseline. It is a widely popular architecture and portrays the standard manner of training an image classifier(Khosla et al., 2020; Tan & Le, 2019). Evaluation with a VGG-11(Simonyan & Zisserman, 2015) model is shared in appendix A.4. • Baseline-Multi-Label(B-ML): We train the Resnet-18 as a multi-label classifierNam et al. (2014). Each of the Y ′ output units is treated independently with its own binary crossentropy computation. 
This allows us to use Y ′ − 1 output units as negative targets in training. • Supervised Contrastive Learning(SCL)(Khosla et al., 2020): In a much recently proposed approach, the authors make use of contrastive loss based supervised representation learning. As the second step, a linear classifier is trained on top of learnt representations by employing standard cross entropy loss. We train Resnet-18 using the official code1. 1https://github.com/HobbitLong/SupContrast • INN: We describe the implementation details of the different components of an INN below. – Image Encoder: We use a Resnet-18 without the fully connected final layer. – Label Encoder: We use a 2-layered MLP with no activation(see appendix C.1 for an ablation with activations). The number of units per layer are d/2 and d.2 – Predictor: z and ψ are combined to form h using element-wise product. h is then connected to the output units which forms the fully-connected final layer for prediction. 6 EXPERIMENT: WHAT DOES THE NETWORK SEE? Grad-CAM(Selvaraju et al., 2017) is an approach for interpreting the predictions of a network by qualitatively assessing the identified salient regions in the input image. It utilises the gradient of classification output w.r.t. feature map to generate coarse heatmaps, highlighting important spatial locations in the input image. Recently, Adebayo et al. (2018) assessed different approaches for interpreting a network’s prediction. As per their finding, Grad-CAMs generate meaningful heat maps and passed their meticulously constructed sanity tests. Grad-CAM has been utilised by many approaches (Yun et al., 2019; Woo et al., 2018) to emphasize on attended regions by the network. We use Grad-CAM for similar purpose and perform a qualitative and quantitative comparison w.r.t baselines. Quantitative analysis: Figure 3 shows the heatmaps produced for sample input images for the baseline and INN models. We can notice the significant difference in the spatial spread of salient regions. Comparing the baselines we observe the larger spread on heatmap for B-ML than B-T and SCL. The heatmaps generated for SCL and B-T appear to be localized to highly distinguishable regions. On the other hand, the visuals indicate INN to be looking at a wider region for making a label specific prediction. Qualitative analysis: To quantify the salient regions we scale the heatmaps between 0 and 1. We consider pixels with values greater than t = 0.5 as salient. We use the training set for this comparison. Since we are focused on assessing how the different attended regions vary across methods, the utilization of training data does not restrict us from this goal and moreover, provides us with a larger overlap of accurately predicted samples for computing the salient regions. Table 1 contain the proportion of an image on an average considered salient as per Grad-CAM. The results are in-line to qualitative assessments we made. For majority of the datasets B-ML and INN produce larger salient regions of the input image. We do not state that focusing on larger regions is 2Overall, INN introduces approximately d× d/2 additional parameters. For Resnet-18, d = 512 beneficial as compared to more focused distinguishable features. We only aim to support our hypothesis behind the working of an INN. As per our assumption, we hypothesized that the production of disjoint representations z and ψ allows for less discriminative features z. 
Here we interpreted increase in spatial spread of saliency as producing less discriminative features thereby supporting our hypothesis. 7 EXPERIMENT: IMAGE CLASSIFICATION We evaluate the performance of INNs against small datasets(Y ′ < 50). To train INNs, we use K1 = K2 = 1 as the value of scaling constants in equation 4. Results: The corresponding results reported in table 2 highlight the effectiveness of INNs. There are four key observations to be made. Firstly, B-T and B-ML show peculiar trend across datasets. In STL-10, B-ML outperforms B-T, we hypothesize that as the predictions are based on a larger input image region which proves beneficial where categories are visually dissimilar. Consequently, for fine-grained visual classification datasets, where the categories are highly similar, B-T performs better. Secondly, there is a significant difference in performance of the baselines and INN(N=9) for majority of the datasets. For CIFAR-10, the results are comparable. We believe that the small size of the input image does not provide much room for improvement. To verify this, we conduct an experiment in appendix A.5 with images of STL-10 resized to 32× 32. We observe a trend of limited improvement for resized STL-10 as we did for CIFAR-10, which supports our theory. Thirdly, as the value of N increases the performance of INN increases. We believe this is a direct consequence of providing more negative label examples for a given input image during training. By providing many more samples, the network can learn better(more compatible) representations. Lastly, INN out performs contrastive learning based approach, SCL. For CUB-20 and Pets, we expect further improvement in the performance of INN as the value forN is smaller than the maximum allowed for these datasets. 8 EXPERIMENT: IMPORTANCE OF z AND ψ To understand the relevance of z and ψ, we train a linear classifier on top of z in the traditional manner using multi-class cross entropy loss. We compare the accuracy of the model obtained with that of INN. This will help us understand the nature of z as well as improvements made by ψ. Implementation details: Using the train split of the data we gather ztrain from fθ1. Note, that the input ỹ chosen is irrelevant for producing z. Next, we train a multi-class logistic regression classifier using stochastic gradient descent on ztrain, ytrain. Additional training details are shared in appendix D.8. For inference, we pass the ztest to the learnt classifier and record the predicted class. The INN models selected for extracting z corresponds to INN(N = 9) in table 2. Results: Table 3 shows the performance of a classifier trained on top of z in comparison to INN(N = 9). We observe that for image classification datasets of CIFAR-10 and STL-10, the classification performance of the two approaches is highly comparable. However, we observe significant differences for the fine-grained visual classification datasets. We believe that due to high visual dissimilarity between categories in CIFAR-10 and STL-10, obtained z is sufficient to perform the task of classification. However, in fine-grained datasets since the categories are quite visually similar, ψ plays an important role in further refining the representations. These observations are inline to our hypothesis behind the working of the model. To further highlight the nature of z and ψ we perform additional experiments in appendix C.2. 
9 EXTENSION TO LARGER DATASETS
So far, we have observed that the approach of utilizing labels as an additional cue allows us to perform the task of multi-class classification. However, the datasets considered only included a few unique categories. In this section, we reflect upon the shortcomings of adopting our pursued approach and, subsequently, the failures of INNs.
• For smaller datasets, the larger the value of N, the higher the classification accuracy. If we extend this logic to larger datasets such as ImageNet (Deng et al., 2009), the best value of N will be close to 1000. Using a traditional batch size (b) of 128 will push the effective batch size to 128,000, larger than the largest considered by large mini-batch training methods (Goyal et al., 2017). To counter such large values of N, one can significantly reduce b, which in turn will extend the training time from days to months. In order to draw relevant conclusions in a reasonable time frame, we limit the discussions in this section to CUB-200, which contains 200 unique categories.
• The latent dimension plays an important role in the predictive performance. We conducted experiments on CUB-200 and CUB-20 by varying the latent dimension of the model among 64, 128, 512 and 1024, and observe that the impact is larger for CUB-200 than for CUB-20. The details of the corresponding experiment are described in appendix C.3.
• The large imbalance of positive and negative samples arising as a result of increasing N can destabilize INN training. For similar reasons, we observe B-ML training to collapse as well. We can balance the weights for positive and negative targets by adjusting their contribution to the loss; however, we find that this approach impedes INN performance. As an alternative, instead of training an INN from scratch on larger values of N, one can initialize the weights from an INN trained on a smaller value N′, where N′ < N. By doing this, we find that INN(N) not only surpasses the accuracy of INN(N′) but also performs comparably to the baseline (a minimal sketch of this warm-start initialization follows this list). The corresponding experimentation details and results are provided in appendix C.4.
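The warm-start idea from the last bullet can be sketched as follows. The tiny model below is only a stand-in that mirrors the INN structure (element-wise product of image and label embeddings followed by a predictor); the real point is that the architecture does not depend on N, so a checkpoint trained with N′ loads directly into a model trained with a larger N. Class and file names are illustrative.

```python
import torch
import torch.nn as nn

# Toy stand-in for an INN with image encoder, label encoder and predictor.
class TinyINN(nn.Module):
    def __init__(self, d=8, num_classes=10):
        super().__init__()
        self.image_encoder = nn.Linear(32, d)
        self.label_encoder = nn.Linear(num_classes, d)
        self.predictor = nn.Linear(d, 1)

    def forward(self, x, y_tilde):
        h = self.image_encoder(x) * self.label_encoder(y_tilde)  # element-wise product
        return torch.sigmoid(self.predictor(h)).squeeze(-1)

# 1) Train INN(N') and save its weights (training loop omitted here).
torch.save(TinyINN().state_dict(), "inn_n9.pt")

# 2) Warm start INN(N): same architecture, so the N' checkpoint loads directly; training then
#    continues with the larger N and a reduced learning rate (cf. appendix D.6).
model = TinyINN()
model.load_state_dict(torch.load("inn_n9.pt"))
```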
10 DISCUSSION & CONCLUSION
As opposed to the traditional approach, we explored the applicability of a target-driven method. Specifically, we modelled the question ‘Does the given image belong to category ỹ?’. We showed that it is possible to tackle the multi-class classification problem from a non-traditional perspective. Our aim was not to show that the pursued approach is better; rather, we aimed to explore and highlight the pros and cons of this unexplored paradigm. Our approach adapts the classical one-vs-rest approach to a modern deep learning setting. To achieve this goal, we introduced INNs, which rely on a pair of input image and target label to produce a response. By inferring exhaustively with all the target categories we arrive at the final decision. Our study involving class activation maps revealed that INNs utilize much larger regions of the input image to generate features. We hypothesize that the imposed independence of image embeddings and labels allows the image encoder to attend to larger regions rather than the highly discriminative features of traditional approaches. We also explored the scenarios where the learned image features are adequate to learn a traditional classifier on top. This observation was made for cases where the categories are visually dissimilar. Label embeddings refine the coarse image representations immensely for fine-grained tasks. By pitting INNs against strong baselines we were able to highlight the strength of our adopted approach in comparison. The INNs outperformed the baselines on all the datasets (Y′ < 50) considered for image classification and fine-grained image classification. Additional experiments on out-of-distribution (OOD, appendix C) and label embedding (appendix B) analysis help broaden our understanding of the one-vs-rest setting. The OOD analysis shows that INN performs comparably to the contrastive learning based SCL. An indicative qualitative result on the learnt label embeddings shows that similar categories often have nearby label embeddings. On the downside, we witnessed the difficulties of extending the method to larger datasets. We consider the dependency on the latent dimension and on N to be the main reasons for this limitation. To make the approach scalable, we believe constructing a smarter negative sampling approach will be the direction moving forward.
We see numerous avenues for future research. Our proposed direction of training a neural network is comparable to classical one-vs-rest approaches (Sánchez et al., 2013). Due to the sudden outburst and adoption of deep learning approaches, the classical one-vs-rest direction has been phased out. Covering and comparing all aspects of a traditionally trained neural network, which have evolved over the past years, in a single work is not feasible. As a result, there is a multitude of directions for adopting a one-vs-rest approach as devised in this work. Some directions include, but are not limited to, object detection (Ren et al., 2015), image segmentation (Chen et al., 2018) and anomaly detection (Chandola et al., 2009). Our main focus will be to extend our experimentation theme (and not just the INN) to these problems and analyse its subsequent impact. We will publicly share the source code supplied in the supplementary material to facilitate brisk research.

A APPENDIX
A.1 NOTATIONS
A.2 DATASET STATISTICS
A.3 GRAD-CAM VISUALIZATIONS
We provide more visualisations to compare the recognised salient regions across baselines in figure 4.

A.4 EXPERIMENT: VGG IMAGE ENCODER
In this section we replace the image encoder of the INN with a VGG-11 (with batch normalisation) model. For an INN, we use the features from the last convolutional block after an adaptive average pooling.
Results: Table 6 shows that the VGG-based INN outperforms the baselines by a large margin. For CIFAR-10, we suspect that, similar to the Resnet-based INN, the small size of the input image restricts the added advantage of using the target-driven approach.
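A minimal sketch of the encoder swap described in A.4 is shown below, using torchvision's VGG-11 with batch normalisation: the convolutional blocks are kept, an adaptive average pool produces a flat feature vector, and the fully connected classifier head is discarded. The wrapper class name is illustrative, and the weights=None argument assumes a recent torchvision (older releases use pretrained=False).

```python
import torch
import torch.nn as nn
from torchvision.models import vgg11_bn

class VGGImageEncoder(nn.Module):
    """Image branch for an INN: VGG-11(bn) conv blocks + adaptive average pooling, no classifier head."""
    def __init__(self):
        super().__init__()
        self.features = vgg11_bn(weights=None).features  # trained from scratch, as in the paper
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        h = self.pool(self.features(x))  # (B, 512, 1, 1)
        return torch.flatten(h, 1)       # (B, 512) image embedding z

z = VGGImageEncoder()(torch.randn(2, 3, 96, 96))  # e.g. an STL-10-sized input
print(z.shape)  # torch.Size([2, 512])
```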
A.5 EXPERIMENT: RESCALED STL-10
For this experiment, we downscale the STL-10 images to 32×32 to bring them down to the same size as CIFAR-10. For training, we use hyper-parameters identical to those used for training the model on the unaltered STL-10 dataset.
Results: We notice in table 7 that the INN performance is quite similar to that of the baseline when the image size is small. A similar trend was observed in the case of CIFAR-10 as well. We believe that the INN and the baseline both utilize an equal portion of the input image to generate representations, which leads to similar accuracy. (CUB-20, referenced in the tables, is created using 20 categories of CUB-200.)

B EXPERIMENT: LABEL EMBEDDINGS, ψ
We have witnessed that INNs rely on ψ and z to make a correct prediction. Also, depending on the content of the dataset, ψ can play a vital role in further improving the performance. In this experimental setup, we aim to explore more about ψ; specifically, how different encoded labels relate to each other. We believe that the visual content of images drives the learning of label embeddings, i.e. similar visual categories have nearby label representations. Though the results presented here are qualitative in nature, we believe they provide adequate evidence to back our claim.
Implementation details: We select INN(N = 9) for CIFAR-10 in this study. We generate ψY′ = {fθ2(ỹ) | ∀ỹ ∈ Y′}. Next, we compute the L2 distance between every pair of entries in ψY′ as a measure of similarity. In table 8 we have reported the nearest matching label (smallest distance) for every category in the dataset.
Results: Though not perfect, for many source categories the nearest matching categories tend to be visually similar, for example the pairs truck-car and bird-airplane. However, we also see some non-apparent pairings such as deer-car and frog-car.

C EXPERIMENT: OUT-OF-DISTRIBUTION DETECTION
In this section, we examine the robustness of the learnt classifiers for detecting out-of-distribution (OOD) images. The standard approach is to utilise the predicted confidence to distinguish in- and out-of-distribution data (Hendrycks & Gimpel, 2017). Following this framework, we report the AU-ROC for models trained on the chosen datasets while tested on the out-of-distribution datasets LSUN (Yu et al., 2015), Tiny ImageNet (Le & Yang, 2015) and Fashion-MNIST (Xiao et al., 2017). The out-of-distribution datasets are standardised using the mean and standard deviation of the in-distribution datasets. The INN models chosen correspond to INN(N = 9) in table 2.
Results: The results reported in table 9 show that SCL and INN outperform the traditional baselines by a large margin for the majority of the datasets. The comparatively lower performance of INN for CUB-20 and Pets can be attributed to its limited training. To recall, the corresponding INNs were trained with N = 9, and we expect OOD performance to improve as the value of N used in training is increased.
(Values from the accompanying table: 90.81%, 90.53%, 86.5%, 90.02%, 90.76%.)
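The confidence-based OOD scoring used above (Hendrycks & Gimpel, 2017) reduces to taking the maximum per-class response on each image and computing AU-ROC against the in-/out-of-distribution label. A minimal sketch follows; the score matrices are synthetic stand-ins, and for an INN the per-class responses would be the yes-probabilities over all candidate labels.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Stand-in per-class responses; rows are images, columns are the Y' candidate labels.
in_scores = rng.uniform(0.4, 1.0, size=(500, 10))   # in-distribution test images
out_scores = rng.uniform(0.0, 0.7, size=(500, 10))  # OOD images (e.g. LSUN, Tiny ImageNet)

# Maximum response per image is the confidence score (Hendrycks & Gimpel baseline).
conf = np.concatenate([in_scores.max(axis=1), out_scores.max(axis=1)])
labels = np.concatenate([np.ones(len(in_scores)), np.zeros(len(out_scores))])  # 1 = in-distribution

print("AU-ROC:", roc_auc_score(labels, conf))
```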
C.1 EXPERIMENT: DIFFERENT ACTIVATIONS FOR LABEL ENCODER
In the main paper, the label encoder branch consisted of a 2-layered MLP with no activation. In this experiment, we apply the following 4 activations to the label encoder units and train INN(N = 9, b = 32) on the STL-10 dataset.
1. RELU (Glorot et al., 2011)
2. Leaky-RELU (Maas et al., 2013)
3. Sigmoid
4. Tanh
Results: The results indicate marginally better accuracy for RELU and Leaky-RELU. The Tanh and no-activation models closely follow in accuracy. For Sigmoid, the performance is low. Our hypothesis is that, due to the limited scaling nature of the logistic function, the features of z are under-refined. However, more extensive research is required to arrive at a stronger conclusion. We hope that our experiment provides an apt working ground for future research in this direction. To qualitatively assess the contributing regions of the image across activations, we provide Grad-CAM visualisations in figure 5. RELU, Leaky-RELU, Tanh and no activation are able to rely on relevant regions of the input image while making the prediction. In the case of Sigmoid, we notice disorganised regions of attention.

C.2 EXPERIMENT: COMPATIBILITY OF ψ & z
To further highlight the fact that INNs do learn compatible representations and rely on both ψ and z to make an accurate prediction, we utilise the following 4 variations of ỹ for evaluating test accuracy on STL-10:
1. ỹ = y: We provide the correct class label as input.
2. ỹ ∈ Y′ − {y}: We provide a random incorrect class label as input.
3. ỹ = 1Y′: All the values are set to 1 in the input label vector.
4. ỹ = 0Y′: All the values are set to 0 in the input label vector.
For evaluation, we record the argmax between the yes-no responses for each individual query. If the representations are compatible, we should see a higher number of yes responses for case 1 than for all the other variations.
Results: Table 11 (yes rates of 85.2%, 0.004%, 0.0% and 0.0% for the four cases, respectively) shows that the label encoding ψ plays a vital role in the classification of the input images. Only when the image is paired with its corresponding ground-truth ỹ does the INN predict yes the majority of the time. For ỹ corresponding to an incorrect class, the number of samples predicted as yes is quite insignificant. For the other two cases, the INN never makes a yes prediction. This shows that INNs do rely on a compatible z and ψ to generate a correct class prediction.
Visualisation: To further highlight the compatibility of ψ and z we generate a UMAP (McInnes et al., 2018) plot. UMAP is a non-linear dimension reduction technique which has been utilised for visualising high-dimensional data. Figure 6 corresponds to the joint representations generated for training images (drawn as blobs) and a single test image of the STL-10 dataset (shown as a star). For generating the joint representations corresponding to the training set, htrain, the ground-truth ytrain are utilised. For generating the test htest, we provide every ỹ ∈ Y′; consequently, 10 points are generated for the single test image. The ground-truth label of the test image corresponds to airplane (integer label 0). The figure shows that only when the input label is the one-hot encoded vector corresponding to the ground-truth label airplane does h for the test image overlap with the training cluster (red dashed box). For other input labels, the test sample is further away from its corresponding ỹ cluster.
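A minimal sketch of the compatibility check in C.2 is given below, assuming an INN that maps an image batch and a label vector to a yes-probability. The toy model only illustrates the evaluation logic (building the four ỹ variants and counting yes responses); it is not the trained network, and the 0.5 threshold stands in for the yes/no argmax.

```python
import torch
import torch.nn.functional as F

def yes_rate(model, x, y_tilde, threshold=0.5):
    """Fraction of images for which the INN answers 'yes' to the supplied label vector."""
    with torch.no_grad():
        p = model(x, y_tilde.expand(x.size(0), -1))  # yes-probability per image
    return (p > threshold).float().mean().item()

# Toy stand-in for a trained INN: answers 'yes' only to the one-hot vector of class 0.
toy_inn = lambda x, y: torch.sigmoid(10 * (y[:, 0] - y[:, 1:].sum(dim=1)) - 5)

num_classes, x = 10, torch.randn(8, 3, 96, 96)  # batch of images (ignored by the toy model)
correct = F.one_hot(torch.tensor(0), num_classes).float().unsqueeze(0)  # y-tilde = y
wrong = F.one_hot(torch.tensor(3), num_classes).float().unsqueeze(0)    # random incorrect class
all_ones, all_zeros = torch.ones(1, num_classes), torch.zeros(1, num_classes)

for name, y_t in [("correct", correct), ("incorrect", wrong),
                  ("all-ones", all_ones), ("all-zeros", all_zeros)]:
    print(name, yes_rate(toy_inn, x, y_t))
```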
C.3 EXPERIMENT: VARYING HIDDEN DIMENSION, d
In this experiment we aim to determine the impact of the latent dimension on the training of an INN. We conduct this experiment on the CUB-200 and CUB-20 datasets with N = 1. The latent dimension is selected from the values {64, 128, 512, 1024} for a Resnet-18 based INN.
Results: The results in figure 7 indicate the relevance of the dimensionality of the latent representations. The impact of the latent dimension is larger for CUB-200 than for CUB-20. For CUB-200 the accuracy increases with increasing dimensionality, whereas for CUB-20 the performance saturates roughly around d/Y′ = 10 and decreases later on. The results indicate that for training on larger datasets we are required to employ networks with comparatively larger latent dimensions.

C.4 EXPERIMENT: CLASSIFICATION WITH CUB-200
In order to apply INN to CUB-200 we replace the Resnet-18 image encoder with a Resnet-50. The latent dimension is 2048 for Resnet-50. The baseline for this study is B-T. For B-ML, we found that the network doesn’t train and obtains an accuracy of 0.5%, which is random chance. Even though an INN trains for small values of N, it fails to match this performance for larger values. In order to enable training for an INN when N is large, we initialise the weights from INN(N′), where N′ < N. For example, we first train the model with N′ = 9 from scratch, and for the subsequent fine-tuning we select the value N = 15. If we wish to train on a larger value of N such as 24, we initialize the weights from the previously obtained INN(N = 15). In this study, we select N ∈ {15, 24, 31, 41, 51} and N′ = 9.
Results: Figure 8 shows the increase in accuracy for an INN with increasing N when applying iterative fine-tuning. The small increment in accuracy at each step is due to the proportionally smaller increment of N. N = 41 is roughly 20% of the categories of CUB-200. We expect the INN to match and even surpass the baseline with higher values of N. However, we did observe a large jump in training time due to the lowering of b to accommodate the increasing N. The per-epoch time increases from 32 seconds for INN(N = 9) to 300 seconds for INN(N = 41).

D TRAINING DETAILS
We first cover the B-T, B-ML and INN training hyper-parameters, and then move on to the SCL training hyper-parameters. The baselines (B-T, B-ML) are referred to as N=0 in this section. The deep learning framework used is PyTorch (Paszke et al., 2017) version 1.2.

D.1 CIFAR-10
• Training pre-processing: Random(cropping(32×32, padding=4), rotation(±15), horizontal flipping), normalisation(train mean, std. dev).
• Test pre-processing: Normalisation(train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 75, 150, 225, 275
• Batch sizes: (N=0, b=256), (N=1, b=128), (N={3, 7, 9}, b=64)

D.2 STL-10
• Training pre-processing: Random(cropping(96×96, padding=4), rotation(±15), horizontal flipping), normalisation(train mean, std. dev).
• Test pre-processing: Normalisation(train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 150, 200, 250, 300
• Batch sizes: (N=0, b=128), (N=1, b=128), (N=3, b=64), (N={7, 9}, b=32)

D.3 BMW-10
• Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std. dev).
• Test pre-processing: Center cropping(224×224), normalisation(train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 150, 225, 300
• Batch sizes: (N={0, 1, 3, 7}, b=32), (N=9, b=16)

D.4 CUB-20
• Categories: Black footed Albatross, Laysan Albatross, Sooty Albatross, Groove billed Ani, Crested Auklet, Least Auklet, Parakeet Auklet, Rhinoceros Auklet, Brewer Blackbird, Red winged Blackbird, Rusty Blackbird, Yellow headed Blackbird, Bobolink, Indigo Bunting, Lazuli Bunting, Painted Bunting, Cardinal, Spotted Catbird, Gray Catbird, Yellow breasted Chat. These are the first 20 categories as they appear in torchvision’s (Marcel & Rodriguez, 2010) implementation of CUB-200.
• Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std. dev).
• Test pre-processing: Center cropping(224×224), normalisation(train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 150, 250, 300
• Batch sizes: (N={0, 1, 3, 7, 9}, b=32)

D.5 PETS
• Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std. dev).
• Test pre-processing: Center cropping(224×224), normalisation(train mean and std. dev).
• Epochs: 350
• Start learning rate: 0.1
• Learning rate drop factor: 0.2
• Learning rate drop epochs: 150, 225, 300
• Batch sizes: (N=0, b=128), (N=1, b=128), (N={3, 7}, b=64), (N=9, b=32)
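The per-dataset schedules above (and D.6 below) all follow the same step-decay pattern and map directly onto a PyTorch optimizer plus MultiStepLR scheduler. Below is a sketch for the D.1 CIFAR-10 settings (start LR 0.1, drop factor 0.2 at epochs 75/150/225/275 over 350 epochs); the choice of SGD with momentum and weight decay is an assumption, as the optimizer itself is not specified in the text, and the linear model is a placeholder for the actual network.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import MultiStepLR

model = nn.Linear(512, 10)  # placeholder for the actual INN / baseline network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)  # optimizer settings assumed
scheduler = MultiStepLR(optimizer, milestones=[75, 150, 225, 275], gamma=0.2)  # D.1 schedule

for epoch in range(350):  # 350 epochs, as in D.1
    # ... one training pass over the (N+1) x b batches would go here ...
    scheduler.step()
```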
D.6 CUB-200
• Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std. dev).
• Test pre-processing: Center cropping(224×224), normalisation(train mean and std. dev).
• N=0: Epochs=350, start learning rate=0.1, drop factor=0.2, drop epochs=[125, 200, 250, 300], batch size=128
• N=9: Epochs=500, start learning rate=0.1, drop factor=0.2, drop epochs=[100, 200, 300, 400, 450], batch size=64
• N={15, 24, 31}: Epochs=300, start learning rate=0.005, drop factor=0.2, drop epochs=[100, 200, 250], batch size=[32, 20, 16]
• N=41: Epochs=300, start learning rate=0.0025, drop factor=0.2, drop epochs=[100, 200, 250], batch size=12
• N=51: Epochs=300, start learning rate=0.001, drop factor=0.2, drop epochs=[100, 200, 250], batch size=10

D.7 SCL TRAINING
Image pre-processing steps are identical to those mentioned in the corresponding previous subsections. Common parameters: temperature=0.1, decay=0.0001, cosine=True, and epochs=500.
• CIFAR-10 – Learning rate: 0.05, Batch size: 256
• STL-10 – Learning rate: 0.5, Batch size: 256
• BMW-10 – Learning rate: 0.1, Batch size: 128
• CUB-20 – Learning rate: 0.5, Batch size: 128
• Pets – Learning rate: 0.1, Batch size: 128

D.8 LINEAR CLASSIFICATION USING z
We have used the SGDClassifier provided by the sklearn (Pedregosa et al., 2011) library. Apart from the loss (loss=‘log’) and tolerance (tol=1e-5), we use the default values to train the model.
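The batch sizes listed throughout appendix D shrink as N grows because each batch of b correctly paired images is extended with N incorrect pairings per image, for an effective size of (N+1) × b (see section 9). Below is a minimal sketch of that extension; the negative-label sampling and tensor shapes are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def extend_batch(x, y, num_classes, n_neg):
    """Pair each image with its true label (target 1) and n_neg random wrong labels (target 0)."""
    # Offsets in [1, num_classes-1] guarantee the sampled labels differ from the true ones.
    neg_y = (y.unsqueeze(1) + torch.randint(1, num_classes, (y.size(0), n_neg))) % num_classes
    all_y = torch.cat([y.unsqueeze(1), neg_y], dim=1).reshape(-1)    # ((n_neg+1)*b,)
    all_x = x.repeat_interleave(n_neg + 1, dim=0)                    # ((n_neg+1)*b, C, H, W)
    targets = torch.zeros(x.size(0), n_neg + 1)
    targets[:, 0] = 1.0                                              # only the true pairing is 'yes'
    return all_x, F.one_hot(all_y, num_classes).float(), targets.reshape(-1)

x, y = torch.randn(4, 3, 32, 32), torch.tensor([0, 1, 2, 3])
xs, ys, ts = extend_batch(x, y, num_classes=10, n_neg=3)
print(xs.shape, ys.shape, ts.shape)  # (16, 3, 32, 32), (16, 10), (16,)
```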
1. How does the proposed Indicator Neural Network (INN) differ from traditional image classification approaches?
2. What is the significance of the statement "the classification task of the image is driven by the input target"?
3. How does the authors' assertion that the INN reduces the burden of the image encoder to produce strong category discriminative features and allows the network to attend to larger regions of the input image hold up?
4. How does the joint image-label representation used in the INN vary depending on the labeling scheme used, and what are the implications of this?
5. What is meant by "visually dissimilar" in the context of the INN's ability to work well with visually dissimilar categories?
6. Is the statement "the learnt label embeddings also mirror the visual similarity across different categories" due to a bias caused by the fact that the INN is designed for verification rather than pure classification?
Review
The authors propose to utilize (image, class label) inputs to a proposed Indicator Neural Network (INN), while the expected binary output is a confirmation (or not) of the adequacy of the paired image to the supplied label. As the proposed model outputs a real-valued response for that adequacy measure, it can be considered the probability that the image belongs to its paired label/category. In other words, an image classification application using an INN.
Questions:
- In 1. INTRODUCTION: You state “Hence, the classification task of the image is driven by the input target.”
o Isn’t this just a “traditional” image classification followed by a final post-processing step, “does the predicted class of the image correspond to the supplied one”?
o Can you detail how this is “driven” by the target supplied to the network?
- In 2.1: You state “We suspect that this reduces the burden of image encoder to produce strong category discriminative features and allows the network to attend to larger regions of the input image.” What drives you to make such an assertion, or is it what occurred in your experiment results?
- This reviewer does not see the major difference from a standard image classification approach (even with the supply of the to-be-verified image class label). Nevertheless, don’t your joint image-label representation and therefore the end results vary depending on the labeling scheme used (and furthermore its encoding)?
- In 6 EXPERIMENT: IMAGE CLASSIFICATION: You state “which proves beneficial where categories are visually dissimilar.” Everything hinges here on the definition of “visually dissimilar”. Is it, in the context of your approach, meant to describe different image categories/labels which relate to the objects contained in the images (as is the case in classification problems), or more global image features (style, for example)?
- In 11 DISCUSSION & CONCLUSION: You state “the learnt label embeddings also mirror the visual similarity across different categories.” Isn’t this due to a bias arising from the fact that, in your approach, the likelihood of the paired label of an image supplied as input to the network is high, as it is a verification task rather than a “pure” classification?
This allows us to use Y ′ − 1 output units as negative targets in training. • Supervised Contrastive Learning(SCL)(Khosla et al., 2020): In a much recently proposed approach, the authors make use of contrastive loss based supervised representation learning. As the second step, a linear classifier is trained on top of learnt representations by employing standard cross entropy loss. We train Resnet-18 using the official code1. 1https://github.com/HobbitLong/SupContrast • INN: We describe the implementation details of the different components of an INN below. – Image Encoder: We use a Resnet-18 without the fully connected final layer. – Label Encoder: We use a 2-layered MLP with no activation(see appendix C.1 for an ablation with activations). The number of units per layer are d/2 and d.2 – Predictor: z and ψ are combined to form h using element-wise product. h is then connected to the output units which forms the fully-connected final layer for prediction. 6 EXPERIMENT: WHAT DOES THE NETWORK SEE? Grad-CAM(Selvaraju et al., 2017) is an approach for interpreting the predictions of a network by qualitatively assessing the identified salient regions in the input image. It utilises the gradient of classification output w.r.t. feature map to generate coarse heatmaps, highlighting important spatial locations in the input image. Recently, Adebayo et al. (2018) assessed different approaches for interpreting a network’s prediction. As per their finding, Grad-CAMs generate meaningful heat maps and passed their meticulously constructed sanity tests. Grad-CAM has been utilised by many approaches (Yun et al., 2019; Woo et al., 2018) to emphasize on attended regions by the network. We use Grad-CAM for similar purpose and perform a qualitative and quantitative comparison w.r.t baselines. Quantitative analysis: Figure 3 shows the heatmaps produced for sample input images for the baseline and INN models. We can notice the significant difference in the spatial spread of salient regions. Comparing the baselines we observe the larger spread on heatmap for B-ML than B-T and SCL. The heatmaps generated for SCL and B-T appear to be localized to highly distinguishable regions. On the other hand, the visuals indicate INN to be looking at a wider region for making a label specific prediction. Qualitative analysis: To quantify the salient regions we scale the heatmaps between 0 and 1. We consider pixels with values greater than t = 0.5 as salient. We use the training set for this comparison. Since we are focused on assessing how the different attended regions vary across methods, the utilization of training data does not restrict us from this goal and moreover, provides us with a larger overlap of accurately predicted samples for computing the salient regions. Table 1 contain the proportion of an image on an average considered salient as per Grad-CAM. The results are in-line to qualitative assessments we made. For majority of the datasets B-ML and INN produce larger salient regions of the input image. We do not state that focusing on larger regions is 2Overall, INN introduces approximately d× d/2 additional parameters. For Resnet-18, d = 512 beneficial as compared to more focused distinguishable features. We only aim to support our hypothesis behind the working of an INN. As per our assumption, we hypothesized that the production of disjoint representations z and ψ allows for less discriminative features z. 
Here we interpreted increase in spatial spread of saliency as producing less discriminative features thereby supporting our hypothesis. 7 EXPERIMENT: IMAGE CLASSIFICATION We evaluate the performance of INNs against small datasets(Y ′ < 50). To train INNs, we use K1 = K2 = 1 as the value of scaling constants in equation 4. Results: The corresponding results reported in table 2 highlight the effectiveness of INNs. There are four key observations to be made. Firstly, B-T and B-ML show peculiar trend across datasets. In STL-10, B-ML outperforms B-T, we hypothesize that as the predictions are based on a larger input image region which proves beneficial where categories are visually dissimilar. Consequently, for fine-grained visual classification datasets, where the categories are highly similar, B-T performs better. Secondly, there is a significant difference in performance of the baselines and INN(N=9) for majority of the datasets. For CIFAR-10, the results are comparable. We believe that the small size of the input image does not provide much room for improvement. To verify this, we conduct an experiment in appendix A.5 with images of STL-10 resized to 32× 32. We observe a trend of limited improvement for resized STL-10 as we did for CIFAR-10, which supports our theory. Thirdly, as the value of N increases the performance of INN increases. We believe this is a direct consequence of providing more negative label examples for a given input image during training. By providing many more samples, the network can learn better(more compatible) representations. Lastly, INN out performs contrastive learning based approach, SCL. For CUB-20 and Pets, we expect further improvement in the performance of INN as the value forN is smaller than the maximum allowed for these datasets. 8 EXPERIMENT: IMPORTANCE OF z AND ψ To understand the relevance of z and ψ, we train a linear classifier on top of z in the traditional manner using multi-class cross entropy loss. We compare the accuracy of the model obtained with that of INN. This will help us understand the nature of z as well as improvements made by ψ. Implementation details: Using the train split of the data we gather ztrain from fθ1. Note, that the input ỹ chosen is irrelevant for producing z. Next, we train a multi-class logistic regression classifier using stochastic gradient descent on ztrain, ytrain. Additional training details are shared in appendix D.8. For inference, we pass the ztest to the learnt classifier and record the predicted class. The INN models selected for extracting z corresponds to INN(N = 9) in table 2. Results: Table 3 shows the performance of a classifier trained on top of z in comparison to INN(N = 9). We observe that for image classification datasets of CIFAR-10 and STL-10, the classification performance of the two approaches is highly comparable. However, we observe significant differences for the fine-grained visual classification datasets. We believe that due to high visual dissimilarity between categories in CIFAR-10 and STL-10, obtained z is sufficient to perform the task of classification. However, in fine-grained datasets since the categories are quite visually similar, ψ plays an important role in further refining the representations. These observations are inline to our hypothesis behind the working of the model. To further highlight the nature of z and ψ we perform additional experiments in appendix C.2. 
9 EXTENSION TO LARGER DATASETS So far, we have observed that the approach of utilizing labels as an additional cue allows to perform the task of multi-class classification. However, the datasets considered only included few unique categories. In this section, we reflect upon the short comings of adopting our pursued approach and subsequently the failures of INNs. • For smaller datasets, larger the value of N , higher is the classification accuracy. If we extend this logic to larger datasets such as ImageNet(Deng et al., 2009), the best value of N will be close to 1000. Using a traditional batch size(b) of 128 will push the effective batch size to 128, 000, larger than the largest considered for large mini-batch training. methods(Goyal et al., 2017). To counter such large values of N , one can significantly reduce b which in turn will extend the training time from days to months. In order to draw relevant conclusions in a reasonable time frame, we limit the discussions in this section to CUB-200 which contains 200 unique categories. • Latent dimension plays an important in the predictive performance. We conducted experiments on CUB-200 and CUB-20 by varying the latent dimension of the model between 64, 128, 512, 1024 and observe that impact is more for CUB-200 than CUB-20. The details of the corresponding experiment are described in appendix C.3. • Large imbalance of positive and negative samples arising as a result of increasing N can destablize an INN training. For similar reasons, we observe B-ML training to collapse as well. We can balance the weights for positive and negative targets by adjusting their contribution to the loss, however, we find that this approach impedes INN performance. As an alternative, instead of training an INN from scratch on larger values of N , one can initialize the weights from an INN trained on a smaller valueN ′, whereN ′ < N . By doing this, we find that not only INN(N ) surpasses the accuracy of INN(N ′) but also performs comparable to the baseline. The corresponding experimentation details and results are provided in appendix C.4. 10 DISCUSSION & CONCLUSION As opposed to the traditional approach, we explored the applicability of a target driven method. Specifically, we modelled the question ‘Does the given image belong to category ỹ’. We showed that it is possible to tackle the multi-class classification problem from a non-traditional perspective. Our aim was not to show that the pursued approach is better, rather, we aimed to explore and highlight the pros and cons of this unexplored paradigm. Our approach adapts classical one-vs-rest approach in a modern deep learning setting. To achieve this goal, we introduced INNs which rely on a pair of input image and target label to produce a response. By inferring exhaustively with all the target categories we arrive at the final decision. Our study involving class activation maps revealed that INNs utilize much larger regions of the input image to generate features. We hypothesize the imposed independence on image embeddings and labels allow the image encoder to tend to larger regions than highly discriminative features from traditional approaches. We also explored the scenarios where learned image features are adequate to learn a traditional classifier on top. This observation was made for cases where the categories are visually dissimilar. Label embeddings refine the coarse image representations immensely for fine-grained tasks. 
By pitting INNs against strong baselines we were able to highlight the strength of our adopted approach in comparision. The INNs outperformed the baselines on all the datasets(Y ′ < 50) considered for image classification and fine-grained image classification. Additional experiments on Out-of-distribution(OOD, appendix C) and label embedding(appendix B) analysis helps to broaden our understanding following a one-vs-rest setting. OOD analysis shows that INN performs comparable to contrastive learning based SCL. An indicative qualitative result on learnt label embeddings show that similar categories often have nearby label embeddings. On the down side, we witnessed the difficulties of extending the method to larger datasets. We consider dependency on latent dimension and N the main reasons for this limitation. To make the approach scalable, we believe, constructing a smarter negative sampling approach will be the direction moving forward. We see numerous avenues for future research. Our proposed direction of training a neural network is comparable to classical one-vs-rest approaches(Sánchez et al., 2013). Due to the sudden outburst and adoption of deep learning approaches, the classical one-vs-rest direction has suddenly phased out. And, to cover and compare all the aspects of a traditionally trained neural network which evolved over the past years in a single work is not feasible. As a result, there are multitude of directions of adopting a one-vs-rest approach as devised in this work. Some directions include but are not limited to object detection(Ren et al., 2015), image segmentation (Chen et al., 2018), anomaly detection(Chandola et al., 2009). Our main focus will be to extend our experimentation theme(and not just the INN) to these problems and analyse its subsequent impact. We will publicly share the source code supplied in supplementary to facilitate brisk research. A APPENDIX A.1 NOTATIONS A.2 DATASET STATISTICS A.3 GRAD-CAM VISUALIZATIONS We provide more visualisations to compare the recognised salient regions across baselines in figure 4. A.4 EXPERIMENT: VGG IMAGE ENCODER In this section we replace the image encoder of the INN with a VGG-11(with batch normalisation) model. For an INN, we use the features from the last convolutional block after an adaptive average pooling. Results: Table 6 shows that VGG based INN outperforms the baselines by a large margin. For CIFAR-10, we suspect that similar to the Resnet based INN the small size of the input image restricts the added advantage of using target driven approach. A.5 EXPERIMENT: RESCALED STL-10 For this experiment, we downscale the STL-10 images to 32×32 to bring it down to the same size as that of CIFAR-10. For training, we use identical hyper-parameters as we did for training the model on unaltered STL-10 dataset. 3Created using 20 categories of CUB-200 Results: We notice in table 7 that the INN performance is quite similar to that of the baseline when the image size is small. Similar trend was observed in case of CIFAR-10 as well. We believe that INNs and the baseline both utilize equal portion of the input image to generate representations, which leads to similar performance in accuracy. B EXPERIMENT: LABEL EMBEDDINGS, ψ We have witnessed that INNs rely on ψ and z to make a correct prediction. Also, depending on the content of the dataset, ψ can play a vital role in further improving the performance. In this experimental set up, we aim to explore more about ψ. Specifically, how different encoded labels relate to each other. 
We believe that the visual content of images drives the learning of label embeddings, i.e. similar visual categories have nearby label representations. Though the results presented here are qualitative in nature, we believe they provide adequate evidence to back our claim. Implementation details: We select INN(N = 9) for CIFAR-10 in this study. We generate ψY ′ = {fθ2(ỹ) | ∀ỹ ∈ Y ′}. Next, we compute L2 distance between every pair of entry in ψY ′ as a measure of similarity. In table 8 we have reported the nearest matching labels(smallest distance) for all the categories in the dataset. Results: Though not perfect, for many source categories, the nearest matching categories tend to be visually similar. For example, the categories truck-car and bird-airplane. However, we also see some non-apparent pairings such as deer-car and frog-car. C EXPERIMENT: OUT-OF-DISTRIBUTION DETECTION In this section, we experiment the robustness of the learnt classifiers for detecting out-ofdistribution(OOD) images. The standard approach is to utilise the predicted confidence in distinguishing in- and out-of-distribution data(Hendrycks & Gimpel, 2017). Following this framework, we report the AU-ROC for models trained on the chosen datasets while tested on out-of-distribution datasets of LSUN(Yu et al., 2015), Tiny ImageNet(Le & Yang, 2015), Fashion-MNIST(Xiao et al., 2017). The out-distribution datasets are standardised using mean and standard deviation of the indistribution datasets. The INN models chosen correspond to INN(N = 9) in table 2. Results: The results reported in table 9 show that SCL and INN outperform the traditional baselines by a large margin for majority of the datasets. The comparatively lower performance of INN for CUB-20 and Pets can be attributed to its limited training. To recall, the corresponding INNs were trained withN = 9, and we expect OOD performance to improve as the values ofN used in training is increased. 90.81% 90.53% 86.5% 90.02 % 90.76% C.1 EXPERIMENT: DIFFERENT ACTIVATIONS FOR LABEL ENCODER In the main paper, the label encoder branch consisted of a 2 layered MLP with no activation. In this experiment, we apply the following 4 activations to the label encoder units and train INN(N = 9, b = 32) on the STL-10 dataset. 1. RELU(Glorot et al., 2011) 2. Leaky-RELU(Maas et al., 2013) 3. Sigmoid 4. Tanh Results: The results indicate maginally better accuracy for RELU and Leaky-RELU. Tanh and no activation based models closely follow the accuracy. For sigmoid, the performance is low. Our hypothesis is that, due to the limited scaling nature of the logistic function, the features of z are under refined. However, more extensive research is required to arrive at a stronger conclusion. We hope that our experiment provides an apt working ground for future research in this direction. To qualitatively assess the contributing regions of the image across activations, we provide GradCAM visualisations in figure 5. RELU, Leaky-RELU, Tanh, and No-activation are able to rely on relevant regions of the input image while making the prediction. In case of Sigmoid, we notice disorganised regions of attention. C.2 EXPERIMENT: COMPATIBILITY OF ψ & z To further highlight the fact that INNs do learn compatible representations and rely both on ψ & z to make an accurate prediction, we utilise the following 4 variations of ỹ for evaluating test accuracy on STL-10: 1. ỹ = y: We provide the correct class label as input. 2. ỹ : ỹ ∈ Y ′ − {y}: We provide a random incorrect class label as input. 3. 
ỹ = 1Y ′ : All the values are set to 1 in the input label vector. 4. ỹ = 0Y ′ : All the values are set to 0 in the input label vector. For evaluation, we record the argmax for each individual query between a yes-no response. If the representations are compatible we shall see a higher number of yes responses for case 1 than all the other variations. 85.2% 0.004% 0.0% 0.0% Results: Table 11 shows that label encoding ψ play a vital role in classification of the input images. Only when the image is paired with its corresponding ground-truth ỹ, INN makes the prediction of yes majority of the time. For ỹ corresponding to an incorrect class, the number of samples predicted as yes is quite insignificant. For the other two cases, INN never makes a yes prediction. This shows that INNs do rely on a compatible z and ψ to generate a correct class prediction. Visualisation: To further highlight the compatibility of ψ and z we generate a UMAP (McInnes et al., 2018) plot. UMAP is a non-linear dimension reduction technique which has been utilised in visualising high dimensional data. Figure 6 corresponds to the joint representations generated for training images(drawn as blobs) and a single test image of the STL-10 dataset(shown as star). For generating joint representations corresponding to the training set, htrain, ground-truth ytrain are utilised. Whereas, for generating test htest, we provide ỹ ∈ Y ′. Consequently, 10 points are generated for a single test image. The ground-truth label of the test image corresponds to airplane(integer label of 0). The figure shows that only when the input label is a one-hot encoded vector corresponding to the ground-truth label airplane, h for the test image overlaps with the training cluster(red dashed box). For other input labels, the test sample is further away from its corresponding ỹ cluster. C.3 EXPERIMENT: VARYING HIDDEN DIMENSION, d In this experiment we aim to determine the impact of latent dimension on the training of an INN. We conduct this experiment on CUB-200 and CUB-20 datasets with N = 1. The latent dimension is selected from the values {64, 128, 512, 1024} for a Resnet-18 based INN. Results: The results in figure 7 indicate the relevance of the dimensions of latent representations. The impact of the latent dimension is more for CUB-200 than CUB-20. For CUB-200 the accuracy increases with increase in dimensionality whereas, for CUB-20, the performance saturates roughly around d/Y ′ = 10 and decreases later on. The results indicate that for training larger datasets we are required to employ networks with comparatively larger latent dimensions. C.4 EXPERIMENT: CLASSIFICATION WITH CUB-200 In order to apply INN to CUB-200 we replace the Resnet-18 image encoder with Resnet-50. The latent dimension is 2048 for Resnet-50. The baseline for this study is B-T. For B-ML, we found that the network doesn’t train and obtains an accuracy of 0.5%, which is of a random chance. Even though, INN trains for small values of N , it fails to match its performance on larger values. In order to enable training for an INN when N is large, we initialise the weights from INN(N ′), where N ′ < N . For example, we first train the model with N ′ = 9 from scratch and for the subsequent fine-tuning we select the value N = 15. If we wish to train on a larger value of N such as 24, we initialize the weights from previously obtained INN(N = 15). In this study, we select N ∈ 15, 24, 31, 41, 51 and N ′ = 9. 
Results: Figure 8 shows the increase in accuracy for an INN with increasingN by applying iterative fine-tuning. The small increment in accuracy at each step is due to proportionally smaller increment of N . N = 41 is roughly 20% of the categories of CUB-200. We expect the INN to match and even surpass with higher values of N . However, we did observe the large jump in training time due to lowering of b to accommodate for increasing N . The per epoch time increases from 32 seconds for INN(N = 9) to 300 seconds for INN(N = 41). D TRAINING DETAILS We firstly cover B-T, B-ML and INN training hyper-parameters. Then we move on to the SCL training hyper-parameters. Baselines(B-T, B-ML) are referred to as N=0 in this section. Deep learning framework used is Pytorch(Paszke et al., 2017) version 1.2. D.1 CIFAR-10 • Training pre-processing: Random(cropping(32×32, padding=4), rotation(±15), horizontal flipping), normalisation(train mean, std. dev). • Test pre-processing: Normalisation(train mean and std. dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 75, 150, 225, 275 • Batch sizes: (N=0, b=256), (N=1, b=128), (N={3, 7, 9}, b=64) D.2 STL-10 • Training pre-processing: Random(cropping(96×96, padding=4), rotation(±15), horizontal flipping), normalisation(train mean, std. dev). • Test pre-processing: Normalisation(train mean and std. dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 150, 200, 250, 300 • Batch sizes: (N=0, b=128), (N=1, b=128), (N=3, b=64), (N={7,9}, b=32) D.3 BMW-10 • Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std dev). • Test pre-processing: Center Cropping(cropping(224×224), Normalisation(train mean and std dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 150, 225, 300 • Batch sizes: (N={0, 1, 3, 7}, b=32), (N=9, b=16) D.4 CUB-20 • Categories: Black footed Albatross, Laysan Albatross, Sooty Albatross, Groove billed Ani, Crested Auklet, Least Auklet, Parakeet Auklet, Rhinoceros Auklet, Brewer Blackbird, Red winged Blackbird, Rusty Blackbird, Yellow headed Blackbird, Bobolink, Indigo Bunting, Lazuli Bunting, Painted Bunting, Cardinal, Spotted Catbird, Gray Catbird, Yellow breasted Chat These are the first 20 categories as they appeared in torchvision’s(Marcel & Rodriguez, 2010) implementation of CUB-200. • Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std dev). • Test pre-processing: Center Cropping(cropping(224×224), Normalisation(train mean and std dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 150, 250, 300 • Batch sizes: (N={0, 1, 3, 7, 9}, b=32) D.5 PETS • Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std dev). • Test pre-processing: Center Cropping(cropping(224×224), Normalisation(train mean and std dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 150, 225, 300 • Batch sizes: (N=0, b=128), (N=1, b=128), (N={3, 7}, b=64), (N=9, b=32) D.6 CUB-200 • Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std dev). 
• Test pre-processing: Center Cropping(cropping(224×224), Normalisation(train mean and std dev). • N=0, Epochs=350, Start learning rate = 0.1, Drop factor = 0.2, Drop epochs=[125, 200, 250, 300], batch size=128 • N=9, Epochs=500, Start learning rate = 0.1, Drop factor = 0.2, Drop epochs=[100, 200, 300, 400, 450], batch size=64 • N=[15, 24, 31], Epochs=300, Start learning rate = 0.005, Drop factor = 0.2, Drop epochs=[100, 200, 250], batch size=[32, 20, 16] • N=41, Epochs=300, Start learning rate = 0.0025, Drop factor = 0.2, Drop epochs=[100, 200, 250], batch size=12 • N=51, Epochs=300, Start learning rate = 0.001, Drop factor = 0.2, Drop epochs=[100, 200, 250], batch size=10 D.7 SCL TRAINING Image pre-processing steps are identical to those mentioned in the corresponding previous subsections. Common parameters: Temperature=0.1, decay(0.0001), cosine(True), and epochs=500. • CIFAR-10 – Learning rate: 0.05 – Batch size: 256 • STL-10 – Learning rate: 0.5 – Batch size: 256 • BMW-10 – Learning rate: 0.1 – Batch size: 128 • CUB-20 – Learning rate: 0.5 – Batch size: 128 • Pets – Learning rate: 0.1 – Batch size: 128 D.8 LINEAR CLASSIFICATION USING z We have used the SGDClassifier provided by sklearn(Pedregosa et al., 2011) library. Apart from the loss(loss=‘log’) and tol(tol=1e-5) we use the default values to train the model.
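A minimal sketch of this linear probe is shown below; z_train, y_train, z_test and y_test are assumed to be arrays gathered beforehand from the image encoder fθ1 and the dataset labels.

```python
from sklearn.linear_model import SGDClassifier

# Linear classification on top of frozen image encodings z (appendix D.8).
clf = SGDClassifier(loss="log", tol=1e-5)   # all other arguments left at their defaults
clf.fit(z_train, y_train)                   # z_train: (n, d) features, y_train: class labels
accuracy = clf.score(z_test, y_test)        # evaluate the probe on held-out encodings
```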
1. What is the main contribution of the paper regarding multiclass classification? 2. What are the strengths and weaknesses of the proposed approach compared to prior works such as [1]? 3. How does the reviewer assess the quality and clarity of the paper's content? 4. Do you have any concerns about the scalability of the method to larger datasets? 5. What are the potential improvements or modifications that could be made to the proposed approach to enhance its performance and novelty?
Review
Review Summary: The authors substitute the question 'Which one of a set of output categories does an image belong to?' with 'Does a particular image i belong to category c?', thereby formulating a multi-class classification problem as a set of binary classification tasks. They argue that in the standard multiclass classification setup the NN gets negative feedback only at the loss calculation step, i.e. when the cross entropy loss tries to suppress the logit scores of the incorrect classes in order to enhance the logits of the correct one, and that this is insufficient. Instead, they propose an approach wherein an image encoder network yields the image representation, a label encoder network yields the label representation, and the element-wise product of the two representation vectors serves as the input to a classifier (an mlp). The label representation can in effect be viewed as an attention mask, and the joint training of all the units in the pipeline, together with the explicit negative pairings (i and c, when i does not belong to label category c), could afford the learning of more meaningful representations. Remarks on quality: The language is clear, the text is well supported by visualisations, and the mathematical formulations are accessible, as are the implementation and experimental analysis. Pros: (a) The proposed formulation is straightforward and easy to implement. (b) It has been analysed from various different angles, on tasks of classification and recognition of out-of-distribution samples, and it also covers quality assessment of the learned image and label embeddings. Cons: (a) The motivation is a bit vague: why would the negative feedback during CE loss calculation not be sufficient for learning good visual representations? (b) Relevant references and comparisons are missing. For example, see [1], wherein the notion of using the label (inferred rather than supplied) is used to estimate an attention mask that is then applied on the image feature map to learn more discriminatory representations for classification. What are the likely differences, advantages and gains of the current approach w.r.t. [1]? (c) The proposed approach (INN) seems to translate to results that, though perhaps statistically significant, do not offer substantial and/or consistent improvements over existing methods: -- In Table 1, comparison with existing approaches (employing attention maps to boost performance) is missing. -- Table 4, which studies performance on the task of out-of-distribution detection, doesn't show a consistent gain from using INN. -- The learned similarity between labels in the label embedding space is sometimes non-intuitive (see Table 3, deer-car and frog-car are confused). (d) Scalability to larger datasets (with a bigger number of classes) is a crucial concern, as noted by the authors themselves in Sec. 10. In light of the above, I am not convinced of the novelty of the approach, both in terms of the formulation and the resulting performance, and hence vote to reject this submission in its current form. [1] Learn to pay attention, Jetley et.al. (ICLR 2018)
ICLR
Title Exploring Target Driven Image Classification Abstract For a given image, traditional supervised image classification using deep neural networks is akin to answering the question ‘What object category does this image belong to?’. The model takes in an image as input and produces the most likely label for it. However, there is an alternate approach to arrive at the final answer, which we investigate in this paper. We argue that, for any arbitrary category ỹ, the composed question ‘Is this image of an object category ỹ?’ serves as a viable approach for image classification via deep neural networks. The difference lies in the additional information supplied in the form of the target along with the image. Motivated by the curiosity to unravel the advantages and limitations of the addressed approach, we propose Indicator Neural Networks (INN). An INN utilizes a pair of image and label as input and produces an image-label compatibility response. INN consists of 2 encoding components, namely a label encoder and an image encoder, which learn latent representations for labels and images respectively. The predictor, the third component, combines the learnt individual label and image representations to make the final yes/no prediction. The network is trained end-to-end. We perform evaluations on image classification and fine-grained image classification datasets against strong baselines. We also investigate various components of INNs to understand their contribution to the final prediction of the model. Our probing of the modules reveals that, as opposed to its traditionally trained deep counterpart, INN attends to much larger regions of the input image for generating the image features. The generated image feature is further refined by the generated label encoding prior to the final prediction. 1 INTRODUCTION Deep neural networks achieve state-of-the-art results in supervised classification across different tasks (Rawat & Wang, 2017; Girdhar et al., 2017; Yang et al., 2016). Our work focuses on supervised image classification. Conventionally, while training, the network fθ is provided as input a set of training images X and corresponding labels Y. It learns by predicting the class labels Ŷ = fθ(X) and minimising a predefined loss function L(Ŷ, Y). During inference, the network predicts the most likely category for the input image. This approach is analogous to asking a person to name the object present in an image. An alternate approach is to present an image and a class category, say cat, and ask whether the image is of a cat. However, under this scheme one has to exhaustively query every known category to arrive at a final answer. Figure 1 illustrates these scenarios in a natural setting. Prior to the dominance of deep learning based approaches, many methods relied on one-vs-rest SVMs (Cortes & Vapnik, 1995) trained on handcrafted image features (Sánchez et al., 2013). The direction sought in this work overlaps substantially with the idea of one-vs-rest classification. As we will see in the subsequent sections, we intend to perform one-vs-rest classification with a single model. To the best of our knowledge, this alternate approach for supervised image classification has not yet been explored in the setting of deep neural networks. This paper is driven by the curiosity to understand the implications of adopting this plausible alternate strategy of framing the supervised classification task.
Our core contributions are as follows: • We explore an alternate strategy of performing supervised image classification using labels as additional cues for inference. To the best of our knowledge this the first work which provides a unique re-interpretation of the multi-class classification problem. • To model such a strategy with deep neural networks, we propose a novel architecture termed as Indicator Neural Network(INN). INN produces a binary response conditioned jointly on the input image and query label. It performs multiple one-vs-rest classifications to arrive at the final label assignment. • Our experiments show that the INNs outperform strong baselines on various image classification datasets. These baselines depict ‘traditional’ route of training an image classifier. • We qualitatively and quantitatively investigate the various components of INN and highlight the differences arising due to our pursued structure of the problem. We have structured the paper as follows: we dive deeper into the motivation behind proposing a new architecture for supervised image classification in section 2. In section 3, we describe the said architecture and it’s train and test time methodology. We visit related work w.r.t the proposed architecture in section 4. Section 5 briefly covers the implementation details of the proposed model, selected baselines and chosen datasets. Through sections 6 – 9 we perform various experiments to obtain insights into strengths and weaknesses of the proposed model. We conclude in section 10 by summarising our efforts and discussing the research directions emanating from our work. 2 MOTIVATION FOR A NOVEL ARCHITECTURE The literature for supervised image classification is vast, as a result, we restrict the discussion to deep learning approaches. The existing solutions for image classification ranging from AlexNet(Krizhevsky et al., 2012) to EfficientNets(Tan & Le, 2019) take the ‘traditional’ direction for image classification. The traditional direction is depicted in figure 1(left) as a person predicting the category solely based on the input image. These deep learning solutions generate a probability distribution over all known categories as a response and ultimately select the category corresponding to the highest response. The learning of such solutions is backed by categorical cross-entropy loss(Baum & Wilczek, 1988; Solla et al., 1988) which allows a well established framework for training and inference. Other than changing the base architecture, approaches have also been proposed which utilize target transformations(Szegedy et al., 2016; Jarrett & van der Schaar, 2020; Sun et al., 2017), data augmentations(Hongyi Zhang, 2018; Yun et al., 2019) to aid supervised classification. However, these approaches also do not modify the query-response structure of the classifier. Arguably, predictions of a k-way classification model can be interpreted as answering a multi-cue query. This can be achieved by focusing on a single output unit. However, we have to understand that this response is still conditioned only on the input image. Moreover, the learning process ignores the supplied target label. A recently proposed approach(Khosla et al., 2020) tries to diverge from the norm by utilizing contrastive estimation(Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013) to perform the task of supervised image classification. 
In a two-step process, it first computes an ideal embedding space using positive (images of the same category) and negative (images from other categories) samples. After learning the embedding function, it then trains a traditional classifier (based on cross-entropy loss) on the computed embeddings. The final response, however, is yet again an answer to the query ‘Which category does this image belong to?’ conditioned only on the input image. As we noted from the above discussion, the existing methods do not provide us with an appropriate way to model supervised predictions conditioned on images and labels; specifically, one allowing us to model the query ‘Is this image of a cat?’. As a result, we propose a novel architecture termed Indicator Neural Networks (INN), which we introduce in the subsequent section. 3 METHOD We consider a random image-label pair as (x, ỹ). We represent a deep neural network as fθ, with learnable parameters θ. Let ỹ represent a one-hot encoded vector of a randomly sampled category. To infer the ground-truth category for an input image, all pairings of the image and the class categories are required to be queried. The class label for which the largest response is recorded can then be assigned as the predicted category for the displayed image. Assuming there are Y ′ unique labels in the data, this implies Y ′ queries for obtaining the predicted category for one image. We model this approach using INNs, fθ(x, ỹ). The naming is motivated by indicator functions (1ỹ=y): for a single input of image and label, the aim of the model is to predict fθ(x, ỹ | y) = ŷ = 1 if ỹ = y, and 0 otherwise, (1) where y is the correct label corresponding to x. Realistically, an INN will output ŷ ∈ [0, 1]. 3.1 INN ARCHITECTURE We break down fθ into its components, which comprise an image encoder, a label encoder and a predictor, denoted respectively as: fθ1(x) = z ∈ Rd, fθ2(ỹ) = ψ ∈ Rd, fθ3(z, ψ) = ŷ ∈ [0, 1]. (2) Here, d represents the dimension of the embedded features. z and ψ are the image and label encodings respectively. Note that for generating z the input ỹ is irrelevant, and similarly, for ψ the input image does not matter. The predictor utilises z and ψ to generate the joint image-label representation h = z ◦ ψ ∈ Rd, where ◦ denotes element-wise multiplication. It then utilizes h to make the final linear classification decision. Figure 2 shows the pipeline as described above alongside the last layers of a traditionally trained model for a visual comparison. Hypothesis: To better understand what the model is doing under the hood, we can consider ψ to be comparable to a 1d attention map. As a result, ψ will magnify or diminish certain features in z to produce a refined h. We suspect that this reduces the burden on the image encoder to produce strongly category-discriminative features and allows the network to attend to larger regions of the input image. But what stops the image encoder from focusing on irrelevant regions of the input image? To answer this, we have to change the perspective with which we observe h. We can also view h as a non-uniformly scaled label embedding (ψ scaled by z). The predictor is essentially a linear classification head, and for it to function appropriately, the z extracted from different images of the same category should be similar, as this will allow the predictor to learn meaningful classification boundaries. As an example, the image encoder will seek common characteristics in all the images of the category dog.
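To make the data flow of eq. (2) concrete, a minimal PyTorch sketch of the forward pass is given below. The class and argument names (INN, image_encoder, d) are illustrative placeholders; the label encoder mirrors the 2-layer, activation-free MLP detailed in section 5, any backbone mapping an image to a d-dimensional vector can serve as the image encoder, and a single sigmoid output is used here as one way to realise ŷ ∈ [0, 1].

```python
import torch
import torch.nn as nn

class INN(nn.Module):
    """Sketch of eq. (2): image encoder f_theta1, label encoder f_theta2,
    and a predictor f_theta3 acting on h = z ∘ psi."""

    def __init__(self, image_encoder, num_classes, d=512):
        super().__init__()
        self.image_encoder = image_encoder          # f_theta1: x -> z in R^d
        self.label_encoder = nn.Sequential(         # f_theta2: one-hot label -> psi in R^d
            nn.Linear(num_classes, d // 2),
            nn.Linear(d // 2, d),                   # no activation, as in section 5
        )
        self.predictor = nn.Linear(d, 1)            # f_theta3: h -> yhat

    def forward(self, x, y_onehot):
        z = self.image_encoder(x)                   # image encoding z
        psi = self.label_encoder(y_onehot)          # label encoding psi
        h = z * psi                                 # joint representation h = z ∘ psi
        return torch.sigmoid(self.predictor(h)).squeeze(-1)   # yhat in [0, 1]
```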
3.2 INN TRAINING To train the INN, we utilise positive and negative pairings of images and labels. The target of the model is to predict no (0) for an incorrect pairing and yes (1) for a correct one. For a batch of correctly paired input data (of size b), we first extend the batch by concatenating randomly generated incorrect pairings to it. If N is the desired number of incorrect pairings per image per batch, the resulting size of the input batch after the concatenation operation will be (N + 1) × b. By applying the i.i.d. assumption for image-label pairs, we can write the empirical log-likelihood which the network aims to maximize as: log P(Ŷ | X, Ỹ; θ) = log Π_{i=0}^{b×(N+1)} P(ŷi | xi, ỹi; θ) = Σ_{i=0}^{b×(N+1)} log P(ŷi | xi, ỹi; θ). (3) Alternatively, in terms of loss, for a single image x, input query label ỹ and ground-truth class label y, the corresponding loss is denoted as L(fθ(x, ỹ), 1ỹ=y). We employ binary cross-entropy for the implementation of the loss. We extend the loss for a single image to the entire dataset as L(X, Y) = (1/|X|) Σ_{(x, y)∈(X, Y)} [ (1/K1) L(fθ(x, y), 1) + (1/K2) Σ_{ỹi∈Y ′−{y}, i<N} L(fθ(x, ỹi), 0) ]. (4) 3.2.1 COMPARISON TO TRADITIONAL TRAINING It is relevant to point out the differences between an INN and the traditional mode of training. 1. Traditionally, the networks designed for supervised classification maximise the likelihood P(Y | X; θ). In our case, the predictions are conditioned both on the input image and on the randomly supplied target. 2. Negative labels are involved indirectly in the loss computation (cross-entropy) due to the softmax operation (Goodfellow et al., 2016, Chapter 6.2.2.3). The supplied target corresponds to the correct label, and the resulting contribution to the loss is from the output unit corresponding to this target label. In our framework, the negative classes (stemming from incorrect pairings) are directly involved in the loss computation, as we explicitly provide a dedicated target for them. 3. The backpropagated gradient (∂L/∂h)(∂h/∂z) for the image encoder branch is scaled by ψ due to the nature of the bilinear operation. Similarly, for the label encoder, the gradients are scaled by z. This aspect allows the model to eventually learn compatible representations to make the final prediction. 3.3 INN INFERENCE For inferring the class label of an input image x, we select the input label which yields the largest response. Formally, ŷ = argmax_{ỹ ∈ Y ′} fθ(x, ỹ). (5) 4 RELATED WORK Two-stream models have been deployed successfully for the tasks of action recognition (Simonyan & Zisserman, 2014; Feichtenhofer et al., 2016), video classification (Wang et al., 2018), fine-grained image classification (Lin et al., 2015), multi-label image classification (Yu et al., 2019) and aerial scene classification (Yu & Liu, 2018), to name a few. Apart from the evident difference in the application of these models, the differences lie in the choice of inputs and the function for fusing the two stream outputs. Many approaches have been proposed which utilize labels as auxiliary inputs in image classification (Weston et al., 2010; Frome et al., 2013; Akata et al., 2016; Sun et al., 2017), text classification (Weinberger & Chapelle, 2009; Guoyin Wang, 2018; Dong et al., 2020), and text recognition (Rodriguez-Serrano et al., 2015). In computer vision, these approaches rely on a language model (Mikolov et al., 2013) trained on external data to obtain label embeddings.
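A minimal sketch of the batch construction of eq. (4) and the inference rule of eq. (5) from sections 3.2–3.3 above may also help. The helper names are illustrative, K1 = K2 = 1 is assumed (as used later in section 7), and device handling is omitted.

```python
import torch
import torch.nn.functional as F

def inn_training_loss(model, images, labels, num_classes, N):
    """Extend a batch of b correct pairs with N random incorrect labels per
    image (section 3.2) and apply BCE on the yes/no targets (eq. 4, K1 = K2 = 1)."""
    b = images.size(0)
    pos_y = F.one_hot(labels, num_classes).float()              # correct pairings, target 1
    neg_labels = torch.randint(0, num_classes, (N * b,))        # candidate incorrect labels
    clash = neg_labels == labels.repeat(N)                      # resample accidental matches
    while clash.any():
        neg_labels[clash] = torch.randint(0, num_classes, (int(clash.sum()),))
        clash = neg_labels == labels.repeat(N)
    neg_y = F.one_hot(neg_labels, num_classes).float()          # incorrect pairings, target 0

    all_x = torch.cat([images, images.repeat(N, 1, 1, 1)])      # batch of size (N + 1) * b
    all_y = torch.cat([pos_y, neg_y])
    targets = torch.cat([torch.ones(b), torch.zeros(N * b)])
    return F.binary_cross_entropy(model(all_x, all_y), targets)

@torch.no_grad()
def inn_predict(model, image, num_classes):
    """Inference as in eq. (5): query every label and take the argmax response.
    `image` is a single input of shape (1, C, H, W)."""
    queries = torch.eye(num_classes)                            # all one-hot labels
    scores = model(image.expand(num_classes, -1, -1, -1), queries)
    return scores.argmax().item()
```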
The main focus of these approaches (Frome et al., 2013; Gang Wang & Forsyth, 2009; Wang & Mori, 2010; Akata et al., 2016) is to use the pre-learnt embeddings to enforce high similarity between image representations of contextually similar categories. These methods are targeted towards zero-shot learning as they rely on enforced similarities to detect novel image categories. As opposed to the existing line of work, we use one-hot encodings as input to our classifier which removes the requirement to utilize any external data. Also, we work without explicitly enforcing similarity constraints on learnt embeddings. In our training we utilize negative pairing of images and labels. This idea is based on the principle of noise contrastive estimation(Gutmann & Hyvärinen, 2010). SCL(Khosla et al., 2020) also follows this direction to learn meaningful embeddings in their classification approach. Their positive and negative samples consists of images from same and different categories respectively. In contrast, we consider the correctly paired image-label combinations as positives and incorrectly paired image-labels as negatives. Also, ours is a single stage end-to-end differentiable training routine. In INNs, we can assign to label encodings the role of a 1d attention map(Xu et al., 2015). For image classification, the existing approaches based on attention(Wang et al., 2017; Woo et al., 2018; Hu et al., 2018; Bello et al., 2019; Jetley et al., 2018) introduce spatial or channel-wise attention at different depth of a traditional neural network. In contrast to our proposed model, this modification is made to the image encoder. We can easily replace INN’s image encoder with the one equipped with such an attention mechanism. This will incorporate a dual attention mechanism at the level of label fusion and image embedding. However, INN depict one of the simplest ways of modelling the pursued query structure and it is this formulation which gives rise to attention. Attention based approaches as mentioned above focus on answering the query ’What category does the image belong to?’. Moreover, we focus our work to compare different approaches for modelling the classification task rather than different mechanism of performing a traditional classification task. 5 IMPLEMENTATION DETAILS Datasets: Throughout our paper, we refer to a size of a dataset for the number of unique categories it contains. For small datasets we use CIFAR-10, STL-10, BMW-10(Ultra fine-grain cars dataset), CUB-20(formed using 20 categories of CUB-200-2011), and Oxford-IIIT Pets. Study involving larger dataset utilizes CUB-200. Table provided in appendix A.2 shows the common statistics of the utilized datasets. Architectures: Here we provide brief details of selected baselines and INN. All the models are trained from scratch to provide an even ground for comparison. Detailed hyper-parameters are provided in appendix D. • Baseline-Traditional(B-T): We’ve selected Resnet-18(He et al., 2015) trained with categorical cross-entropy loss as our traditional baseline. It is a widely popular architecture and portrays the standard manner of training an image classifier(Khosla et al., 2020; Tan & Le, 2019). Evaluation with a VGG-11(Simonyan & Zisserman, 2015) model is shared in appendix A.4. • Baseline-Multi-Label(B-ML): We train the Resnet-18 as a multi-label classifierNam et al. (2014). Each of the Y ′ output units is treated independently with its own binary crossentropy computation. 
This allows us to use Y ′ − 1 output units as negative targets in training. • Supervised Contrastive Learning (SCL) (Khosla et al., 2020): In a recently proposed approach, the authors make use of contrastive-loss-based supervised representation learning. As the second step, a linear classifier is trained on top of the learnt representations by employing the standard cross-entropy loss. We train Resnet-18 using the official code (https://github.com/HobbitLong/SupContrast). • INN: We describe the implementation details of the different components of an INN below. – Image Encoder: We use a Resnet-18 without the fully connected final layer. – Label Encoder: We use a 2-layered MLP with no activation (see appendix C.1 for an ablation with activations). The number of units per layer are d/2 and d (overall, INN introduces approximately d × d/2 additional parameters; for Resnet-18, d = 512). – Predictor: z and ψ are combined to form h using the element-wise product. h is then connected to the output units, which form the fully-connected final layer for prediction. 6 EXPERIMENT: WHAT DOES THE NETWORK SEE? Grad-CAM (Selvaraju et al., 2017) is an approach for interpreting the predictions of a network by qualitatively assessing the identified salient regions in the input image. It utilises the gradient of the classification output w.r.t. the feature map to generate coarse heatmaps, highlighting important spatial locations in the input image. Recently, Adebayo et al. (2018) assessed different approaches for interpreting a network’s prediction. As per their finding, Grad-CAM generates meaningful heat maps and passed their meticulously constructed sanity tests. Grad-CAM has been utilised by many approaches (Yun et al., 2019; Woo et al., 2018) to emphasize the regions attended to by the network. We use Grad-CAM for a similar purpose and perform a qualitative and quantitative comparison w.r.t. the baselines. Qualitative analysis: Figure 3 shows the heatmaps produced for sample input images for the baseline and INN models. We can notice the significant difference in the spatial spread of the salient regions. Comparing the baselines, we observe a larger spread of the heatmap for B-ML than for B-T and SCL. The heatmaps generated for SCL and B-T appear to be localized to highly distinguishable regions. On the other hand, the visuals indicate INN to be looking at a wider region for making a label-specific prediction. Quantitative analysis: To quantify the salient regions we scale the heatmaps between 0 and 1. We consider pixels with values greater than t = 0.5 as salient. We use the training set for this comparison. Since we are focused on assessing how the attended regions vary across methods, the utilization of training data does not restrict us from this goal and, moreover, provides us with a larger overlap of accurately predicted samples for computing the salient regions. Table 1 contains the proportion of an image, on average, considered salient as per Grad-CAM. The results are in line with the qualitative assessments we made. For the majority of the datasets, B-ML and INN produce larger salient regions of the input image. We do not state that focusing on larger regions is beneficial as compared to more focused, distinguishable features. We only aim to support our hypothesis behind the working of an INN. As per our assumption, we hypothesized that the production of disjoint representations z and ψ allows for less discriminative features z.
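As a rough sketch of the salient-area measure behind table 1; the function name and the assumption that the heatmap arrives as a 2-D numpy array are ours.

```python
import numpy as np

def salient_fraction(heatmap, t=0.5):
    """Proportion of an image considered salient (section 6): rescale a
    Grad-CAM heatmap to [0, 1] and count the pixels above the threshold t."""
    h = heatmap.astype(np.float64)
    h = (h - h.min()) / (h.max() - h.min() + 1e-12)   # scale between 0 and 1
    return float((h > t).mean())
```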
Here we interpreted increase in spatial spread of saliency as producing less discriminative features thereby supporting our hypothesis. 7 EXPERIMENT: IMAGE CLASSIFICATION We evaluate the performance of INNs against small datasets(Y ′ < 50). To train INNs, we use K1 = K2 = 1 as the value of scaling constants in equation 4. Results: The corresponding results reported in table 2 highlight the effectiveness of INNs. There are four key observations to be made. Firstly, B-T and B-ML show peculiar trend across datasets. In STL-10, B-ML outperforms B-T, we hypothesize that as the predictions are based on a larger input image region which proves beneficial where categories are visually dissimilar. Consequently, for fine-grained visual classification datasets, where the categories are highly similar, B-T performs better. Secondly, there is a significant difference in performance of the baselines and INN(N=9) for majority of the datasets. For CIFAR-10, the results are comparable. We believe that the small size of the input image does not provide much room for improvement. To verify this, we conduct an experiment in appendix A.5 with images of STL-10 resized to 32× 32. We observe a trend of limited improvement for resized STL-10 as we did for CIFAR-10, which supports our theory. Thirdly, as the value of N increases the performance of INN increases. We believe this is a direct consequence of providing more negative label examples for a given input image during training. By providing many more samples, the network can learn better(more compatible) representations. Lastly, INN out performs contrastive learning based approach, SCL. For CUB-20 and Pets, we expect further improvement in the performance of INN as the value forN is smaller than the maximum allowed for these datasets. 8 EXPERIMENT: IMPORTANCE OF z AND ψ To understand the relevance of z and ψ, we train a linear classifier on top of z in the traditional manner using multi-class cross entropy loss. We compare the accuracy of the model obtained with that of INN. This will help us understand the nature of z as well as improvements made by ψ. Implementation details: Using the train split of the data we gather ztrain from fθ1. Note, that the input ỹ chosen is irrelevant for producing z. Next, we train a multi-class logistic regression classifier using stochastic gradient descent on ztrain, ytrain. Additional training details are shared in appendix D.8. For inference, we pass the ztest to the learnt classifier and record the predicted class. The INN models selected for extracting z corresponds to INN(N = 9) in table 2. Results: Table 3 shows the performance of a classifier trained on top of z in comparison to INN(N = 9). We observe that for image classification datasets of CIFAR-10 and STL-10, the classification performance of the two approaches is highly comparable. However, we observe significant differences for the fine-grained visual classification datasets. We believe that due to high visual dissimilarity between categories in CIFAR-10 and STL-10, obtained z is sufficient to perform the task of classification. However, in fine-grained datasets since the categories are quite visually similar, ψ plays an important role in further refining the representations. These observations are inline to our hypothesis behind the working of the model. To further highlight the nature of z and ψ we perform additional experiments in appendix C.2. 
9 EXTENSION TO LARGER DATASETS So far, we have observed that the approach of utilizing labels as an additional cue allows us to perform the task of multi-class classification. However, the datasets considered only included a few unique categories. In this section, we reflect upon the shortcomings of adopting our pursued approach and, subsequently, the failures of INNs. • For smaller datasets, the larger the value of N, the higher the classification accuracy. If we extend this logic to larger datasets such as ImageNet (Deng et al., 2009), the best value of N will be close to 1000. Using a traditional batch size (b) of 128 will push the effective batch size to 128,000, larger than the largest considered by large mini-batch training methods (Goyal et al., 2017). To counter such large values of N, one can significantly reduce b, which in turn will extend the training time from days to months. In order to draw relevant conclusions in a reasonable time frame, we limit the discussions in this section to CUB-200, which contains 200 unique categories. • The latent dimension plays an important role in the predictive performance. We conducted experiments on CUB-200 and CUB-20 by varying the latent dimension of the model among {64, 128, 512, 1024} and observe that the impact is greater for CUB-200 than for CUB-20. The details of the corresponding experiment are described in appendix C.3. • The large imbalance of positive and negative samples arising as a result of increasing N can destabilize INN training. For similar reasons, we observe B-ML training to collapse as well. We can balance the weights for positive and negative targets by adjusting their contribution to the loss; however, we find that this approach impedes INN performance. As an alternative, instead of training an INN from scratch on larger values of N, one can initialize the weights from an INN trained on a smaller value N ′, where N ′ < N. By doing this, we find that not only does INN(N) surpass the accuracy of INN(N ′), but it also performs comparably to the baseline. The corresponding experimentation details and results are provided in appendix C.4. 10 DISCUSSION & CONCLUSION As opposed to the traditional approach, we explored the applicability of a target-driven method. Specifically, we modelled the question ‘Does the given image belong to category ỹ?’. We showed that it is possible to tackle the multi-class classification problem from a non-traditional perspective. Our aim was not to show that the pursued approach is better; rather, we aimed to explore and highlight the pros and cons of this unexplored paradigm. Our approach adapts the classical one-vs-rest approach in a modern deep learning setting. To achieve this goal, we introduced INNs, which rely on a pair of input image and target label to produce a response. By inferring exhaustively with all the target categories we arrive at the final decision. Our study involving class activation maps revealed that INNs utilize much larger regions of the input image to generate features. We hypothesize that the imposed independence of the image and label embeddings allows the image encoder to attend to larger regions rather than to the highly discriminative features of traditional approaches. We also explored the scenarios where the learned image features are adequate for learning a traditional classifier on top. This observation was made for cases where the categories are visually dissimilar. Label embeddings refine the coarse image representations immensely for fine-grained tasks.
By pitting INNs against strong baselines we were able to highlight the strengths of our adopted approach in comparison. The INNs outperformed the baselines on all the datasets (Y ′ < 50) considered for image classification and fine-grained image classification. Additional experiments on out-of-distribution detection (OOD, appendix C) and label embedding analysis (appendix B) help to broaden our understanding of the one-vs-rest setting. The OOD analysis shows that INN performs comparably to the contrastive-learning-based SCL. An indicative qualitative result on the learnt label embeddings shows that similar categories often have nearby label embeddings. On the downside, we witnessed the difficulties of extending the method to larger datasets. We consider the dependency on the latent dimension and on N to be the main reasons for this limitation. To make the approach scalable, we believe constructing a smarter negative sampling approach will be the direction moving forward. We see numerous avenues for future research. Our proposed direction of training a neural network is comparable to classical one-vs-rest approaches (Sánchez et al., 2013). Due to the rapid rise and adoption of deep learning approaches, the classical one-vs-rest direction has been phased out. Covering and comparing, in a single work, all the aspects of a traditionally trained neural network that have evolved over the past years is not feasible. As a result, there is a multitude of directions for adopting a one-vs-rest approach as devised in this work. Some directions include, but are not limited to, object detection (Ren et al., 2015), image segmentation (Chen et al., 2018), and anomaly detection (Chandola et al., 2009). Our main focus will be to extend our experimentation theme (and not just the INN) to these problems and analyse its subsequent impact. We will publicly share the source code supplied in the supplementary material to facilitate brisk research. A APPENDIX A.1 NOTATIONS A.2 DATASET STATISTICS (CUB-20 is created using 20 categories of CUB-200.) A.3 GRAD-CAM VISUALIZATIONS We provide more visualisations to compare the recognised salient regions across baselines in figure 4. A.4 EXPERIMENT: VGG IMAGE ENCODER In this section we replace the image encoder of the INN with a VGG-11 (with batch normalisation) model. For an INN, we use the features from the last convolutional block after adaptive average pooling. Results: Table 6 shows that the VGG-based INN outperforms the baselines by a large margin. For CIFAR-10, we suspect that, similar to the Resnet-based INN, the small size of the input image restricts the added advantage of using the target-driven approach. A.5 EXPERIMENT: RESCALED STL-10 For this experiment, we downscale the STL-10 images to 32×32 to bring them down to the same size as CIFAR-10. For training, we use hyper-parameters identical to those used for training the model on the unaltered STL-10 dataset. Results: We notice in table 7 that the INN performance is quite similar to that of the baseline when the image size is small. A similar trend was observed in the case of CIFAR-10 as well. We believe that INNs and the baseline both utilize an equal portion of the input image to generate representations, which leads to similar performance in accuracy. B EXPERIMENT: LABEL EMBEDDINGS, ψ We have witnessed that INNs rely on ψ and z to make a correct prediction. Also, depending on the content of the dataset, ψ can play a vital role in further improving the performance. In this experimental setup, we aim to explore ψ further; specifically, how different encoded labels relate to each other.
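A brief sketch of the nearest-label analysis used for this appendix (table 8) is given below, under the assumption that the label encoder fθ2 is available as a callable; the helper name is ours.

```python
import torch

@torch.no_grad()
def nearest_labels(label_encoder, class_names):
    """For each category, find the closest other category in label-embedding
    space by L2 distance (the comparison reported in table 8)."""
    Y = len(class_names)
    psi = label_encoder(torch.eye(Y))           # psi_i = f_theta2(one-hot label i)
    dist = torch.cdist(psi, psi, p=2)           # pairwise L2 distances
    dist.fill_diagonal_(float("inf"))           # ignore each label's distance to itself
    nearest = dist.argmin(dim=1)
    return {class_names[i]: class_names[int(nearest[i])] for i in range(Y)}
```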
1. What is the focus of the paper regarding image classification? 2. What are the concerns regarding the proposed approach in comparison to previous works? 3. How does the reviewer assess the novelty and significance of the paper's content?
Review
Review The paper proposes indicator neural networks, a model that classifies images using label embeddings. Experiments are performed on various datasets such as CIFAR-10, STL-10, Pets, and others. The main concern with the paper is that label embedding is something people have attempted before, with better or stronger results on larger datasets such as ImageNet. To the credit of the authors, section 3 cited many of these prior papers. I did not identify a significant difference that warrants reasonable novelty for ICLR 2021.
ICLR
Title Exploring Target Driven Image Classification Abstract For a given image, traditional supervised image classification using deep neural networks is akin to answering the question ‘what object category does this image belong to?’. The model takes in an image as input and produces the most likely label for it. However, there is an alternate approach to arrive at the final answer which we investigate in this paper. We argue that, for any arbitrary category ỹ , the composed question ‘Is this image of an object category ỹ’ serves as a viable approach for image classification via. deep neural networks. The difference lies in the supplied additional information in form of the target along with the image. Motivated by the curiosity to unravel the advantages and limitations of the addressed approach, we propose Indicator Neural Networks(INN). It utilizes a pair of image and label as input and produces a image-label compatibility response. INN consists of 2 encoding components namely: label encoder and image encoder which learns latent representations for labels and images respectively. Predictor, the third component, combines the learnt individual label and image representations to make the final yes/no prediction. The network is trained end-to-end. We perform evaluations on image classification and fine-grained image classification datasets against strong baselines. We also investigate various components of INNs to understand their contribution in the final prediction of the model. Our probing of the modules reveals that, as opposed to traditionally trained deep counterpart, INN tends to much larger regions of the input image for generating the image features. The generated image feature is further refined by the generated label encoding prior to the final prediction. 1 INTRODUCTION Deep neural networks achieve state of the art in supervised classification across different tasks (Rawat & Wang, 2017; Girdhar et al., 2017; Yang et al., 2016). Our work focuses on supervised image classification. Conventionally, while training, the network fθ is provided as input a set of training images X and corresponding labels Y . It learns by predicting the class labels Ŷ = fθ(X) and minimising a predefined loss function L(Ŷ , Y ). During inference, the network predicts the most likely category for the input image. This approach is analogous to asking a person to name the object present in an image. An alternate approach is to present an image and a class category say cat and ask if the image is of a cat. However, under this scheme one has to exhaustively query every known category to arrive at a final answer. Figure 1 illustrates these scenarios in a natural setting. Prior to the dominance of deep learning based approaches, many methods relied on one-vs-rest SVM(Cortes & Vapnik, 1995) trained on handcrafted image features(Sánchez et al., 2013). The direction saught in this work has a big overlap with the idea of one-vs-rest classification. As we will see in the subsequent sections, we intend to perform a one-vs-rest classification with a single model. To the best of our knowledge, this alternate approach for supervised image classification has not yet been explored in the setting of deep neural networks. This paper is driven by the curiosity to understand the implications of adopting the plausible alternate strategy of framing the supervised classification task. 
Our core contributions are as follows: • We explore an alternate strategy of performing supervised image classification using labels as additional cues for inference. To the best of our knowledge this the first work which provides a unique re-interpretation of the multi-class classification problem. • To model such a strategy with deep neural networks, we propose a novel architecture termed as Indicator Neural Network(INN). INN produces a binary response conditioned jointly on the input image and query label. It performs multiple one-vs-rest classifications to arrive at the final label assignment. • Our experiments show that the INNs outperform strong baselines on various image classification datasets. These baselines depict ‘traditional’ route of training an image classifier. • We qualitatively and quantitatively investigate the various components of INN and highlight the differences arising due to our pursued structure of the problem. We have structured the paper as follows: we dive deeper into the motivation behind proposing a new architecture for supervised image classification in section 2. In section 3, we describe the said architecture and it’s train and test time methodology. We visit related work w.r.t the proposed architecture in section 4. Section 5 briefly covers the implementation details of the proposed model, selected baselines and chosen datasets. Through sections 6 – 9 we perform various experiments to obtain insights into strengths and weaknesses of the proposed model. We conclude in section 10 by summarising our efforts and discussing the research directions emanating from our work. 2 MOTIVATION FOR A NOVEL ARCHITECTURE The literature for supervised image classification is vast, as a result, we restrict the discussion to deep learning approaches. The existing solutions for image classification ranging from AlexNet(Krizhevsky et al., 2012) to EfficientNets(Tan & Le, 2019) take the ‘traditional’ direction for image classification. The traditional direction is depicted in figure 1(left) as a person predicting the category solely based on the input image. These deep learning solutions generate a probability distribution over all known categories as a response and ultimately select the category corresponding to the highest response. The learning of such solutions is backed by categorical cross-entropy loss(Baum & Wilczek, 1988; Solla et al., 1988) which allows a well established framework for training and inference. Other than changing the base architecture, approaches have also been proposed which utilize target transformations(Szegedy et al., 2016; Jarrett & van der Schaar, 2020; Sun et al., 2017), data augmentations(Hongyi Zhang, 2018; Yun et al., 2019) to aid supervised classification. However, these approaches also do not modify the query-response structure of the classifier. Arguably, predictions of a k-way classification model can be interpreted as answering a multi-cue query. This can be achieved by focusing on a single output unit. However, we have to understand that this response is still conditioned only on the input image. Moreover, the learning process ignores the supplied target label. A recently proposed approach(Khosla et al., 2020) tries to diverge from the norm by utilizing contrastive estimation(Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013) to perform the task of supervised image classification. 
In a two-step process, it first computes an ideal embedding space using positive (images of the same category) and negative (images from other categories) samples. After learning the embedding function, it then trains a traditional classifier (based on the cross-entropy loss) on the computed embeddings. The final response, however, is yet again an answer to the query ‘Which category does this image belong to?’ conditioned only on the input image. As we noted from the above discussion, the existing methods do not provide us with an appropriate way to model supervised predictions conditioned on both images and labels; specifically, none allow us to model the query ‘Is this image of a cat?’. As a result, we propose a novel architecture termed the Indicator Neural Network (INN), which we introduce in the subsequent section.
3 METHOD
We denote a random image-label pair by (x, ỹ) and a deep neural network with learnable parameters θ by fθ. Let ỹ represent a one-hot encoded vector of a randomly sampled category. To infer the ground-truth category for an input image, all pairings of the image and the class categories are queried; the class label for which the largest response is recorded is assigned as the predicted category for the image. Assuming there are Y′ unique labels in the data, this implies Y′ queries for obtaining the predicted category for one image. We model this approach using INNs, fθ(x, ỹ). The naming is motivated by indicator functions (1ỹ=y), as for a single input of image and label, the aim of the model is to predict
fθ(x, ỹ | y) = ŷ = 1 if ỹ = y, and 0 otherwise, (1)
where y is the correct label corresponding to x. Realistically, an INN will output ŷ ∈ [0, 1].
3.1 INN ARCHITECTURE
We break down fθ into its components, which comprise an image encoder, a label encoder and a predictor, denoted respectively as
fθ1(x) = z ∈ R^d, fθ2(ỹ) = ψ ∈ R^d, fθ3(z, ψ) = ŷ ∈ [0, 1]. (2)
Here, d represents the dimension of the embedded features, and z and ψ are the image and label encodings respectively. Note that z does not depend on the input ỹ, and similarly ψ does not depend on the input image. The predictor utilises z and ψ to generate the joint image-label representation h = z ◦ ψ ∈ R^d, where ◦ is element-wise multiplication. It then utilizes h to make the final linear classification decision. Figure 2 shows the pipeline as described above alongside the last layers of a traditionally trained model for a visual comparison.
Hypothesis: To better understand what the model does under the hood, we can consider ψ comparable to a 1d attention map. As a result, ψ will magnify or diminish certain features in z to produce a refined h. We suspect that this reduces the burden on the image encoder to produce strongly category-discriminative features and allows the network to attend to larger regions of the input image. But what stops the image encoder from focusing on irrelevant regions in the input image? To answer this, we have to change the perspective with which we observe h. We can also view h as a non-uniformly scaled label embedding (ψ scaled by z). The predictor is essentially a linear classification head, and for it to function appropriately, the z extracted from different images of the same category should be similar, as this allows the predictor to learn meaningful classification boundaries. As an example, the image encoder will seek common characteristics in all the images of the category dog.
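To make the pipeline concrete, the following is a minimal PyTorch sketch of an INN. It is not the authors' released code: the class and helper names (INN, inn_loss, predict_class) are illustrative, the image encoder stands in for a ResNet-18 trunk with its final fully connected layer removed (so d = 512, see Section 5), the training objective and inference rule follow Sections 3.2 and 3.3 below, and the scaling constants K1 and K2 of equation 4 are taken to be 1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class INN(nn.Module):
    """Sketch of an Indicator Neural Network: f(x, y_tilde) -> compatibility score in [0, 1]."""
    def __init__(self, num_classes: int, d: int = 512):
        super().__init__()
        trunk = torchvision.models.resnet18()
        trunk.fc = nn.Identity()                      # image encoder trunk; outputs z in R^512, so d must be 512 here
        self.image_encoder = trunk
        self.label_encoder = nn.Sequential(           # 2-layer MLP with no activation (Section 5)
            nn.Linear(num_classes, d // 2),
            nn.Linear(d // 2, d),
        )
        self.predictor = nn.Linear(d, 1)              # linear head on the joint representation h

    def forward(self, x: torch.Tensor, y_onehot: torch.Tensor) -> torch.Tensor:
        z = self.image_encoder(x)                     # (B, d) image embedding
        psi = self.label_encoder(y_onehot)            # (B, d) label embedding
        h = z * psi                                   # element-wise product h = z o psi
        return torch.sigmoid(self.predictor(h)).squeeze(-1)

def inn_loss(model: INN, images, labels, num_classes: int, N: int = 9):
    """Training objective sketch (Section 3.2): BCE over b positive pairs and N random negatives per image."""
    b = images.size(0)
    pos = F.one_hot(labels, num_classes).float()
    negs = []
    for _ in range(N):
        wrong = (labels + torch.randint(1, num_classes, (b,))) % num_classes   # guaranteed != labels
        negs.append(F.one_hot(wrong, num_classes).float())
    all_onehot = torch.cat([pos] + negs, dim=0)       # ((N+1)*b, Y')
    all_images = images.repeat(N + 1, 1, 1, 1)        # each negative block pairs image i with a wrong label
    targets = torch.cat([torch.ones(b), torch.zeros(N * b)])
    return F.binary_cross_entropy(model(all_images, all_onehot), targets)

@torch.no_grad()
def predict_class(model: INN, x: torch.Tensor, num_classes: int) -> int:
    """Inference sketch (Section 3.3): query every label for one image (C, H, W) and take the argmax."""
    onehots = torch.eye(num_classes)                  # all Y' candidate one-hot labels
    scores = model(x.unsqueeze(0).repeat(num_classes, 1, 1, 1), onehots)
    return scores.argmax().item()
```

The sketch makes the Y′-query structure explicit: a single classification decision requires one forward pass per candidate label, which is the cost discussed again in Section 9.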
3.2 INN TRAINING
To train the INN, we utilise positive and negative pairings of images and labels. The target of the model is to predict no (0) for an incorrect pairing and yes (1) for a correct one. For a batch of correctly paired input data (of size b), we first extend the batch by concatenating randomly generated incorrect pairings to it. If N is the desired number of incorrect pairings per image per batch, the resulting size of the input batch after the concatenation operation is (N + 1) × b. By applying the i.i.d. assumption for image-label pairs, we can write the empirical log-likelihood which the network aims to maximize as
log P(Ŷ | X, Ỹ; θ) = log Π_i P(ŷ_i | x_i, ỹ_i; θ) = Σ_i log P(ŷ_i | x_i, ỹ_i; θ), (3)
where i runs over the b × (N + 1) image-label pairs. Alternatively, in terms of loss, for a single image (x), input query label (ỹ) and ground-truth class label (y), the corresponding loss is denoted as L(fθ(x, ỹ), 1ỹ=y). We employ the binary cross-entropy as the implementation of this loss. We extend the loss for a single image to the entire dataset as
L(X, Y) = (1/|X|) Σ_{(x, y) ∈ (X, Y)} [ (1/K1) L(fθ(x, y), 1) + (1/K2) Σ_{ỹ_i ∈ Y′ − {y}, i < N} L(fθ(x, ỹ_i), 0) ]. (4)
3.2.1 COMPARISON TO TRADITIONAL TRAINING
It is relevant to point out the differences between INN training and the traditional mode of training.
1. Traditionally, the networks designed for supervised classification maximise the likelihood P(Y | X; θ). In our case, the predictions are conditioned both on the input image and the randomly supplied target.
2. Negative labels are involved only indirectly in the cross-entropy loss computation due to the softmax operation (Goodfellow et al., 2016, Chapter 6.2.2.3): the supplied target corresponds to the correct label, and the resulting contribution to the loss comes from the output unit corresponding to this target label. In our framework, the negative classes (stemming from incorrect pairings) are directly involved in the loss computation, as we explicitly provide a dedicated target for them.
3. The backpropagated gradient (∂L/∂h)(∂h/∂z) for the image encoder branch is scaled by ψ due to the nature of the bilinear operation. Similarly, for the label encoder, the gradients are scaled by z. This aspect allows the model to eventually learn compatible representations to make the final prediction.
3.3 INN INFERENCE
To infer the class label of an input image x, we select the input label which yields the largest response. Formally,
ŷ = argmax_{ỹ ∈ Y′} fθ(x, ỹ). (5)
4 RELATED WORK
Two-stream models have been deployed successfully for the tasks of action recognition (Simonyan & Zisserman, 2014; Feichtenhofer et al., 2016), video classification (Wang et al., 2018), fine-grained image classification (Lin et al., 2015), multi-label image classification (Yu et al., 2019) and aerial scene classification (Yu & Liu, 2018), to name a few. Apart from the evident difference in the application of these models, the differences lie in the choice of inputs and the function for fusing the two stream outputs. Many approaches have been proposed which utilize labels as auxiliary inputs in image classification (Weston et al., 2010; Frome et al., 2013; Akata et al., 2016; Sun et al., 2017), text classification (Weinberger & Chapelle, 2009; Guoyin Wang, 2018; Dong et al., 2020), and text recognition (Rodriguez-Serrano et al., 2015). In computer vision, these approaches rely on a language model (Mikolov et al., 2013) trained on external data to obtain label embeddings.
The main focus of these approaches (Frome et al., 2013; Gang Wang & Forsyth, 2009; Wang & Mori, 2010; Akata et al., 2016) is to use the pre-learnt embeddings to enforce high similarity between image representations of contextually similar categories. These methods are targeted towards zero-shot learning, as they rely on the enforced similarities to detect novel image categories. As opposed to this existing line of work, we use one-hot encodings as input to our classifier, which removes the requirement to utilize any external data. Also, we work without explicitly enforcing similarity constraints on the learnt embeddings. In our training we utilize negative pairings of images and labels. This idea is based on the principle of noise contrastive estimation (Gutmann & Hyvärinen, 2010). SCL (Khosla et al., 2020) also follows this direction to learn meaningful embeddings in their classification approach. Their positive and negative samples consist of images from the same and different categories respectively. In contrast, we consider the correctly paired image-label combinations as positives and incorrectly paired image-labels as negatives. Also, ours is a single-stage, end-to-end differentiable training routine. In INNs, we can assign to the label encodings the role of a 1d attention map (Xu et al., 2015). For image classification, the existing approaches based on attention (Wang et al., 2017; Woo et al., 2018; Hu et al., 2018; Bello et al., 2019; Jetley et al., 2018) introduce spatial or channel-wise attention at different depths of a traditional neural network. In contrast to our proposed model, this modification is made to the image encoder. We can easily replace INN’s image encoder with one equipped with such an attention mechanism; this would incorporate a dual attention mechanism at the level of label fusion and image embedding. However, INNs depict one of the simplest ways of modelling the pursued query structure, and it is this formulation which gives rise to the attention. The attention based approaches mentioned above focus on answering the query ‘What category does the image belong to?’. Moreover, we focus our work on comparing different approaches for modelling the classification task rather than different mechanisms of performing a traditional classification task.
5 IMPLEMENTATION DETAILS
Datasets: Throughout the paper, we refer to the size of a dataset as the number of unique categories it contains. For small datasets we use CIFAR-10, STL-10, BMW-10 (ultra fine-grain cars dataset), CUB-20 (formed using 20 categories of CUB-200-2011), and Oxford-IIIT Pets. The study involving a larger dataset utilizes CUB-200. The table provided in appendix A.2 shows the common statistics of the utilized datasets.
Architectures: Here we provide brief details of the selected baselines and INN. All the models are trained from scratch to provide an even ground for comparison. Detailed hyper-parameters are provided in appendix D.
• Baseline-Traditional (B-T): We select Resnet-18 (He et al., 2015) trained with the categorical cross-entropy loss as our traditional baseline. It is a widely popular architecture and portrays the standard manner of training an image classifier (Khosla et al., 2020; Tan & Le, 2019). An evaluation with a VGG-11 (Simonyan & Zisserman, 2015) model is shared in appendix A.4.
• Baseline-Multi-Label (B-ML): We train the Resnet-18 as a multi-label classifier (Nam et al., 2014). Each of the Y′ output units is treated independently with its own binary cross-entropy computation.
This allows us to use Y′ − 1 output units as negative targets in training.
• Supervised Contrastive Learning (SCL) (Khosla et al., 2020): In a recently proposed approach, the authors make use of contrastive-loss based supervised representation learning. As the second step, a linear classifier is trained on top of the learnt representations by employing the standard cross-entropy loss. We train Resnet-18 using the official code (https://github.com/HobbitLong/SupContrast).
• INN: We describe the implementation details of the different components of an INN below.
– Image Encoder: We use a Resnet-18 without the fully connected final layer.
– Label Encoder: We use a 2-layered MLP with no activation (see appendix C.1 for an ablation with activations). The number of units per layer are d/2 and d. Overall, INN introduces approximately d × d/2 additional parameters; for Resnet-18, d = 512.
– Predictor: z and ψ are combined to form h using the element-wise product. h is then connected to the output units, which form the fully-connected final layer for prediction.
6 EXPERIMENT: WHAT DOES THE NETWORK SEE?
Grad-CAM (Selvaraju et al., 2017) is an approach for interpreting the predictions of a network by qualitatively assessing the identified salient regions in the input image. It utilises the gradient of the classification output w.r.t. the feature map to generate coarse heatmaps highlighting important spatial locations in the input image. Recently, Adebayo et al. (2018) assessed different approaches for interpreting a network’s prediction; as per their findings, Grad-CAMs generate meaningful heat maps and passed their meticulously constructed sanity tests. Grad-CAM has been utilised by many approaches (Yun et al., 2019; Woo et al., 2018) to emphasize the regions attended to by the network. We use Grad-CAM for a similar purpose and perform a qualitative and quantitative comparison w.r.t. the baselines.
Qualitative analysis: Figure 3 shows the heatmaps produced for sample input images for the baseline and INN models. We can notice the significant difference in the spatial spread of salient regions. Comparing the baselines, we observe a larger spread of the heatmap for B-ML than for B-T and SCL. The heatmaps generated for SCL and B-T appear to be localized to highly distinguishable regions. On the other hand, the visuals indicate that INN looks at a wider region for making a label-specific prediction.
Quantitative analysis: To quantify the salient regions we scale the heatmaps between 0 and 1 and consider pixels with values greater than t = 0.5 as salient. We use the training set for this comparison. Since we are focused on assessing how the attended regions vary across methods, the utilization of training data does not restrict us from this goal and, moreover, provides us with a larger overlap of accurately predicted samples for computing the salient regions. Table 1 contains the proportion of an image, on average, considered salient as per Grad-CAM. The results are in line with the qualitative assessments we made: for the majority of the datasets, B-ML and INN produce larger salient regions of the input image. We do not claim that focusing on larger regions is beneficial as compared to more focused, distinguishable features; we only aim to support our hypothesis behind the working of an INN, namely that the production of disjoint representations z and ψ allows for less discriminative image features z.
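A minimal sketch of this quantitative measurement, assuming the Grad-CAM heatmaps for the correctly predicted training images have already been computed (with any Grad-CAM implementation); the threshold t = 0.5 follows the text, and the function name is illustrative.

```python
import torch

def salient_fraction(heatmaps: torch.Tensor, t: float = 0.5) -> float:
    """Average proportion of an image considered salient by Grad-CAM (as reported in Table 1).

    heatmaps: (num_images, H, W) raw Grad-CAM maps.
    """
    flat = heatmaps.flatten(start_dim=1)
    lo = flat.min(dim=1, keepdim=True).values
    hi = flat.max(dim=1, keepdim=True).values
    scaled = (flat - lo) / (hi - lo + 1e-8)      # scale each map to [0, 1]
    salient = (scaled > t).float().mean(dim=1)   # per-image salient pixel proportion
    return salient.mean().item()                 # dataset-level average
```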
Here, we interpret the increase in the spatial spread of saliency as the production of less discriminative features, thereby supporting our hypothesis.
7 EXPERIMENT: IMAGE CLASSIFICATION
We evaluate the performance of INNs on small datasets (Y′ < 50). To train the INNs, we use K1 = K2 = 1 as the value of the scaling constants in equation 4.
Results: The results reported in table 2 highlight the effectiveness of INNs. There are four key observations to be made. Firstly, B-T and B-ML show a peculiar trend across datasets. On STL-10, B-ML outperforms B-T; we hypothesize that this is because its predictions are based on a larger input image region, which proves beneficial when categories are visually dissimilar. Consequently, for the fine-grained visual classification datasets, where the categories are highly similar, B-T performs better. Secondly, there is a significant difference in the performance of the baselines and INN (N=9) for the majority of the datasets. For CIFAR-10, the results are comparable; we believe that the small size of the input image does not provide much room for improvement. To verify this, we conduct an experiment in appendix A.5 with the images of STL-10 resized to 32 × 32 and observe the same trend of limited improvement for resized STL-10 as we did for CIFAR-10, which supports our hypothesis. Thirdly, as the value of N increases, the performance of INN increases. We believe this is a direct consequence of providing more negative label examples for a given input image during training: with many more samples, the network can learn better (more compatible) representations. Lastly, INN outperforms the contrastive learning based approach, SCL. For CUB-20 and Pets, we expect further improvement in the performance of INN, as the value of N used is smaller than the maximum allowed for these datasets.
8 EXPERIMENT: IMPORTANCE OF z AND ψ
To understand the relevance of z and ψ, we train a linear classifier on top of z in the traditional manner using the multi-class cross-entropy loss and compare the accuracy of the resulting model with that of INN. This helps us understand the nature of z as well as the improvements made by ψ.
Implementation details: Using the train split of the data we gather z_train from fθ1. Note that the choice of input ỹ is irrelevant for producing z. Next, we train a multi-class logistic regression classifier using stochastic gradient descent on (z_train, y_train). Additional training details are shared in appendix D.8. For inference, we pass z_test to the learnt classifier and record the predicted class. The INN models selected for extracting z correspond to INN (N = 9) in table 2.
Results: Table 3 shows the performance of a classifier trained on top of z in comparison to INN (N = 9). We observe that for the image classification datasets CIFAR-10 and STL-10, the classification performance of the two approaches is highly comparable. However, we observe significant differences for the fine-grained visual classification datasets. We believe that due to the high visual dissimilarity between categories in CIFAR-10 and STL-10, the obtained z is sufficient to perform the task of classification. However, in the fine-grained datasets, since the categories are visually similar, ψ plays an important role in further refining the representations. These observations are in line with our hypothesis behind the working of the model. To further highlight the nature of z and ψ, we perform additional experiments in appendix C.2.
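A minimal sketch of this linear probe on z, assuming the embeddings have already been extracted with the INN image encoder. The SGDClassifier settings follow appendix D.8 (in recent scikit-learn versions the loss is named "log_loss"); the function name is illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def linear_probe_accuracy(z_train: np.ndarray, y_train: np.ndarray,
                          z_test: np.ndarray, y_test: np.ndarray) -> float:
    """Multi-class logistic regression trained with SGD on frozen image embeddings z (Table 3)."""
    clf = SGDClassifier(loss="log", tol=1e-5)   # loss='log' => logistic regression, as in appendix D.8
    clf.fit(z_train, y_train)
    return clf.score(z_test, y_test)            # test accuracy
```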
9 EXTENSION TO LARGER DATASETS
So far, we have observed that the approach of utilizing labels as an additional cue allows us to perform the task of multi-class classification. However, the datasets considered only included a relatively small number of unique categories. In this section, we reflect upon the shortcomings of adopting our pursued approach and, subsequently, the failure modes of INNs.
• For smaller datasets, the larger the value of N, the higher the classification accuracy. If we extend this logic to larger datasets such as ImageNet (Deng et al., 2009), the best value of N will be close to 1000. Using a traditional batch size (b) of 128 would push the effective batch size to 128,000, larger than the largest considered by large mini-batch training methods (Goyal et al., 2017). To counter such large values of N, one can significantly reduce b, which in turn extends the training time from days to months. In order to draw relevant conclusions in a reasonable time frame, we limit the discussion in this section to CUB-200, which contains 200 unique categories.
• The latent dimension plays an important role in the predictive performance. We conducted experiments on CUB-200 and CUB-20 by varying the latent dimension of the model over 64, 128, 512, and 1024, and observe that the impact is larger for CUB-200 than for CUB-20. The details of the corresponding experiment are described in appendix C.3.
• The large imbalance of positive and negative samples arising as a result of increasing N can destabilize INN training. For similar reasons, we observe B-ML training to collapse as well. We can balance the weights for positive and negative targets by adjusting their contributions to the loss; however, we find that this approach impedes INN performance. As an alternative, instead of training an INN from scratch with a large value of N, one can initialize the weights from an INN trained on a smaller value N′, where N′ < N. By doing this, we find that not only does INN(N) surpass the accuracy of INN(N′), it also performs comparably to the baseline. The corresponding experimentation details and results are provided in appendix C.4.
10 DISCUSSION & CONCLUSION
As opposed to the traditional approach, we explored the applicability of a target driven method. Specifically, we modelled the question ‘Does the given image belong to category ỹ?’. We showed that it is possible to tackle the multi-class classification problem from a non-traditional perspective. Our aim was not to show that the pursued approach is better; rather, we aimed to explore and highlight the pros and cons of this unexplored paradigm. Our approach adapts the classical one-vs-rest approach in a modern deep learning setting. To achieve this goal, we introduced INNs, which rely on a pair of input image and target label to produce a response. By inferring exhaustively with all the target categories we arrive at the final decision. Our study involving class activation maps revealed that INNs utilize much larger regions of the input image to generate features. We hypothesize that the imposed independence of the image and label embeddings allows the image encoder to attend to larger regions rather than the highly discriminative features of traditional approaches. We also explored the scenarios where the learned image features are adequate for learning a traditional classifier on top; this was observed for cases where the categories are visually dissimilar. Label embeddings refine the coarse image representations immensely for fine-grained tasks.
By pitting INNs against strong baselines we were able to highlight the strengths of our adopted approach in comparison. The INNs outperformed the baselines on all the datasets (Y′ < 50) considered for image classification and fine-grained image classification. Additional experiments on out-of-distribution detection (OOD, appendix C) and label embeddings (appendix B) help broaden our understanding of the one-vs-rest setting. The OOD analysis shows that INN performs comparably to the contrastive learning based SCL. An indicative qualitative result on the learnt label embeddings shows that similar categories often have nearby label embeddings. On the downside, we witnessed the difficulties of extending the method to larger datasets. We consider the dependency on the latent dimension and on N to be the main reasons for this limitation. To make the approach scalable, we believe constructing a smarter negative sampling approach will be the direction moving forward. We see numerous avenues for future research. Our proposed direction of training a neural network is comparable to classical one-vs-rest approaches (Sánchez et al., 2013). Due to the rapid adoption of deep learning approaches, the classical one-vs-rest direction has largely phased out, and covering and comparing all the aspects of traditionally trained neural networks that have evolved over the past years in a single work is not feasible. As a result, there is a multitude of directions for adopting a one-vs-rest approach as devised in this work. Some directions include, but are not limited to, object detection (Ren et al., 2015), image segmentation (Chen et al., 2018), and anomaly detection (Chandola et al., 2009). Our main focus will be to extend our experimentation theme (and not just the INN) to these problems and analyse its subsequent impact. We will publicly share the source code supplied in the supplementary material to facilitate brisk research.
A APPENDIX
A.1 NOTATIONS
A.2 DATASET STATISTICS
A.3 GRAD-CAM VISUALIZATIONS
We provide more visualisations comparing the recognised salient regions across baselines in figure 4.
A.4 EXPERIMENT: VGG IMAGE ENCODER
In this section we replace the image encoder of the INN with a VGG-11 (with batch normalisation) model. For the INN, we use the features from the last convolutional block after adaptive average pooling.
Results: Table 6 shows that the VGG based INN outperforms the baselines by a large margin. For CIFAR-10, we suspect that, similar to the Resnet based INN, the small size of the input image restricts the added advantage of using the target driven approach.
A.5 EXPERIMENT: RESCALED STL-10
For this experiment, we downscale the STL-10 images to 32×32 to bring them down to the same size as CIFAR-10. For training, we use hyper-parameters identical to those used for training the model on the unaltered STL-10 dataset.
Results: We notice in table 7 that the INN performance is quite similar to that of the baseline when the image size is small; a similar trend was observed in the case of CIFAR-10. We believe that the INNs and the baseline both utilize an equal portion of the input image to generate representations, which leads to similar accuracy.
B EXPERIMENT: LABEL EMBEDDINGS, ψ
We have witnessed that INNs rely on ψ and z to make a correct prediction. Also, depending on the content of the dataset, ψ can play a vital role in further improving the performance. In this experimental setup, we aim to explore ψ in more depth; specifically, how the different encoded labels relate to each other.
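The comparison we perform, detailed in the implementation paragraph below, can be sketched as follows; the label encoder is the one from the INN sketch given earlier, and the function name is illustrative.

```python
import torch

@torch.no_grad()
def nearest_label_indices(label_encoder, num_classes: int) -> torch.Tensor:
    """For each category, return the index of the category with the closest label embedding (L2)."""
    onehots = torch.eye(num_classes)            # all one-hot label vectors
    psi = label_encoder(onehots)                # (Y', d) label embeddings
    dists = torch.cdist(psi, psi, p=2)          # pairwise L2 distances (the quantity behind Table 8)
    dists.fill_diagonal_(float("inf"))          # exclude self-matches
    return dists.argmin(dim=1)                  # nearest matching label per category
```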
We believe that the visual content of images drives the learning of the label embeddings, i.e. similar visual categories have nearby label representations. Though the results presented here are qualitative in nature, we believe they provide adequate evidence to back our claim.
Implementation details: We select INN (N = 9) for CIFAR-10 in this study. We generate ψ_Y′ = {fθ2(ỹ) | ỹ ∈ Y′}. Next, we compute the L2 distance between every pair of entries in ψ_Y′ as a measure of similarity. In table 8 we report the nearest matching label (smallest distance) for every category in the dataset.
Results: Though not perfect, for many source categories the nearest matching categories tend to be visually similar, for example the pairs truck-car and bird-airplane. However, we also see some non-apparent pairings such as deer-car and frog-car.
C EXPERIMENT: OUT-OF-DISTRIBUTION DETECTION
In this section, we examine the robustness of the learnt classifiers for detecting out-of-distribution (OOD) images. The standard approach is to utilise the predicted confidence to distinguish in- and out-of-distribution data (Hendrycks & Gimpel, 2017). Following this framework, we report the AU-ROC for models trained on the chosen datasets while tested on the out-of-distribution datasets LSUN (Yu et al., 2015), Tiny ImageNet (Le & Yang, 2015) and Fashion-MNIST (Xiao et al., 2017). The out-distribution datasets are standardised using the mean and standard deviation of the in-distribution datasets. The INN models chosen correspond to INN (N = 9) in table 2.
Results: The results reported in table 9 show that SCL and INN outperform the traditional baselines by a large margin for the majority of the datasets. The comparatively lower performance of INN for CUB-20 and Pets can be attributed to its limited training: the corresponding INNs were trained with N = 9, and we expect the OOD performance to improve as the value of N used in training is increased.
(Table values: 90.81%, 90.53%, 86.5%, 90.02%, 90.76%.)
C.1 EXPERIMENT: DIFFERENT ACTIVATIONS FOR LABEL ENCODER
In the main paper, the label encoder branch consisted of a 2-layered MLP with no activation. In this experiment, we apply the following 4 activations to the label encoder units and train INN (N = 9, b = 32) on the STL-10 dataset:
1. RELU (Glorot et al., 2011)
2. Leaky-RELU (Maas et al., 2013)
3. Sigmoid
4. Tanh
Results: The results indicate marginally better accuracy for RELU and Leaky-RELU, with the Tanh and no-activation based models closely following. For Sigmoid, the performance is low; our hypothesis is that, due to the limited scaling nature of the logistic function, the features of z are under-refined. However, more extensive research is required to arrive at a stronger conclusion. We hope that our experiment provides an apt working ground for future research in this direction. To qualitatively assess the contributing regions of the image across activations, we provide Grad-CAM visualisations in figure 5. The RELU, Leaky-RELU, Tanh, and no-activation models rely on relevant regions of the input image while making the prediction; in the case of Sigmoid, we notice disorganised regions of attention.
C.2 EXPERIMENT: COMPATIBILITY OF ψ & z
To further highlight the fact that INNs do learn compatible representations and rely on both ψ and z to make an accurate prediction, we utilise the following 4 variations of ỹ for evaluating test accuracy on STL-10:
1. ỹ = y: We provide the correct class label as input.
2. ỹ ∈ Y′ − {y}: We provide a random incorrect class label as input.
3. ỹ = 1_Y′: All the values in the input label vector are set to 1.
4. ỹ = 0_Y′: All the values in the input label vector are set to 0.
For evaluation, we record, for each individual query, the argmax between the yes and no responses. If the representations are compatible, we should see a higher number of yes responses for case 1 than for all the other variations.
(Table 11 values for cases 1–4: 85.2%, 0.004%, 0.0%, 0.0%.)
Results: Table 11 shows that the label encoding ψ plays a vital role in the classification of the input images. Only when the image is paired with its corresponding ground-truth ỹ does the INN predict yes the majority of the time. For ỹ corresponding to an incorrect class, the number of samples predicted as yes is quite insignificant, and for the other two cases the INN never makes a yes prediction. This shows that INNs do rely on compatible z and ψ to generate a correct class prediction.
Visualisation: To further highlight the compatibility of ψ and z we generate a UMAP (McInnes et al., 2018) plot. UMAP is a non-linear dimension reduction technique which has been utilised for visualising high dimensional data. Figure 6 corresponds to the joint representations generated for training images (drawn as blobs) and a single test image of the STL-10 dataset (shown as a star). For generating the joint representations h_train corresponding to the training set, the ground-truth labels y_train are utilised, whereas for generating the test h_test we provide every ỹ ∈ Y′; consequently, 10 points are generated for the single test image. The ground-truth label of the test image corresponds to airplane (integer label 0). The figure shows that only when the input label is the one-hot encoded vector corresponding to the ground-truth label airplane does h for the test image overlap with the training cluster (red dashed box). For the other input labels, the test sample lies further away from its corresponding ỹ cluster.
C.3 EXPERIMENT: VARYING HIDDEN DIMENSION, d
In this experiment we aim to determine the impact of the latent dimension on the training of an INN. We conduct this experiment on the CUB-200 and CUB-20 datasets with N = 1. The latent dimension is selected from the values {64, 128, 512, 1024} for a Resnet-18 based INN.
Results: The results in figure 7 indicate the relevance of the dimensionality of the latent representations. The impact of the latent dimension is larger for CUB-200 than for CUB-20: for CUB-200 the accuracy increases with increasing dimensionality, whereas for CUB-20 the performance saturates at roughly d/Y′ = 10 and decreases thereafter. The results indicate that for training on larger datasets we are required to employ networks with comparatively larger latent dimensions.
C.4 EXPERIMENT: CLASSIFICATION WITH CUB-200
In order to apply INN to CUB-200 we replace the Resnet-18 image encoder with Resnet-50, for which the latent dimension is 2048. The baseline for this study is B-T; for B-ML, we found that the network does not train and obtains an accuracy of 0.5%, which corresponds to random chance. Even though an INN trains for small values of N, it fails to match that performance for larger values of N. In order to enable training of an INN when N is large, we initialise the weights from INN(N′), where N′ < N. For example, we first train the model with N′ = 9 from scratch, and for the subsequent fine-tuning we select the value N = 15; if we wish to train on a larger value of N such as 24, we initialize the weights from the previously obtained INN(N = 15). In this study, we select N ∈ {15, 24, 31, 41, 51} and N′ = 9.
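A minimal sketch of the iterative fine-tuning schedule just described (results follow below). The helper `train_inn` is assumed to encapsulate one full training run at a given N, and all names and the schedule values are taken from the text rather than from released code.

```python
def curriculum_train(make_resnet50_inn, train_inn, schedule=(9, 15, 24, 31, 41, 51)):
    """Warm-start INN training on progressively larger numbers of negative pairings N (appendix C.4)."""
    model = make_resnet50_inn()                  # Resnet-50 based INN, d = 2048
    model = train_inn(model, N=schedule[0])      # N' = 9, trained from scratch
    for N in schedule[1:]:
        model = train_inn(model, N=N)            # each run initializes from the previous INN(N')
    return model
```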
Results: Figure 8 shows the increase in accuracy for an INN with increasingN by applying iterative fine-tuning. The small increment in accuracy at each step is due to proportionally smaller increment of N . N = 41 is roughly 20% of the categories of CUB-200. We expect the INN to match and even surpass with higher values of N . However, we did observe the large jump in training time due to lowering of b to accommodate for increasing N . The per epoch time increases from 32 seconds for INN(N = 9) to 300 seconds for INN(N = 41). D TRAINING DETAILS We firstly cover B-T, B-ML and INN training hyper-parameters. Then we move on to the SCL training hyper-parameters. Baselines(B-T, B-ML) are referred to as N=0 in this section. Deep learning framework used is Pytorch(Paszke et al., 2017) version 1.2. D.1 CIFAR-10 • Training pre-processing: Random(cropping(32×32, padding=4), rotation(±15), horizontal flipping), normalisation(train mean, std. dev). • Test pre-processing: Normalisation(train mean and std. dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 75, 150, 225, 275 • Batch sizes: (N=0, b=256), (N=1, b=128), (N={3, 7, 9}, b=64) D.2 STL-10 • Training pre-processing: Random(cropping(96×96, padding=4), rotation(±15), horizontal flipping), normalisation(train mean, std. dev). • Test pre-processing: Normalisation(train mean and std. dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 150, 200, 250, 300 • Batch sizes: (N=0, b=128), (N=1, b=128), (N=3, b=64), (N={7,9}, b=32) D.3 BMW-10 • Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std dev). • Test pre-processing: Center Cropping(cropping(224×224), Normalisation(train mean and std dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 150, 225, 300 • Batch sizes: (N={0, 1, 3, 7}, b=32), (N=9, b=16) D.4 CUB-20 • Categories: Black footed Albatross, Laysan Albatross, Sooty Albatross, Groove billed Ani, Crested Auklet, Least Auklet, Parakeet Auklet, Rhinoceros Auklet, Brewer Blackbird, Red winged Blackbird, Rusty Blackbird, Yellow headed Blackbird, Bobolink, Indigo Bunting, Lazuli Bunting, Painted Bunting, Cardinal, Spotted Catbird, Gray Catbird, Yellow breasted Chat These are the first 20 categories as they appeared in torchvision’s(Marcel & Rodriguez, 2010) implementation of CUB-200. • Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std dev). • Test pre-processing: Center Cropping(cropping(224×224), Normalisation(train mean and std dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 150, 250, 300 • Batch sizes: (N={0, 1, 3, 7, 9}, b=32) D.5 PETS • Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std dev). • Test pre-processing: Center Cropping(cropping(224×224), Normalisation(train mean and std dev). • Epochs: 350 • Start learning rate: 0.1 • Learning rate drop factor: 0.2 • Learning rate drop epochs: 150, 225, 300 • Batch sizes: (N=0, b=128), (N=1, b=128), (N={3, 7}, b=64), (N=9, b=32) D.6 CUB-200 • Training pre-processing: Resized(300×300), Random(cropping(224×224), horizontal flipping), normalisation(train mean and std dev). 
• Test pre-processing: Center Cropping(cropping(224×224), Normalisation(train mean and std dev). • N=0, Epochs=350, Start learning rate = 0.1, Drop factor = 0.2, Drop epochs=[125, 200, 250, 300], batch size=128 • N=9, Epochs=500, Start learning rate = 0.1, Drop factor = 0.2, Drop epochs=[100, 200, 300, 400, 450], batch size=64 • N=[15, 24, 31], Epochs=300, Start learning rate = 0.005, Drop factor = 0.2, Drop epochs=[100, 200, 250], batch size=[32, 20, 16] • N=41, Epochs=300, Start learning rate = 0.0025, Drop factor = 0.2, Drop epochs=[100, 200, 250], batch size=12 • N=51, Epochs=300, Start learning rate = 0.001, Drop factor = 0.2, Drop epochs=[100, 200, 250], batch size=10 D.7 SCL TRAINING Image pre-processing steps are identical to those mentioned in the corresponding previous subsections. Common parameters: Temperature=0.1, decay(0.0001), cosine(True), and epochs=500. • CIFAR-10 – Learning rate: 0.05 – Batch size: 256 • STL-10 – Learning rate: 0.5 – Batch size: 256 • BMW-10 – Learning rate: 0.1 – Batch size: 128 • CUB-20 – Learning rate: 0.5 – Batch size: 128 • Pets – Learning rate: 0.1 – Batch size: 128 D.8 LINEAR CLASSIFICATION USING z We have used the SGDClassifier provided by sklearn(Pedregosa et al., 2011) library. Apart from the loss(loss=‘log’) and tol(tol=1e-5) we use the default values to train the model.
1. What is the main contribution of the paper on image classification?
2. What are the strengths and weaknesses of the proposed approach compared to conventional baselines and multi-label classification?
3. How does the reviewer assess the experimental evaluation, and what improvements could be made?
4. How does the element-wise product followed by a linear classifier compare to a cosine classifier?
5. How can the activation scaling by psi be interpreted, and how might it improve the effectiveness of the internal CNN features?
6. What are some limitations of the Grad-Cam heat maps analysis, and how could it be improved?
7. Why did the authors not explore the scalability of their approach to datasets with many labels, and how might this be addressed?
8. Overall, what is the reviewer's impression of the paper, and what changes would make it ready for publication at ICLR?
Review
Review Summary This paper considers an image classification setup where the input is an image accompanied by a category label, the goal is to predict if the category label applies to the image or not. The difference of this problem formulation and binary multi-label classification is subtle. The approach consists in learning a label embedding of the same dimension as a vectorial image embedding provided by a CNN. The image and label embedding are then multiplied point-wise, and fed to a final fully-connected layer that produces a score to indicate if the label applies to the image yes or no. To classify an image across multiple classes, it is assigned to the class yielding the most confident score. To train the model, the authors optimise the binary cross entropy for the correct label, and a random subset of N negative labels. The model is also evaluated for out-of-distribution (OOD) detection, using the confidence score of the highest scoring class. The results of the model are compared against conventional baselines that are trained using the cross-entropy across all classes, or using a multi-label setting using a binary cross-entropy for each class in parallel. Finally results are compared against Supervised Contrastive Learning (Khosla et al, 2020). Qualitative experiments consider Grad-Cam heat maps, and relatedness of labels based on similarity of the learned label embeddings (both are quite anecdotal, see below). Quantitative experiments assess classification accuracy and OOD detection across a collection of five fairly small datasets (CIFAR-10, STL-10, BMW-10, CUB-20, Pets). Substantial improvements in classification accuracy are observed over the baselines (except for CIFAR-10 where results are comparable). For OOD detection significantly improved results are again observed. Image classification accuracy on the CUB-200 dataset, reported in supplementary, is worse than the multi-class cross-entropy trained baseline. Comments 1 - Overall I found this an interesting paper, but the experimental evaluation falls a bit short. Given how simple the idea is, I expected a more thorough experimental analysis, including evaluation on large benchmarks with many labels (see also below). 2 - I felt that the relation to binary multi-label classification was not highlighted clearly enough. In my view, the proposed approach is basically doing binary multi-label classification (predict for each label if it is applicable to the image yes/no). The used loss function with negative sampling and weighting is not specific to the proposed approach and applies to the multi-label baseline as well. Which is recognised to some extent in the last paragraph (bullet point?!) of Section 10. In a way, we can understand the proposed model as an alternative architecture for multi-label classification. Note that if we fix the last weight vector in the predictor to a vector of ones, then we obtain exactly the multi-class baseline (except that the label embeddings are low-rank by the two-layer label embedding branch). 3 - The element-wise product followed by linear classifier can also be interpreted as a generalised dot-product (by setting classifier weights to vector of 1's). From this perspective it seems natural to L2 normalize z and psi before the product, to get a cosine-classifier. Did you try that? 4 - The activation scaling by psi can also be seen as a special case of FiLM conditioning [a]. This suggests applying such modulations throughout the image encoder. 
In this manner the internal CNN features can be made class specific. This might improve effectiveness, but will prevent shared computation of the CNN across multiple labels, see below.
5 - In section 5 the GradCam heat maps are interpreted for the proposed INN method and baselines. This analysis is, however, quite limited. (i) It is based on only 3 images. (ii) The analysis is only qualitative. (iii) It is not clear what the conclusion is except that the attended areas seem larger for B-ML and INN than the other baselines. I would suggest including quantitative analysis (eg measure the area of attention), and perhaps correlating it with object locations using a dataset for which semantic segmentations are available. It would also be useful to give a brief description of how the heat maps are computed, and to what extent they are comparable across images and classifier networks.
6 - Section 6: do you expect further gains in classification accuracy beyond N=9 for CUB-20 and Pets? For these datasets the classification accuracy has not saturated yet with growing N, and more negative labels are available. It is not clear why this experiment was not included, given that for CUB-200 similar experiments are provided in the supplementary.
7 - Section 8: When considering the relatedness of the learned class embeddings, a comparison with the other methods would be useful. The weight vectors of the last fully connected layer of classifiers trained with cross-entropy loss are also known to correlate with semantic class relatedness. A quantitative evaluation would be welcome to assess this aspect. Imagenet might be useful, leveraging the class hierarchy.
8 - Section 10 discusses challenges to scale the presented approach to datasets with many labels. This suggests that using N negative labels scales the computational and memory cost of the model linearly, as if using a batch N times larger. This is, however, ignoring the fact that the most costly part is the CNN, which needs to be executed only once per image. After which the network can bifurcate to branch to the different loss terms for the labels. This suggests a scaling similar to the multi-class and multi-label baselines. The same can be done for testing. It is a pity the authors did not explore this in their paper.
Overall impression
The authors explore an interesting rephrasing of the conventional multi-class image classification setup, much related but slightly different from the multi-label setting. Experimental results are encouraging, but are lacking in some respects (disappointing results on CUB-200, lack of testing on ImageNet, Grad-cam and label relatedness experiments are very anecdotal and lacking quantitative assessment). The strong classification accuracy results, given the simplicity of the approach and similarity to the multi-label setting, are appealing. My assessment, however, is that the paper is not ready for publication at ICLR, and could be further improved.
Detailed comments:
Section 6: define the abbreviation "fgvc" and write it in capitals, or just write it full out, it's used only at one other place.
In section 6 "... which supports our theory." Please tone this down, no theory was provided, perhaps use "hypothesis".
Section 6: "Thirdly, as the value of N increases the performance of INN increases. We believe this is a direct consequence of providing more negative label examples for a given input image during training." Well yes, obviously, since N is the nr of negative label samples used in training.
This phrase is a bit vacuous. Typos Abstract: "... classification via. deep ..." [a] Perez, E.; Strub, F.; Vries, H. D.; Dumoulin, V. & Courville, A. FiLM: Visual Reasoning with a General Conditioning Layer. AAAI, 2018
ICLR
Title Ridgeless Interpolation with Shallow ReLU Networks in $1D$ is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions
Abstract We prove a precise geometric description of all one layer ReLU networks z(x; θ) with a single linear unit and input/output dimensions equal to one that interpolate a given dataset D = {(xi, f(xi))} and, among all such interpolants, minimize the ℓ2-norm of the neuron weights. Such networks can intuitively be thought of as those that minimize the mean-squared error over D plus an infinitesimal weight decay penalty. We therefore refer to them as ridgeless ReLU interpolants. Our description proves that, to extrapolate values z(x; θ) for inputs x ∈ (xi, xi+1) lying between two consecutive datapoints, a ridgeless ReLU interpolant simply compares the signs of the discrete estimates for the curvature of f at xi and xi+1 derived from the dataset D. If the curvature estimates at xi and xi+1 have different signs, then z(x; θ) must be linear on (xi, xi+1). If in contrast the curvature estimates at xi and xi+1 are both positive (resp. negative), then z(x; θ) is convex (resp. concave) on (xi, xi+1). Our results show that ridgeless ReLU interpolants achieve the best possible generalization for learning 1d Lipschitz functions, up to universal constants.
1 INTRODUCTION
The ability of overparameterized neural networks to simultaneously fit data (i.e. interpolate) and generalize to unseen data (i.e. extrapolate) is a robust empirical finding that spans the use of deep learning in tasks from computer vision Krizhevsky et al. (2012); He et al. (2016), natural language processing Brown et al. (2020), and reinforcement learning Silver et al. (2016); Vinyals et al. (2019); Jumper et al. (2021). This observation is surprising when viewed from the lens of traditional learning theory Vapnik & Chervonenkis (1971); Bartlett & Mendelson (2002), which advocates for capacity control of model classes and strong regularization to avoid overfitting. Part of the difficulty in explaining conceptually why neural networks are able to generalize is that it is unclear how to understand, concretely in terms of the network function, various forms of implicit and explicit regularization used in practice. For example, a well-chosen initialization for gradient-based optimizers can strongly impact the quality of the resulting learned network Mishkin & Matas (2015); He et al. (2015); Xiao et al. (2018). However, the specific geometric or analytic properties of the learned network ensured by a successful initialization scheme are hard to pin down. In a similar vein, it is standard practice to experiment with (weak) explicit regularizers such as weight decay, obtained by adding an ℓ2 penalty on model parameters to the underlying empirical risk. While the effect of weight decay on parameters is transparent, it is typically challenging to reformulate this into properties of a learned non-linear model. In the simple setting of one layer ReLU networks this situation has recently become clearer. Specifically, starting with an observation in Neyshabur et al. (2014), the articles Savarese et al. (2019); Ongie et al. (2019); Parhi & Nowak (2020a;b; 2021) explore and develop the fact that ℓ2 regularization on parameters in this setting is provably equivalent to penalizing the total variation of a certain Radon transform of the network function (cf. e.g. Theorem 3.2).
While the results in these articles hold for any input dimension, in this article we consider the simplest case of input dimension 1. In this setting, our main contributions are:
2 SETUP AND INFORMAL STATEMENT OF RESULTS
Consider a one layer ReLU network
z(x) = z(x; θ) := ax + b + Σ_{j=1}^{n} W_j^{(2)} [ W_j^{(1)} x + b_j^{(1)} ]_+ ,   [t]_+ := ReLU(t) = max{0, t},   (1)
with a single linear unit and input/output dimensions equal to one. (The linear term ax + b is not really standard in practice but, as in prior work Savarese et al. (2019); Ongie et al. (2019); Parhi & Nowak (2020a), leads to a cleaner mathematical formulation of results.) For a given dataset
D = {(xi, yi), i = 1, . . . , m},   −∞ < x1 < · · · < xm < ∞,   yi ∈ R,
if the number of datapoints m is smaller than the network width n, there are infinitely many choices of the parameter vector θ for which z(x; θ) interpolates (i.e. fits) the data:
z(xi; θ) = yi,   ∀ i = 1, . . . , m.   (2)
Without further information about θ, little can be said about the function z(x; θ) for x in intervals (xi, xi+1) between consecutive datapoints when n is much larger than m. This precludes useful generalization guarantees uniformly over all θ, subject only to the interpolation condition (2). In practice interpolants are not chosen arbitrarily. Instead, they are learned by some variant of gradient descent starting from a random initialization. For a given architecture, initialization, optimizer, regularizer, and so on, understanding how the learned network uses the known labels {yi} to assign values of z(x; θ) for x not in the dataset is an important open problem. To make progress, a fruitful line of inquiry in prior work has been to search for additional complexity measures based on margins Wei et al. (2018), PAC-Bayes estimates Dziugaite & Roy (2017; 2018); Nagarajan & Kolter (2019), weight matrix norms Neyshabur et al. (2015); Bartlett et al. (2017), information theoretic compression estimates Arora et al. (2018), Rademacher complexity Golowich et al. (2018), etc (see Jiang et al. (2019) for a review and comparison). While perhaps not explicitly regularized, these complexity measures are hopefully small in trained networks, giving additional capacity constraints. In this article, we take a different approach. We do not seek results valid for any network architecture. Instead, our goal is to describe completely, in concrete geometrical terms, the properties of one layer ReLU networks z(x; θ) that interpolate a dataset D with the minimal possible ℓ2 penalty
C(θ) = C(θ, n) = Σ_{j=1}^{n} ( |W_j^{(1)}|^2 + |W_j^{(2)}|^2 )
on the neuron weights. More precisely, we study the space of ridgeless ReLU interpolants
RidgelessReLU(D) := { z(x; θ) | z(xi; θ) = yi ∀ (xi, yi) ∈ D, C(θ) = C* },   (3)
of a dataset D, where
C* := inf_{θ, n} { C(θ, n) | z(xi; n, θ) = yi ∀ (xi, yi) ∈ D }.
Intuitively, elements in RidgelessReLU(D) are ReLU nets that minimize a weakly penalized loss
L(θ; D) + λ C(θ),   λ ≪ 1,   (4)
where L is an empirical loss, such as the mean squared error over D, and the strength λ of the weight decay penalty C(θ) is infinitesimal. It is plausible but by no means obvious that, with high probability, gradient descent from a random initialization and a weight decay penalty whose strength decreases to zero over training converges to an element in RidgelessReLU(D). This article does not study optimization, and we therefore leave this as an interesting open problem. Our main result is a simple description of RidgelessReLU(D) and can informally be stated as follows:
Theorem 2.1 (Informal Statement of Theorem 3.1). Fix a dataset D = {(xi, yi), i = 1, . . . , m}. Each datapoint (xi, yi) gives an estimate
εi := sgn(si − si−1),   si := (yi+1 − yi) / (xi+1 − xi)
for the local curvature of the data (Figure 1).
Among all continuous and piecewise linear functions f that fit D exactly, the ones in RidgelessReLU(D) are precisely those that:
• Are convex (resp. concave) on intervals (xi, xi+1) at which neighboring datapoints agree on the local curvature in the sense that εi = εi+1 = 1 (resp. εi = εi+1 = −1). On such intervals f lies below (resp. above) the straight line interpolant of the data (Figs. 2 and 3).
• Are linear (or more precisely affine) on intervals (xi, xi+1) when neighboring datapoints disagree on the local curvature in the sense that εi · εi+1 ≠ 1.
Before giving a precise statement of our results, we mention that, as described in detail below, the space RidgelessReLU(D) has been considered in a number of prior articles Savarese et al. (2019); Ongie et al. (2019); Parhi & Nowak (2020a). Our starting point will be the useful but abstract characterization of RidgelessReLU(D) they obtained in terms of the total variation of the derivative of z(x; θ) (see (5)). We note also that the conclusions of Theorem 2.1 (and Theorem 3.1) hold under seemingly very different hypotheses from ours. Namely, instead of ℓ2-regularization on the parameters, Blanc et al. (2020) considers SGD training for mean squared error with iid noise added to labels. Their Theorem 2 shows (modulo some assumptions about interpreting the derivative of the ReLU) that, among all ReLU networks with a linear unit that interpolate a dataset D, the only ones that minimize the implicit regularization induced by adding iid noise to SGD are precisely those that satisfy the conclusions of Theorem 2.1 and hence are exactly the networks in RidgelessReLU(D). This suggests that our results hold under much more general conditions. Further, our characterization of RidgelessReLU(D) in Theorem 3.1 immediately implies strong generalization guarantees uniformly over RidgelessReLU(D). We give a representative example in Corollary 3.3, which shows that such ReLU networks achieve the best possible generalization error on Lipschitz functions, up to constants. Finally, note that we allow networks z(x; θ) of any width, but that if the width n is too small relative to the dataset size m, then the interpolation condition (2) cannot be satisfied. Also, we point out that in our formulation of the cost C(θ) we have left both the linear term ax + b and the neuron biases unregularized. This is not standard practice but seems to yield the cleanest results.
3 STATEMENT OF RESULTS AND RELATION TO PRIOR WORK
Every ReLU network z(x; θ) is a continuous and piecewise linear function from R to R with a finite number of affine pieces. Let us denote by PL the space of all such functions and define
PL(D) := { f ∈ PL | f(xi) = yi ∀ i = 1, . . . , m }
to be the space of piecewise linear interpolants of D. Perhaps the most natural element in PL(D) is the “connect-the-dots interpolant” fD : R → R given by
fD(x) := ℓ1(x) for x < x2,   ℓi(x) for xi < x < xi+1 (i = 2, . . . , m − 2),   ℓ_{m−1}(x) for x > x_{m−1},
where for i = 1, . . . , m − 1 we set
ℓi(x) := (x − xi) si + yi,   si := (yi+1 − yi) / (xi+1 − xi).
See Figure 1. In addition to fD, there are many other elements in RidgelessReLU(D). Theorem 3.1 gives a complete description of all of them, phrased in terms of how they may behave on intervals (xi, xi+1) between consecutive datapoints.
Our description is based on the signs
εi = sgn(si − si−1),   2 ≤ i ≤ m − 1,
of the (discrete) second derivatives of fD at the inputs xi from our dataset.
Theorem 3.1. The space RidgelessReLU(D) consists of those f ∈ PL(D) satisfying:
1. f coincides with fD on the following intervals:
(1a) Near infinity, i.e. on the intervals (−∞, x2), (x_{m−1}, ∞).
(1b) Near datapoints that have zero discrete curvature, i.e. on intervals (x_{i−1}, x_{i+1}) with i = 2, . . . , m − 1 such that εi = 0.
(1c) Between datapoints with opposite discrete curvature, i.e. on intervals (xi, xi+1) with i = 2, . . . , m − 1 such that εi · εi+1 = −1.
2. f is convex (resp. concave) and bounded above (resp. below) by fD between any consecutive datapoints at which the discrete curvature is positive (resp. negative). Specifically, suppose for some 3 ≤ i ≤ i + q ≤ m − 2 that xi and x_{i+q} are consecutive discrete inflection points in the sense that
ε_{i−1} ≠ εi,   εi = · · · = ε_{i+q},   ε_{i+q} ≠ ε_{i+q+1}.
If εi = 1 (resp. εi = −1), then, restricted to the interval (xi, x_{i+q}), f is convex (resp. concave) and lies above (resp. below) the incoming and outgoing support lines and below (resp. above) fD:
εi = 1 ⟹ max{ ℓ_{i−1}(x), ℓ_{i+q}(x) } ≤ f(x) ≤ fD(x),
εi = −1 ⟹ min{ ℓ_{i−1}(x), ℓ_{i+q}(x) } ≥ f(x) ≥ fD(x),
for all x ∈ (xi, x_{i+q}).
We refer the reader to §A for a proof of Theorem 3.1. Before doing so, let us illustrate Theorem 3.1 as an algorithm that, given the dataset D, describes all elements in RidgelessReLU(D) (see Figures 2 and 3):
Step 1 Linearly interpolate the endpoints: by property (1), f ∈ RidgelessReLU(D) must agree with fD on (−∞, x2) and (x_{m−1}, ∞).
Step 2 Compute the discrete curvature: for i = 2, . . . , m − 1 calculate the discrete curvature εi at the datapoint xi.
Step 3 Linearly interpolate on intervals with zero curvature: for all i = 2, . . . , m − 1 at which εi = 0, property (1) guarantees that f coincides with fD on (x_{i−1}, x_{i+1}).
Step 4 Linearly interpolate on intervals with ambiguous curvature: for all i = 2, . . . , m − 1 at which εi · εi+1 = −1, property (1) guarantees that f coincides with fD on (xi, xi+1).
Step 5 Determine convexity/concavity on the remaining intervals: all intervals (xi, xi+1) on which f has not yet been determined occur in sequences (xi, xi+1), . . . , (x_{i+q−1}, x_{i+q}) on which εi+j = 1 for all j = 0, . . . , q or εi+j = −1 for all j = 0, . . . , q. If εi = 1 (resp. εi = −1), then f is any convex (resp. concave) function bounded above (resp. below) by fD and below (resp. above) by the support lines ℓi(x), ℓ_{i+q}(x).
The starting point for the proof of Theorem 3.1 comes from the prior articles Neyshabur et al. (2014); Savarese et al. (2019); Ongie et al. (2019), which obtained an insightful “function space” interpretation of RidgelessReLU(D) as a subset of PL(D). Specifically, a simple computation (cf. e.g. Theorem 3.3 in Savarese et al. (2019) and also Lemma A.14 below) shows that fD achieves the smallest value of the total variation ||Df||_TV of the derivative Df among all f ∈ PL(D). (The function Df is piecewise constant and ||Df||_TV is the sum of the absolute values of its jumps.) Part of the content of the prior work Neyshabur et al. (2014); Savarese et al. (2019); Ongie et al. (2019) is the following result.
Theorem 3.2 (cf. Lemma 1 in Ongie et al. (2019) and around equation (17) in Savarese et al. (2019)). For any dataset D we have
RidgelessReLU(D) = { f ∈ PL(D) | ||Df||_TV = ||DfD||_TV }.   (5)
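Steps 1–4 above are fully determined by D; only Step 5 leaves any freedom. Below is a minimal NumPy sketch of the deterministic part, computing the slopes si, the curvature signs εi, and a label for each interval (xi, xi+1) indicating whether every ridgeless interpolant must equal fD there or may be any convex/concave function between the support lines and fD. The function name and label strings are illustrative.

```python
import numpy as np

def classify_intervals(x: np.ndarray, y: np.ndarray):
    """Classify each interval (x_i, x_{i+1}) for ridgeless ReLU interpolants of D = {(x_i, y_i)}.

    x must be sorted and strictly increasing. In the paper's 1-based indexing, s_i is the
    chord slope on (x_i, x_{i+1}) and eps_i = sgn(s_i - s_{i-1}) at interior datapoints.
    """
    m = len(x)
    s = np.diff(y) / np.diff(x)                      # chord slopes s_1, ..., s_{m-1}
    eps = np.zeros(m, dtype=int)
    eps[1:m-1] = np.sign(s[1:] - s[:-1])             # discrete curvature at x_2, ..., x_{m-1}

    labels = []
    for i in range(m - 1):                           # 0-based interval (x[i], x[i+1])
        if i == 0 or i == m - 2:
            labels.append("forced: f = f_D")                              # Step 1 (near the endpoints)
        elif eps[i] == 0 or eps[i + 1] == 0:
            labels.append("forced: f = f_D")                              # Step 3 (zero curvature)
        elif eps[i] * eps[i + 1] == -1:
            labels.append("forced: f = f_D")                              # Step 4 (opposite curvature)
        elif eps[i] == 1:
            labels.append("free: convex, support lines <= f <= f_D")      # Step 5
        else:
            labels.append("free: concave, f_D <= f <= support lines")     # Step 5
    return s, eps, labels
```

For the dataset {(0, 0), (1, 1), (2, 4), (3, 9)} (samples of x²), the two endpoint intervals are forced to equal fD and the middle interval (1, 2) is labeled free and convex, matching Theorem 3.1.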
[Figure 3: Steps 4-5 for generating RidgelessReLU(D) from the dataset D. (a) Step 4. (b) Step 5: one possible choice of a convex interpolant on $(x_4, x_5)$ and of a concave interpolant on $(x_6, x_7)$ is shown; thin dashed lines are the supporting lines that bound all interpolants below on $(x_4, x_5)$ and above on $(x_6, x_7)$.]

Theorem 3.2 shows that $\mathrm{RidgelessReLU}(\mathcal{D})$ is precisely the space of functions in $PL(\mathcal{D})$ that achieve the minimal possible total variation norm for the derivative. Thus, intuitively, functions in $\mathrm{RidgelessReLU}(\mathcal{D})$ are averse to oscillation in their slopes. The proof of this fact uses a simple idea introduced in Theorem 1 of Neyshabur et al. (2014), which leverages the homogeneity of the ReLU to translate between the regularizer $C(\theta)$ and the penalty $\|Df\|_{TV}$.

Theorem 3.1 yields strong generalization guarantees uniformly over $\mathrm{RidgelessReLU}(\mathcal{D})$. To state a representative example, suppose $\mathcal{D}$ is generated by a function $f^* : \mathbb{R} \to \mathbb{R}$: $y_j = f^*(x_j)$.

Corollary 3.3 (Sharp generalization on Lipschitz functions from Theorem 3.1). Fix a dataset $\mathcal{D} = \{(x_i, y_i),\ i = 1,\ldots,m\}$. We have
$$\sup_{f \in \mathrm{RidgelessReLU}(\mathcal{D})} \|f\|_{\mathrm{Lip}} \le \|f^*\|_{\mathrm{Lip}}. \qquad (6)$$
Hence, if $f^*$ is $L$-Lipschitz and the $x_i = i/m$ are uniformly spaced in $[0,1]$, then
$$\sup_{f \in \mathrm{RidgelessReLU}(\mathcal{D})} \ \sup_{x \in [0,1]} |f(x) - f^*(x)| \le \frac{2L}{m}. \qquad (7)$$

Proof. Observe that for any $i = 2,\ldots,m-1$ and $x \in (x_i, x_{i+1})$ at which $Df(x)$ exists we have
$$\epsilon_i(s_{i-1} - s_i) \le \epsilon_i(Df(x) - s_i) \le \epsilon_i(s_{i+1} - s_i). \qquad (8)$$
Indeed, when $\epsilon_i = 0$ the estimate (8) follows from property (1b) in Theorem 3.1. Otherwise, (8) follows immediately from the local convexity/concavity of $f$ in property (2). Hence, combining (8) with property (1a) shows that for each $i = 1,\ldots,m-1$
$$\|Df\|_{L^\infty(x_i, x_{i+1})} \le \max\{|s_{i-1}|, |s_i|\}.$$
Again using property (1a) and taking the maximum over $i$, we find
$$\|Df\|_{L^\infty(\mathbb{R})} \le \max_{1 \le i \le m-1} |s_i| = \|f_{\mathcal{D}}\|_{\mathrm{Lip}}.$$
To complete the proof of (6), observe that for every $i = 1,\ldots,m-1$
$$|s_i| = \frac{|y_{i+1} - y_i|}{x_{i+1} - x_i} = \frac{|f^*(x_{i+1}) - f^*(x_i)|}{x_{i+1} - x_i} \le \|f^*\|_{\mathrm{Lip}} \implies \|f_{\mathcal{D}}\|_{\mathrm{Lip}} \le \|f^*\|_{\mathrm{Lip}}.$$
Given any $x \in [0,1]$, let us write $x'$ for its nearest neighbor in $\{i/m,\ i = 1,\ldots,m\}$. Using that $f(x') = f^*(x')$ since $x'$ is a datapoint, we find
$$|f(x) - f^*(x)| \le |f(x) - f(x')| + |f^*(x') - f^*(x)| \le \big(\|f\|_{\mathrm{Lip}} + \|f^*\|_{\mathrm{Lip}}\big)\,|x - x'| \le \frac{2L}{m}.$$
Taking the supremum over $f \in \mathrm{RidgelessReLU}(\mathcal{D})$ and $x \in [0,1]$ proves (7).

Corollary 3.3 gives the best possible generalization error for Lipschitz functions, up to a universal multiplicative constant, in the sense that if all we knew about $f^*$ was that it is $L$-Lipschitz and we were given its values on $\{i/m,\ i = 1,\ldots,m\}$, then we cannot recover $f^*$ in $L^\infty$ to accuracy better than a constant times $L/m$. Further, the same kind of result holds with high probability if the $x_i$ are drawn independently at random from $[0,1]$, with the $2L/m$ on the right-hand side replaced by $C\log(m)L/m$ for some universal constant $C > 0$. The appearance of the logarithm is due to the fact that among $m$ iid points in $[0,1]$ the largest spacing between consecutive points scales like $C\log(m)/m$ with high probability. Similar generalization results can easily be established, depending on the level of smoothness assumed for $f^*$ and the uniformity of the datapoints $x_i$.

In writing this article, it at first appeared to the author that the generalization bounds (7) could not be directly obtained from the relation (5) of prior work. The issue is that a priori the relation (5) gives bounds only on the global value of $\|Df\|_{TV}$, suggesting perhaps that it does not provide strong constraints on local information about the behavior of ridgeless interpolants on small intervals $(x_i, x_{i+1})$. However, the relation (5) can actually be effectively localized to yield the estimates (6) and (7), but with worse constants. The idea is the following.
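As a quick sanity check of the $2L/m$ bound in (7) (again our own sketch, reusing the hypothetical connect_the_dots helper from above), one can test $f_{\mathcal{D}}$ itself, which is always an element of $\mathrm{RidgelessReLU}(\mathcal{D})$:

```python
import numpy as np

L = 3.0
f_star = lambda t: L * np.abs(np.sin(t))       # an L-Lipschitz target on [0, 1]

for m in (10, 100, 1000):
    x = np.arange(1, m + 1) / m                # x_i = i/m, uniformly spaced in (0, 1]
    y = f_star(x)
    f_D = connect_the_dots(x, y)               # connect-the-dots interpolant of D
    grid = np.linspace(0.0, 1.0, 20001)
    err = np.max(np.abs(f_D(grid) - f_star(grid)))
    print(f"m = {m:5d}   sup error = {err:.5f}   2L/m = {2 * L / m:.5f}")
```

The printed sup error stays below $2L/m$ for each $m$; Corollary 3.3 asserts that the same bound holds uniformly over all of $\mathrm{RidgelessReLU}(\mathcal{D})$, not just for $f_{\mathcal{D}}$.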
Fix $f \in \mathrm{RidgelessReLU}(\mathcal{D})$. For any $i^* = 3,\ldots,m-2$ define the left, right and central portions of $\mathcal{D}$ as follows:
$$\mathcal{D}_L := \{(x_i, y_i),\ i < i^*\}, \qquad \mathcal{D}_C := \{(x_i, y_i),\ i^*-1 \le i \le i^*+1\}, \qquad \mathcal{D}_R := \{(x_i, y_i),\ i^* < i\}.$$
Consider further the left, right, and central versions of $f$, defined by
$$f_L(x) = \begin{cases} f(x), & x < x_{i^*} \\ \ell_{i^*}(x), & x > x_{i^*} \end{cases}, \qquad f_R(x) = \begin{cases} f(x), & x > x_{i^*} \\ \ell_{i^*}(x), & x < x_{i^*} \end{cases}$$
and
$$f_C(x) = \begin{cases} f(x), & x_{i^*-1} < x < x_{i^*+1} \\ \ell_{i^*-1}(x), & x < x_{i^*-1} \\ \ell_{i^*}(x), & x > x_{i^*+1}. \end{cases}$$
Using (5), we have $\|Df_{\mathcal{D}}\|_{TV} = \|Df\|_{TV}$. Further,
$$\|Df\|_{TV} \ge \|Df_L\|_{TV} + \|Df_C\|_{TV} + \|Df_R\|_{TV},$$
which, by again applying (5), this time to $\mathcal{D}_L, \mathcal{D}_R$ and $f_L, f_R$, yields the bound
$$\|Df\|_{TV} \ge \|Df_{\mathcal{D}_L}\|_{TV} + \|Df_C\|_{TV} + \|Df_{\mathcal{D}_R}\|_{TV}.$$
Using that
$$\|Df_{\mathcal{D}}\|_{TV} = \sum_{i=2}^{m-1} |s_i - s_{i-1}|, \qquad \|Df_{\mathcal{D}_L}\|_{TV} = \sum_{i=2}^{i^*-2} |s_i - s_{i-1}|, \qquad \|Df_{\mathcal{D}_R}\|_{TV} = \sum_{i=i^*+2}^{m-1} |s_i - s_{i-1}|,$$
we derive the localized estimate
$$|s_{i^*+1} - s_{i^*}| + |s_{i^*} - s_{i^*-1}| + |s_{i^*-1} - s_{i^*-2}| \ge \|Df_C\|_{TV}.$$
Note further that
$$\|Df_C\|_{TV} \ge \max_{x \in (x_i, x_{i+1})} Df(x) - \min_{x \in (x_i, x_{i+1})} Df(x),$$
where the max and min are taken over those $x$ at which $Df(x)$ exists. The interpolation conditions $f(x_i) = y_i$ and $f(x_{i+1}) = y_{i+1}$ yield that
$$\max_{x \in (x_i, x_{i+1})} Df(x) \ge s_i \qquad \text{and} \qquad \min_{x \in (x_i, x_{i+1})} Df(x) \le s_i.$$
Putting together the previous three lines of inequalities (and checking the edge cases $i = 2, m-1$), we conclude that for any $i = 2,\ldots,m-1$ we have
$$\|Df(x) - s_i\|_{L^\infty(x_i, x_{i+1})} \le |s_{i+1} - s_i| + |s_i - s_{i-1}| + |s_{i-1} - s_{i-2}|,$$
where we set $s_0 = s_1$. Thus, as in the last few lines of the proof of Corollary 3.3, we conclude that
$$\|f\|_{\mathrm{Lip}} \le 7\,\|f^*\|_{\mathrm{Lip}} \qquad \text{and} \qquad |f(x) - f^*(x)| \le \frac{14L}{m}.$$

4 CONCLUSION AND FUTURE DIRECTIONS

In this article, we completely characterized all ReLU networks that interpolate a given dataset $\mathcal{D}$ in the simple setting of weakly $\ell_2$-regularized one layer ReLU networks with a single linear unit and input/output dimension 1. Moreover, our characterization shows that, to assign labels to unseen data, such networks simply "look at the curvature of the nearest neighboring datapoints on each side," in a way made precise in Theorem 3.1. This simple geometric description led to sharp generalization results for learning 1d Lipschitz functions in Corollary 3.3.

This opens many directions for future investigation. Theorem 3.1 shows, for instance, that there are infinitely many ridgeless ReLU interpolants of a given dataset $\mathcal{D}$. It would be interesting to understand which ones are actually learned by gradient descent from a random initialization and a weak (or even decaying in time) $\ell_2$-penalty. Further, as already pointed out after Theorem 2.1, the conclusions of Theorem 3.1 appear to hold under very different kinds of regularization (e.g. Theorem 2 in Blanc et al. (2020)). This raises the question: what is the most general kind of regularizer that is equivalent to weight decay, at least in our simple setup? It would also be quite natural to extend the results in this article to ReLU networks with higher input dimension, for which weight decay is known to correspond to regularization of a certain weighted Radon transform of the network function Ongie et al. (2019); Parhi & Nowak (2020a;b; 2021). Finally, extending the results in this article to deeper networks and to architectures beyond fully connected ones are fascinating directions left to future work.
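To make the total-variation bookkeeping behind Theorem 3.2 and the localization argument concrete, here is a small sketch (ours; the example data are arbitrary) of $\|Df\|_{TV}$ for piecewise linear interpolants, contrasting a kink that preserves the minimal total variation with one that increases it.

```python
import numpy as np

def tv_of_derivative(slopes):
    """||Df||_TV for a piecewise linear f whose derivative takes the values
    `slopes` from left to right: the sum of the absolute jumps of Df."""
    slopes = np.asarray(slopes, float)
    return float(np.sum(np.abs(np.diff(slopes))))

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 1.5, 1.0, 0.0])
s = np.diff(y) / np.diff(x)                       # slopes of f_D: [1.0, 0.5, -0.5, -1.0]
print(tv_of_derivative(s))                        # ||Df_D||_TV = 2.0

# Split the interval (1, 2) at its midpoint into two pieces whose slopes average
# to s_2 = 0.5, so both perturbed functions still interpolate the data; here the
# discrete curvature satisfies eps_2 = eps_3 = -1, so (x_2, x_3) is a free concave region.
concave_kink = np.array([1.0, 0.8, 0.2, -0.5, -1.0])   # decreasing slopes: concave bump
convex_kink  = np.array([1.0, 0.2, 0.8, -0.5, -1.0])   # increasing slopes: convex dip
print(tv_of_derivative(concave_kink))             # 2.0: same TV, hence ridgeless by (5)
print(tv_of_derivative(convex_kink))              # 3.2: larger TV, hence not ridgeless
```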
1. What is the focus and contribution of the paper on ReLU networks?
2. What are the strengths and weaknesses of the paper regarding its organization, literature review, illustration, and generalization property?
3. Do you have any concerns regarding the technical issues in the proof section, such as curvature and its transformation?
4. How do you assess the clarity and definiteness of the key terminologies used in the paper, such as network function, concave, convex, and Lipschitz function?
5. Is the title of the paper clear and consistent with its content?
6. Are there any suggestions for improving the paper, such as providing more detailed discussions or mathematical definitions?
Summary Of The Paper Review
Summary Of The Paper
The paper characterizes the ReLU networks that interpolate a given dataset D in the simple setting of weakly $\ell_2$-regularized one layer ReLU networks with a single linear unit and input/output dimension 1. Moreover, the authors show that, to assign labels to unseen data, such networks simply "look at the curvature of the nearest neighboring data points on each side". This geometric description leads to sharp generalization results for learning 1d Lipschitz functions. The conclusion may open some directions for future investigation, such as extending the results to deeper networks and to architectures beyond fully connected ones.

Review
Strong points: The paper is generally well organized. It is easy to understand the purpose and contribution of this paper. The literature review is clear, and the relation to prior work is explained in detail. The author provides a clear illustration of the ideas through a series of figures.

Weak points:
A. The setup in this paper is not general enough. It only works for shallow ReLU networks with 1D input. (P8, Corollary 3.3) The generalization property only holds on the domain [0, 1], which is different from the domain of f(x) in the main conclusions, Thm 2.1 and Thm 3.1. In addition, x takes values only at the points i/m. This makes the generalization result not "strong enough".
B. The theoretical contribution is not significant enough. Most parts of the proof section concern technical issues about curvature and its transformation.
C. Many terminologies are not strictly defined or clearly stated. (p2) The network function is referred to before its definition and setup. (p2) Concave and convex need mathematical definitions, especially the distinction between concave and strictly concave; in mathematics, concave (convex) includes linear. The lack of mathematical definitions makes the conclusions somewhat ambiguous. The paper discusses Lipschitz functions several times, but there is no definition or statement of the properties of Lipschitz functions, which makes some of the conclusions unclear. The title claims "RIDGELESS INTERPOLATION WITH SHALLOW RELU NETWORKS IN 1D IS NEAREST NEIGHBOR CURVATURE EXTRAPOLATION", yet "NEAREST NEIGHBOR CURVATURE EXTRAPOLATION" is not clearly defined in the paper, which is not reader-friendly.

Suggestions: Have a more detailed discussion of the theoretical contribution in the proof. Add mathematical definitions for the key terminologies in the paper.
ICLR
Title
Ridgeless Interpolation with Shallow ReLU Networks in $1D$ is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions

Abstract
We prove a precise geometric description of all one layer ReLU networks $z(x;\theta)$ with a single linear unit and input/output dimensions equal to one that interpolate a given dataset $\mathcal{D} = \{(x_i, f(x_i))\}$ and, among all such interpolants, minimize the $\ell_2$-norm of the neuron weights. Such networks can intuitively be thought of as those that minimize the mean-squared error over $\mathcal{D}$ plus an infinitesimal weight decay penalty. We therefore refer to them as ridgeless ReLU interpolants. Our description proves that, to extrapolate values $z(x;\theta)$ for inputs $x \in (x_i, x_{i+1})$ lying between two consecutive datapoints, a ridgeless ReLU interpolant simply compares the signs of the discrete estimates for the curvature of $f$ at $x_i$ and $x_{i+1}$ derived from the dataset $\mathcal{D}$. If the curvature estimates at $x_i$ and $x_{i+1}$ have different signs, then $z(x;\theta)$ must be linear on $(x_i, x_{i+1})$. If in contrast the curvature estimates at $x_i$ and $x_{i+1}$ are both positive (resp. negative), then $z(x;\theta)$ is convex (resp. concave) on $(x_i, x_{i+1})$. Our results show that ridgeless ReLU interpolants achieve the best possible generalization for learning 1d Lipschitz functions, up to universal constants.

1 INTRODUCTION
The ability of overparameterized neural networks to simultaneously fit data (i.e. interpolate) and generalize to unseen data (i.e. extrapolate) is a robust empirical finding that spans the use of deep learning in tasks from computer vision Krizhevsky et al. (2012); He et al. (2016), natural language processing Brown et al. (2020), and reinforcement learning Silver et al. (2016); Vinyals et al. (2019); Jumper et al. (2021). This observation is surprising when viewed through the lens of traditional learning theory Vapnik & Chervonenkis (1971); Bartlett & Mendelson (2002), which advocates for capacity control of model classes and strong regularization to avoid overfitting.

Part of the difficulty in explaining conceptually why neural networks are able to generalize is that it is unclear how to understand, concretely in terms of the network function, the various forms of implicit and explicit regularization used in practice. For example, a well-chosen initialization for gradient-based optimizers can strongly impact the quality of the resulting learned network Mishkin & Matas (2015); He et al. (2015); Xiao et al. (2018). However, the specific geometric or analytic properties of the learned network ensured by a successful initialization scheme are hard to pin down. In a similar vein, it is standard practice to experiment with (weak) explicit regularizers such as weight decay, obtained by adding an $\ell_2$ penalty on model parameters to the underlying empirical risk. While the effect of weight decay on parameters is transparent, it is typically challenging to reformulate this into properties of the learned non-linear model.

In the simple setting of one layer ReLU networks this situation has recently become clearer. Specifically, starting with an observation in Neyshabur et al. (2014), the articles Savarese et al. (2019); Ongie et al. (2019); Parhi & Nowak (2020a;b; 2021) explore and develop the fact that $\ell_2$ regularization on parameters in this setting is provably equivalent to penalizing the total variation of a certain Radon transform of the network function (cf. e.g. Theorem 3.2).
While the results in these articles hold for any input dimension, in this article we consider the simplest case of input dimension 1. In this setting, our main contributions are:

2 SETUP AND INFORMAL STATEMENT OF RESULTS
Consider a one layer ReLU network
$$z(x) = z(x;\theta) := ax + b + \sum_{j=1}^{n} W_j^{(2)} \big[ W_j^{(1)} x + b_j^{(1)} \big]_+, \qquad [t]_+ := \mathrm{ReLU}(t) = \max\{0, t\}, \qquad (1)$$
with a single linear unit¹ and input/output dimensions equal to one. For a given dataset
$$\mathcal{D} = \{(x_i, y_i),\ i = 1,\ldots,m\}, \qquad -\infty < x_1 < \cdots < x_m < \infty, \qquad y_i \in \mathbb{R},$$
if the number of datapoints $m$ is smaller than the network width $n$, there are infinitely many choices of the parameter vector $\theta$ for which $z(x;\theta)$ interpolates (i.e. fits) the data:
$$z(x_i;\theta) = y_i, \qquad \forall\, i = 1,\ldots,m. \qquad (2)$$
Without further information about $\theta$, little can be said about the function $z(x;\theta)$ for $x$ in the intervals $(x_i, x_{i+1})$ between consecutive datapoints when $n$ is much larger than $m$. This precludes useful generalization guarantees uniformly over all $\theta$ subject only to the interpolation condition (2).

In practice, interpolants are not chosen arbitrarily. Instead, they are learned by some variant of gradient descent starting from a random initialization. For a given architecture, initialization, optimizer, regularizer, and so on, understanding how the learned network uses the known labels $\{y_i\}$ to assign values of $z(x;\theta)$ for $x$ not in the dataset is an important open problem. To make progress, a fruitful line of inquiry in prior work has been to search for additional complexity measures based on margins Wei et al. (2018), PAC-Bayes estimates Dziugaite & Roy (2017; 2018); Nagarajan & Kolter (2019), weight matrix norms Neyshabur et al. (2015); Bartlett et al. (2017), information theoretic compression estimates Arora et al. (2018), Rademacher complexity Golowich et al. (2018), etc. (see Jiang et al. (2019) for a review and comparison). While perhaps not explicitly regularized, these complexity measures are hopefully small in trained networks, giving additional capacity constraints.

In this article, we take a different approach. We do not seek results valid for any network architecture. Instead, our goal is to describe completely, in concrete geometrical terms, the properties of one layer ReLU networks $z(x;\theta)$ that interpolate a dataset $\mathcal{D}$ with the minimal possible $\ell_2$ penalty
$$C(\theta) = C(\theta, n) = \sum_{j=1}^{n} |W_j^{(1)}|^2 + |W_j^{(2)}|^2$$
on the neuron weights. More precisely, we study the space of ridgeless ReLU interpolants
$$\mathrm{RidgelessReLU}(\mathcal{D}) := \{z(x;\theta) \mid z(x_i;\theta) = y_i \ \forall (x_i, y_i) \in \mathcal{D},\ C(\theta) = C^*\} \qquad (3)$$
of a dataset $\mathcal{D}$, where
$$C^* := \inf_{\theta, n} \{C(\theta, n) \mid z(x_i; n, \theta) = y_i \ \forall (x_i, y_i) \in \mathcal{D}\}.$$
Intuitively, elements in $\mathrm{RidgelessReLU}(\mathcal{D})$ are ReLU nets that minimize a weakly penalized loss
$$\mathcal{L}(\theta; \mathcal{D}) + \lambda\, C(\theta), \qquad \lambda \ll 1, \qquad (4)$$
where $\mathcal{L}$ is an empirical loss, such as the mean squared error over $\mathcal{D}$, and the strength of the weight decay penalty $C(\theta)$ is infinitesimal. It is plausible, but by no means obvious, that with high probability gradient descent from a random initialization and a weight decay penalty whose strength decreases to zero over training converges to an element in $\mathrm{RidgelessReLU}(\mathcal{D})$. This article does not study optimization, and we therefore leave this as an interesting open problem.

Our main result is a simple description of $\mathrm{RidgelessReLU}(\mathcal{D})$ and can informally be stated as follows:

Theorem 2.1 (Informal Statement of Theorem 3.1). Fix a dataset $\mathcal{D} = \{(x_i, y_i),\ i = 1,\ldots,m\}$. Each datapoint $(x_i, y_i)$ gives an estimate
$$\epsilon_i := \mathrm{sgn}(s_i - s_{i-1}), \qquad s_i := \frac{y_{i+1} - y_i}{x_{i+1} - x_i},$$
for the local curvature of the data (Figure 1).
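For concreteness, a minimal numpy sketch of the parametrization (1) and the cost $C(\theta)$ might look as follows (our own illustration; the function names and the random example parameters are hypothetical, not the paper's code):

```python
import numpy as np

def relu_net(x, a, b, W1, b1, W2):
    """z(x; theta) = a*x + b + sum_j W2[j] * relu(W1[j]*x + b1[j]), as in eq. (1)."""
    x = np.asarray(x, float)
    pre = np.outer(x, W1) + b1                 # pre-activations, shape (num_points, n)
    return a * x + b + np.maximum(pre, 0.0) @ W2

def cost(W1, W2):
    """C(theta) = sum_j |W1_j|^2 + |W2_j|^2; the linear term and biases are unregularized."""
    return float(np.sum(W1 ** 2) + np.sum(W2 ** 2))

# Example: a width-3 network evaluated on a few inputs.
rng = np.random.default_rng(0)
a, b = 0.1, 0.0
W1, b1, W2 = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
print(relu_net(np.array([-1.0, 0.0, 2.0]), a, b, W1, b1, W2), cost(W1, W2))
```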
1. What is the focus of the paper, and what are the authors' contributions to the field of neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its oversimplification and lack of realism?
3. Do you have any concerns about the method used to minimize the $\ell_2$-norm of all neuron weights, and how does it relate to the intuitive understanding of the network's behavior?
4. How does the generalization error bound provided by Corollary 3.3 limit the practicality of the results, and what would be required to extend it to more realistic scenarios?
5. What are your thoughts on the complexity and counterintuitiveness of the description of the learnable space RidgelessReLU(D), and how might one improve it?
6. Would including simulation results enhance the credibility of the conclusions drawn from the theoretical analysis?
Summary Of The Paper Review
Summary Of The Paper
In this paper, a one-layer ReLU network with 1-D input is considered. The authors study an interpolant of all training samples that minimizes the $\ell_2$ norm of all weights. A description of the function learned by such a model is provided. The description depends on the specific values of the training data and consists of several steps. First, intervals between two successive training inputs are divided into several categories according to the discrete second derivative of the training samples. Then, based on the category of the interval between two adjacent training inputs, the function should be convex, concave, or affine. Based on this description, the paper provides a generalization upper bound on Lipschitz functions when the 1-D inputs are uniformly spaced in [0, 1].

Review
Although this paper shows some new results, the current presentation is not satisfactory. My overall feeling is that the setup (1-D input with a 1-layer ReLU network) is oversimplified and thus lacks enough useful insight into realistic problems. The following are some detailed comments.
1. Minimizing the $\ell_2$-norm of all neuron weights may lack motivation. In the abstract, the authors claim that "such networks can intuitively be thought of as those that minimize the MSE plus an infinitesimal weight decay penalty". I highly doubt this statement and I do not find any evidence for it in this paper, neither theoretical nor numerical.
2. The generalization error bound shown in Corollary 3.3 is extremely restrictive, since it only works for the deterministic situation in which all training inputs are uniformly spaced in [0, 1]. In reality, training inputs are usually random and cannot be precisely uniformly spaced. Thus, the importance of Corollary 3.3 is questionable.
3. The description of the learnable space RidgelessReLU(D) given by Theorem 3.1 is very complex and counter-intuitive. To determine whether a function is in RidgelessReLU(D) by the method of Theorem 3.1, the discrete second derivative at every training sample has to be calculated and treated differently. Therefore, I find it difficult to determine which "categories" of functions belong to RidgelessReLU(D). For example, are quadratic functions always in RidgelessReLU(D), regardless of D?
4. There are no simulation results in this paper, which makes readers doubt the correctness of the conclusions.
5. Fig. 2 and Fig. 3 take too much space and are repetitive, yet do not provide much useful information.
ICLR
1. What is the main contribution of the paper regarding two-layer ReLU networks?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the characterization provided in the paper differ from existing results?
4. Can the results of the paper be applied to other neural network architectures or problems?
5. Are there any open questions or directions for future research related to this work?
Summary Of The Paper Review
Summary Of The Paper
This paper studies two-layer ReLU networks (with an additional residual unit) that interpolate a one-dimensional dataset with the minimum possible $\ell_2$ penalty (not applied to the residual unit or to the biases). Existing results by Neyshabur et al. (2014), Savarese et al. (2019) and Ongie et al. (2019) have shown that these networks minimise the total variation of the derivative of the resulting predictor. This paper goes one step further and proves a more concrete characterisation: if the curvature estimates at two consecutive training data points have different signs, then the estimator is linear; if both curvature estimates are positive (resp. negative), then the estimator is convex (resp. concave). As a consequence, the authors give a (relatively simple, but still insightful) generalization bound on Lipschitz functions, which is tight up to a universal multiplicative constant. The authors also note that this bound could have been obtained just from the existing characterisation, albeit with a sub-optimal multiplicative constant.

Review
STRENGTHS
The paper looks at an interesting problem (understanding what the predictor resulting from a two-layer ReLU network that fits the data with minimum norm of the weights looks like), and it proves a novel characterisation which is more concrete than the ones existing in the literature. As an immediate consequence, a generalization bound is also provided.

WEAKNESSES
(1) The paper is incremental in nature. The authors take an existing characterisation (based on the minimisation of the total variation of the first derivative), and prove a new one essentially via a case-by-case analysis. The fact that a predictor satisfying conditions (1)-(2) of Thm. 3.1 minimizes the total variation is a rather simple observation (formalised in Proposition A.13). The converse statement (any f minimising the total variation satisfies properties (1)-(2)) requires a more elaborate proof, but no fundamentally new ingredient seems to be necessary.
(2) The characterisation is still somewhat implicit and many open questions remain. E.g., does gradient descent implicitly select one of the functions characterized by Theorem 3.1 and, if so, which one? Does the same result hold if we also regularise the biases and/or the parameters of the residual unit? Furthermore, as a consequence of their result, the authors give a rather simple generalization bound. I wonder if there are any other interesting consequences of the characterisation presented here. I realise that addressing these questions is somewhat out of scope, but pursuing (any of) these directions would lead to a less incremental paper.
(3) The proof of Corollary 3.3 and the discussion on page 9 appear to contain some typos and, in general, are a bit compressed. I will now elaborate on the typos:
(i) How do you get (8)? There seems to be a typo here; perhaps on the RHS you have $\epsilon_{i+1}$ instead of $\epsilon_i$? Right now, the formula is trivial for $\epsilon_i = 0$ and it does not seem to depend on either $\epsilon_{i+1}$ or $\epsilon_{i-1}$.
(ii) How do you get the $L^\infty$ bound on $Df$ below Eq. (8)? Are you ever using (either here or in the proof of Eq. (8)) property (2) of Theorem 3.1? I could not follow here.
(iii) A picture would help for the visualisation of the left, right, and central portions of $\mathcal{D}$, and for the left, right, and central versions of $f$ as well.
(iv) How do you get the lower bound on the TV of $Df$ in terms of the sum of the TVs of $Df_L$, $Df_R$ and $Df_C$?
Furthermore, are you sure about how f_L, f_R and f_C are defined? Right now, the changes in Df in the central part (x ∈ (x_{i*−1}, x_{i*+1})) seem to be repeated three times in the RHS of the bound mentioned above, while they appear only once in the LHS. Are you using any other property of f to obtain this lower bound, or does it hold for any f? (v) Please also elaborate on how you obtain the next lower bound on the TV of Df (in terms of f_{D_L}, f_C and f_{D_R}). In particular, I guess that the TV of f_{D_L} should actually be the TV of Df_{D_L}. Furthermore, I can see that the TV of Df_R is at least the TV of Df_{D_R}. However, I do not see how one can relate the TV of Df_L to the TV of Df_{D_L}, unless i* is replaced by i* − 1 in the definition of f_L. Let me conclude by mentioning that, if the authors need additional space to clarify the points raised above, they could simply cut the informal statement in Theorem 2.1: the formal statement (Theorem 3.1) is quite similar anyway, and it appears just one page later.
ICLR
Title
Ridgeless Interpolation with Shallow ReLU Networks in 1D is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions
Abstract
We prove a precise geometric description of all one layer ReLU networks z(x; θ) with a single linear unit and input/output dimensions equal to one that interpolate a given dataset D = {(x_i, f(x_i))} and, among all such interpolants, minimize the ℓ2-norm of the neuron weights. Such networks can intuitively be thought of as those that minimize the mean-squared error over D plus an infinitesimal weight decay penalty. We therefore refer to them as ridgeless ReLU interpolants. Our description proves that, to extrapolate values z(x; θ) for inputs x ∈ (x_i, x_{i+1}) lying between two consecutive datapoints, a ridgeless ReLU interpolant simply compares the signs of the discrete estimates for the curvature of f at x_i and x_{i+1} derived from the dataset D. If the curvature estimates at x_i and x_{i+1} have different signs, then z(x; θ) must be linear on (x_i, x_{i+1}). If in contrast the curvature estimates at x_i and x_{i+1} are both positive (resp. negative), then z(x; θ) is convex (resp. concave) on (x_i, x_{i+1}). Our results show that ridgeless ReLU interpolants achieve the best possible generalization for learning 1d Lipschitz functions, up to universal constants.
1 INTRODUCTION
The ability of overparameterized neural networks to simultaneously fit data (i.e. interpolate) and generalize to unseen data (i.e. extrapolate) is a robust empirical finding that spans the use of deep learning in tasks from computer vision Krizhevsky et al. (2012); He et al. (2016), natural language processing Brown et al. (2020), and reinforcement learning Silver et al. (2016); Vinyals et al. (2019); Jumper et al. (2021). This observation is surprising when viewed through the lens of traditional learning theory Vapnik & Chervonenkis (1971); Bartlett & Mendelson (2002), which advocates for capacity control of model classes and strong regularization to avoid overfitting. Part of the difficulty in explaining conceptually why neural networks are able to generalize is that it is unclear how to understand, concretely in terms of the network function, the various forms of implicit and explicit regularization used in practice. For example, a well-chosen initialization for gradient-based optimizers can strongly impact the quality of the resulting learned network Mishkin & Matas (2015); He et al. (2015); Xiao et al. (2018). However, the specific geometric or analytic properties of the learned network ensured by a successful initialization scheme are hard to pin down. In a similar vein, it is standard practice to experiment with (weak) explicit regularizers such as weight decay, obtained by adding an ℓ2 penalty on model parameters to the underlying empirical risk. While the effect of weight decay on parameters is transparent, it is typically challenging to reformulate this into properties of a learned non-linear model. In the simple setting of one layer ReLU networks this situation has recently become clearer. Specifically, starting with an observation in Neyshabur et al. (2014), the articles Savarese et al. (2019); Ongie et al. (2019); Parhi & Nowak (2020a;b; 2021) explore and develop the fact that ℓ2 regularization on parameters in this setting is provably equivalent to penalizing the total variation of a certain Radon transform of the network function (cf. e.g. Theorem 3.2).
While the results in these articles hold for any input dimension, in this article we consider the simplest case of input dimension 1. In this setting, our main contributions are:
2 SETUP AND INFORMAL STATEMENT OF RESULTS
Consider a one layer ReLU network
z(x) = z(x; θ) := ax + b + Σ_{j=1}^{n} W^{(2)}_j [W^{(1)}_j x + b^{(1)}_j]_+,   [t]_+ := ReLU(t) = max{0, t}   (1)
with a single linear unit¹ and input/output dimensions equal to one. For a given dataset
D = {(x_i, y_i), i = 1, ..., m},   −∞ < x_1 < ··· < x_m < ∞,   y_i ∈ ℝ,
if the number of datapoints m is smaller than the network width n, there are infinitely many choices of the parameter vector θ for which z(x; θ) interpolates (i.e. fits) the data:
z(x_i; θ) = y_i   for all i = 1, ..., m.   (2)
Without further information about θ, little can be said about the function z(x; θ) for x in intervals (x_i, x_{i+1}) between consecutive datapoints when n is much larger than m. This precludes useful generalization guarantees uniformly over all θ, subject only to the interpolation condition (2). In practice interpolants are not chosen arbitrarily. Instead, they are learned by some variant of gradient descent starting from a random initialization. For a given architecture, initialization, optimizer, regularizer, and so on, understanding how the learned network uses the known labels {y_i} to assign values of z(x; θ) for x not in the dataset is an important open problem. To make progress, a fruitful line of inquiry in prior work has been to search for additional complexity measures based on margins Wei et al. (2018), PAC-Bayes estimates Dziugaite & Roy (2017; 2018); Nagarajan & Kolter (2019), weight matrix norms Neyshabur et al. (2015); Bartlett et al. (2017), information theoretic compression estimates Arora et al. (2018), Rademacher complexity Golowich et al. (2018), etc (see Jiang et al. (2019) for a review and comparison). While perhaps not explicitly regularized, these complexity measures are hopefully small in trained networks, giving additional capacity constraints. In this article, we take a different approach. We do not seek results valid for any network architecture. Instead, our goal is to describe completely, in concrete geometrical terms, the properties of one layer ReLU networks z(x; θ) that interpolate a dataset D with the minimal possible ℓ2 penalty
C(θ) = C(θ, n) = Σ_{j=1}^{n} ( |W^{(1)}_j|² + |W^{(2)}_j|² )
on the neuron weights. More precisely, we study the space of ridgeless ReLU interpolants
RidgelessReLU(D) := { z(x; θ) | z(x_i; θ) = y_i ∀ (x_i, y_i) ∈ D, C(θ) = C* },   (3)
of a dataset D, where
C* := inf_{θ, n} { C(θ, n) | z(x_i; n, θ) = y_i ∀ (x_i, y_i) ∈ D }.
Intuitively, elements in RidgelessReLU(D) are ReLU nets that minimize a weakly penalized loss
L(θ; D) + λ C(θ),   λ ≪ 1,   (4)
where L is an empirical loss, such as the mean squared error over D, and the strength λ of the weight decay penalty C(θ) is infinitesimal. It is plausible, but by no means obvious, that with high probability gradient descent from a random initialization with a weight decay penalty whose strength decreases to zero over training converges to an element in RidgelessReLU(D). This article does not study optimization, and we therefore leave this as an interesting open problem. Our main result is a simple description of RidgelessReLU(D) and can informally be stated as follows:
Theorem 2.1 (Informal Statement of Theorem 3.1). Fix a dataset D = {(x_i, y_i), i = 1, ..., m}. Each datapoint (x_i, y_i) gives an estimate
ε_i := sgn(s_i − s_{i−1}),   s_i := (y_{i+1} − y_i) / (x_{i+1} − x_i)
for the local curvature of the data (Figure 1).
Among all continuous and piecewise linear functions f that fit D exactly, the ones in RidgelessReLU(D) are precisely those that:
• Are convex (resp. concave) on intervals (x_i, x_{i+1}) at which neighboring datapoints agree on the local curvature in the sense that ε_i = ε_{i+1} = 1 (resp. ε_i = ε_{i+1} = −1). On such intervals f lies below (resp. above) the straight line interpolant of the data (Figs. 2 and 3).
• Are linear (or more precisely affine) on intervals (x_i, x_{i+1}) when neighboring datapoints disagree on the local curvature in the sense that ε_i · ε_{i+1} ≠ 1.
¹The linear term ax + b is not really standard in practice but, as in prior work Savarese et al. (2019); Ongie et al. (2019); Parhi & Nowak (2020a), leads to a cleaner mathematical formulation of results.
Before giving a precise statement of our results, we mention that, as described in detail below, the space RidgelessReLU(D) has been considered in a number of prior articles Savarese et al. (2019); Ongie et al. (2019); Parhi & Nowak (2020a). Our starting point will be the useful but abstract characterization of RidgelessReLU(D) they obtained in terms of the total variation of the derivative of z(x; θ) (see (5)). We note also that the conclusions of Theorem 2.1 (and Theorem 3.1) also hold under seemingly very different hypotheses from ours. Namely, instead of ℓ2-regularization on the parameters, Blanc et al. (2020) considers SGD training for mean squared error with iid noise added to the labels. Their Theorem 2 shows (modulo some assumptions about interpreting the derivative of the ReLU) that, among all ReLU networks with a linear unit that interpolate a dataset D, the only ones that minimize the implicit regularization induced by adding iid noise to SGD are precisely those that satisfy the conclusions of Theorem 2.1 and hence are exactly the networks in RidgelessReLU(D). This suggests that our results hold under much more general conditions. Further, our characterization of RidgelessReLU(D) in Theorem 3.1 immediately implies strong generalization guarantees uniformly over RidgelessReLU(D). We give a representative example in Corollary 3.3, which shows that such ReLU networks achieve the best possible generalization error on Lipschitz functions, up to constants. Finally, note that we allow networks z(x; θ) of any width, but that if the width n is too small relative to the dataset size m, then the interpolation condition (2) cannot be satisfied. Also, we point out that in our formulation of the cost C(θ) we have left both the linear term ax + b and the neuron biases unregularized. This is not standard practice but seems to yield the cleanest results.
3 STATEMENT OF RESULTS AND RELATION TO PRIOR WORK
Every ReLU network z(x; θ) is a continuous and piecewise linear function from ℝ to ℝ with a finite number of affine pieces. Let us denote by PL the space of all such functions and define
PL(D) := { f ∈ PL | f(x_i) = y_i ∀ i = 1, ..., m }
to be the space of piecewise linear interpolants of D. Perhaps the most natural element in PL(D) is the "connect-the-dots interpolant" f_D : ℝ → ℝ given by
f_D(x) := { ℓ_1(x), x < x_2;   ℓ_i(x), x_i < x < x_{i+1}, i = 2, ..., m−2;   ℓ_{m−1}(x), x > x_{m−1} },
where for i = 1, ..., m−1 we've set
ℓ_i(x) := (x − x_i) s_i + y_i,   s_i := (y_{i+1} − y_i) / (x_{i+1} − x_i).
See Figure 1.
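To make these definitions concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper; names such as `connect_the_dots` are ours) that computes the chord slopes s_i, the discrete curvature signs ε_i = sgn(s_i − s_{i−1}), and evaluates the connect-the-dots interpolant f_D at arbitrary inputs.

```python
import numpy as np

def slopes_and_curvature(x, y):
    """Chord slopes s_1..s_{m-1} and curvature signs eps_i = sgn(s_i - s_{i-1}), i = 2..m-1."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s = np.diff(y) / np.diff(x)          # s[k] corresponds to s_{k+1} in the paper's 1-based indexing
    eps = np.sign(np.diff(s))            # eps[k] corresponds to eps_{k+2}
    return s, eps

def connect_the_dots(x, y):
    """Return the connect-the-dots interpolant f_D, extended linearly beyond the data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s, _ = slopes_and_curvature(x, y)

    def f_D(t):
        t = np.asarray(t, float)
        # index of the chord l_i used at each query point; clipping makes the
        # first / last chords extend to -inf / +inf as in the definition of f_D
        i = np.clip(np.searchsorted(x, t, side="right") - 1, 0, len(s) - 1)
        return (t - x[i]) * s[i] + y[i]

    return f_D

# toy usage
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 0.5, 1.5, 3.0])
s, eps = slopes_and_curvature(x, y)
print("slopes:", s, " curvature signs:", eps)
print("f_D(2.5) =", connect_the_dots(x, y)(2.5))
```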
In addition to f_D, there are many other elements in RidgelessReLU(D). Theorem 3.1 gives a complete description of all of them, phrased in terms of how they may behave on intervals (x_i, x_{i+1}) between consecutive datapoints. Our description is based on the signs
ε_i = sgn(s_i − s_{i−1}),   2 ≤ i ≤ m − 1,
of the (discrete) second derivatives of f_D at the inputs x_i from our dataset.
Theorem 3.1. The space RidgelessReLU(D) consists of those f ∈ PL(D) satisfying:
1. f coincides with f_D on the following intervals:
(1a) Near infinity, i.e. on the intervals (−∞, x_2), (x_{m−1}, ∞).
(1b) Near datapoints that have zero discrete curvature, i.e. on intervals (x_{i−1}, x_{i+1}) with i = 2, ..., m−1 such that ε_i = 0.
(1c) Between datapoints with opposite discrete curvature, i.e. on intervals (x_i, x_{i+1}) with i = 2, ..., m−1 such that ε_i · ε_{i+1} = −1.
2. f is convex (resp. concave) and bounded above (resp. below) by f_D between any consecutive datapoints at which the discrete curvature is positive (resp. negative). Specifically, suppose for some 3 ≤ i ≤ i + q ≤ m − 2 that x_i and x_{i+q} are consecutive discrete inflection points in the sense that
ε_{i−1} ≠ ε_i,   ε_i = ··· = ε_{i+q},   ε_{i+q} ≠ ε_{i+q+1}.
If ε_i = 1 (resp. ε_i = −1), then restricted to the interval (x_i, x_{i+q}), f is convex (resp. concave) and lies above (resp. below) the incoming and outgoing support lines and below (resp. above) f_D:
ε_i = 1  ⟹  max{ℓ_{i−1}(x), ℓ_{i+q}(x)} ≤ f(x) ≤ f_D(x),
ε_i = −1  ⟹  min{ℓ_{i−1}(x), ℓ_{i+q}(x)} ≥ f(x) ≥ f_D(x),
for all x ∈ (x_i, x_{i+q}).
We refer the reader to §A for a proof of Theorem 3.1. Before doing so, let us illustrate Theorem 3.1 as an algorithm that, given the dataset D, describes all elements in RidgelessReLU(D) (see Figures 2 and 3):
Step 1 Linearly interpolate the endpoints: by property (1), f ∈ RidgelessReLU(D) must agree with f_D on (−∞, x_2) and (x_{m−1}, ∞).
Step 2 Compute discrete curvature: for i = 2, ..., m−1 calculate the discrete curvature ε_i at the data point x_i.
Step 3 Linearly interpolate on intervals with zero curvature: for all i = 2, ..., m−1 at which ε_i = 0, property (1) guarantees that f coincides with f_D on (x_{i−1}, x_{i+1}).
Step 4 Linearly interpolate on intervals with ambiguous curvature: for all i = 2, ..., m−1 at which ε_i · ε_{i+1} = −1, property (1) guarantees that f coincides with f_D on (x_i, x_{i+1}).
Step 5 Determine convexity/concavity on the remaining intervals: all intervals (x_i, x_{i+1}) on which f has not yet been determined occur in sequences (x_i, x_{i+1}), ..., (x_{i+q−1}, x_{i+q}) on which ε_{i+j} = 1 or ε_{i+j} = −1 for all j = 0, ..., q. If ε_i = 1 (resp. ε_i = −1), then f is any convex (resp. concave) function bounded above (resp. below) by f_D and below (resp. above) by the support lines ℓ_{i−1}(x), ℓ_{i+q}(x).
A code sketch of this interval classification is given after Theorem 3.2 below.
The starting point for the proof of Theorem 3.1 comes from the prior articles Neyshabur et al. (2014); Savarese et al. (2019); Ongie et al. (2019), which obtained an insightful "function space" interpretation of RidgelessReLU(D) as a subset of PL(D). Specifically, a simple computation (cf. e.g. Theorem 3.3 in Savarese et al. (2019) and also Lemma A.14 below) shows that f_D achieves the smallest value of the total variation ‖Df‖_TV of the derivative Df among all f ∈ PL(D). (The function Df is piecewise constant and ‖Df‖_TV is the sum of the absolute values of its jumps.) Part of the content of the prior work Neyshabur et al. (2014); Savarese et al. (2019); Ongie et al. (2019) is the following result.
Theorem 3.2 (cf. Lemma 1 in Ongie et al. (2019) and around equation (17) in Savarese et al. (2019)). For any dataset D we have
RidgelessReLU(D) = { f ∈ PL(D) | ‖Df‖_TV = ‖Df_D‖_TV }.   (5)
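The following sketch (our own illustration, not the authors' code) labels each interval (x_i, x_{i+1}) according to Steps 1–5: either the ridgeless interpolant is forced to coincide with f_D there, or it may be any convex (resp. concave) function between the support lines and f_D as described in Theorem 3.1.

```python
import numpy as np

def classify_intervals(x, y):
    """Label each interval (x_i, x_{i+1}) following Steps 1-5.

    Returns a list of length m-1 with entries:
      'f_D'     -- the ridgeless interpolant must coincide with f_D here,
      'convex'  -- any convex function between the support lines and f_D,
      'concave' -- any concave function between f_D and the support lines.
    Indices are 0-based, so interval k is (x_{k+1}, x_{k+2}) in the paper's numbering.
    """
    s = np.diff(y) / np.diff(x)                                 # chord slopes s_1..s_{m-1}
    eps = np.concatenate(([0.0], np.sign(np.diff(s)), [0.0]))   # pad so eps[j] ~ eps_{j+1}
    m = len(x)
    labels = []
    for k in range(m - 1):
        e_left, e_right = eps[k], eps[k + 1]                    # curvature at the two endpoints
        if k == 0 or k == m - 2:                                # Step 1: intervals touching the ends
            labels.append("f_D")
        elif e_left == 0 or e_right == 0:                       # Step 3: a neighbor has zero curvature
            labels.append("f_D")
        elif e_left * e_right == -1:                            # Step 4: neighbors disagree
            labels.append("f_D")
        elif e_left == 1:                                       # Step 5: both curvatures positive
            labels.append("convex")
        else:                                                   # Step 5: both curvatures negative
            labels.append("concave")
    return labels

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 1.0, 0.5, 1.0, 3.0, 4.0])
print(classify_intervals(x, y))
```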
Figure 3: Steps 4–5 for generating RidgelessReLU(D) from the dataset D. (a) Step 4. (b) Step 5: one possible choice of a convex interpolant on (x_4, x_5) and of a concave interpolant on (x_6, x_7) is shown; thin dashed lines are the supporting lines that bound all interpolants below on (x_4, x_5) and above on (x_6, x_7).
Theorem 3.2 shows that RidgelessReLU(D) is precisely the space of functions in PL(D) that achieve the minimal possible total variation norm of the derivative. Thus, intuitively, functions in RidgelessReLU(D) are averse to oscillation in their slopes. The proof of this fact uses a simple idea introduced in Theorem 1 of Neyshabur et al. (2014), which leverages the homogeneity of the ReLU to translate between the regularizer C(θ) and the penalty ‖Df‖_TV. Theorem 3.1 yields strong generalization guarantees uniformly over RidgelessReLU(D). To state a representative example, suppose D is generated by a function f* : ℝ → ℝ, i.e. y_j = f*(x_j).
Corollary 3.3 (Sharp generalization on Lipschitz functions from Theorem 3.1). Fix a dataset D = {(x_i, y_i), i = 1, ..., m}. We have
sup_{f ∈ RidgelessReLU(D)} ‖f‖_Lip ≤ ‖f*‖_Lip.   (6)
Hence, if f* is L-Lipschitz and x_i = i/m are uniformly spaced in [0, 1], then
sup_{f ∈ RidgelessReLU(D)} sup_{x ∈ [0,1]} |f(x) − f*(x)| ≤ 2L/m.   (7)
Proof. Observe that for any i = 2, ..., m−1 and x ∈ (x_i, x_{i+1}) at which Df(x) exists we have
ε_i (s_{i−1} − s_i) ≤ ε_i (Df(x) − s_i) ≤ ε_i (s_{i+1} − s_i).   (8)
Indeed, when ε_i = 0 the estimate (8) follows from property (1b) in Theorem 3.1. Otherwise, (8) follows immediately from the local convexity/concavity of f in property (2). Hence, combining (8) with property (1a) shows that for each i = 1, ..., m−1
‖Df‖_{L^∞(x_i, x_{i+1})} ≤ max{ |s_{i−1}|, |s_i| }.
Again using property (1a) and taking the maximum over i = 2, ..., m we find
‖Df‖_{L^∞(ℝ)} ≤ max_{1 ≤ i ≤ m−1} |s_i| = ‖f_D‖_Lip.
To complete the proof of (6), observe that for every i = 1, ..., m−1
|s_i| = |y_{i+1} − y_i| / (x_{i+1} − x_i) = |f*(x_{i+1}) − f*(x_i)| / (x_{i+1} − x_i) ≤ ‖f*‖_Lip   ⟹   ‖f_D‖_Lip ≤ ‖f*‖_Lip.
Given any x ∈ [0, 1], let us write x_0 for its nearest neighbor in {i/m, i = 1, ..., m}. We find
|f(x) − f*(x)| ≤ |f(x) − f(x_0)| + |f*(x_0) − f*(x)| ≤ ( ‖f‖_Lip + ‖f*‖_Lip ) |x − x_0| ≤ 2L/m.
Taking the supremum over f ∈ RidgelessReLU(D) and x ∈ [0, 1] proves (7).
Corollary 3.3 gives the best possible generalization error on Lipschitz functions, up to a universal multiplicative constant, in the sense that if all we knew about f* was that it was L-Lipschitz and we were given its values on {i/m, i = 1, ..., m}, then we cannot recover f* in L^∞ to accuracy better than a constant times L/m. Further, the same kind of result holds with high probability if the x_i are drawn independently at random from [0, 1], with the 2L/m on the right hand side replaced by C log(m) L / m for some universal constant C > 0. The appearance of the logarithm is due to the fact that among m iid points in [0, 1] the largest spacing between consecutive points scales like C log(m)/m with high probability. Similar generalization results can easily be established, depending on the level of smoothness assumed for f* and the uniformity of the datapoints x_i.
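As a quick numerical sanity check of (7) — our own sketch, using only the fact from the paper that f_D itself belongs to RidgelessReLU(D) — one can sample an L-Lipschitz function on the grid x_i = i/m and verify that the connect-the-dots interpolant stays within 2L/m of it in sup norm.

```python
import numpy as np

# target: f*(x) = |x - 0.37|, which is 1-Lipschitz
L, m = 1.0, 50
f_star = lambda t: np.abs(t - 0.37)

x = np.arange(1, m + 1) / m                  # grid x_i = i/m
y = f_star(x)
s = np.diff(y) / np.diff(x)

def f_D(t):                                   # connect-the-dots interpolant f_D
    i = np.clip(np.searchsorted(x, t, side="right") - 1, 0, len(s) - 1)
    return (t - x[i]) * s[i] + y[i]

t = np.linspace(0.0, 1.0, 100001)
err = np.max(np.abs(f_D(t) - f_star(t)))
print(f"max |f_D - f*| = {err:.4f}  vs  2L/m = {2 * L / m:.4f}")
```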
In writing this article, it at first appeared to the author that the generalization bounds (7) cannot be directly obtained from the relation (5) of prior work. The issue is that a priori the relation (5) gives bounds only on the global value of ‖Df‖_TV, suggesting perhaps that it does not provide strong constraints on local information about the behavior of ridgeless interpolants on small intervals (x_i, x_{i+1}). However, the relation (5) can actually be effectively localized to yield the estimates (6) and (7), but with worse constants. The idea is the following. Fix f ∈ RidgelessReLU(D). For any i* = 3, ..., m−2 define the left, right and central portions of D as follows:
D_L := {(x_i, y_i), i < i*},   D_C := {(x_i, y_i), i*−1 ≤ i ≤ i*+1},   D_R := {(x_i, y_i), i* < i}.
Consider further the left, right, and central versions of f, defined by
f_L(x) = { f(x), x < x_{i*};   ℓ_{i*}(x), x > x_{i*} },   f_R(x) = { f(x), x > x_{i*};   ℓ_{i*}(x), x < x_{i*} },
and
f_C(x) = { f(x), x_{i*−1} < x < x_{i*+1};   ℓ_{i*−1}(x), x < x_{i*−1};   ℓ_{i*}(x), x > x_{i*+1} }.
Using (5), we have ‖Df_D‖_TV = ‖Df‖_TV. Further,
‖Df‖_TV ≥ ‖Df_L‖_TV + ‖Df_C‖_TV + ‖Df_R‖_TV,
which, by again applying (5) but this time to D_L, D_R and f_L, f_R, yields the bound
‖Df‖_TV ≥ ‖Df_{D_L}‖_TV + ‖Df_C‖_TV + ‖Df_{D_R}‖_TV.
Using that
‖Df_D‖_TV = Σ_{i=2}^{m−1} |s_i − s_{i−1}|,   ‖Df_{D_L}‖_TV = Σ_{i=2}^{i*−2} |s_i − s_{i−1}|,   ‖Df_{D_R}‖_TV = Σ_{i=i*+2}^{m−1} |s_i − s_{i−1}|,
we derive the localized estimate
|s_{i*+1} − s_{i*}| + |s_{i*} − s_{i*−1}| + |s_{i*−1} − s_{i*−2}| ≥ ‖Df_C‖_TV.
Note further that
‖Df_C‖_TV ≥ max_{x ∈ (x_i, x_{i+1})} Df(x) − min_{x ∈ (x_i, x_{i+1})} Df(x),
where the max and min are taken over those x at which Df(x) exists. The interpolation conditions f(x_i) = y_i and f(x_{i+1}) = y_{i+1} yield that
max_{x ∈ (x_i, x_{i+1})} Df(x) ≥ s_i   and   min_{x ∈ (x_i, x_{i+1})} Df(x) ≤ s_i.
Putting together the previous three lines of inequalities (and checking the edge cases i = 2, m−1), we conclude that for any i = 2, ..., m−1 we have
‖Df(x) − s_i‖_{L^∞(x_i, x_{i+1})} ≤ |s_{i+1} − s_i| + |s_i − s_{i−1}| + |s_{i−1} − s_{i−2}|,
where we set s_0 = s_1. Thus, as in the last few lines of the proof of Corollary 3.3, we conclude that
‖f‖_Lip ≤ 7 ‖f*‖_Lip   and   |f(x) − f*(x)| ≤ 14L/m.
4 CONCLUSION AND FUTURE DIRECTIONS
In this article, we completely characterized all possible ReLU networks that interpolate a given dataset D in the simple setting of weakly ℓ2-regularized one layer ReLU networks with a single linear unit and input/output dimension 1. Moreover, our characterization shows that, to assign labels to unseen data, such networks simply "look at the curvature of the nearest neighboring datapoints on each side," in a way made precise in Theorem 3.1. This simple geometric description led to sharp generalization results for learning 1d Lipschitz functions in Corollary 3.3. This opens many directions for future investigation. Theorem 3.1 shows, for instance, that there are infinitely many ridgeless ReLU interpolants of a given dataset D. It would be interesting to understand which ones are actually learned by gradient descent from a random initialization and a weak (or even decaying in time) ℓ2 penalty. Further, as already pointed out after Theorem 2.1, the conclusions of Theorem 3.1 appear to hold under very different kinds of regularization (e.g. Theorem 2 in Blanc et al. (2020)). This raises the question: what is the most general kind of regularizer that is equivalent to weight decay, at least in our simple setup? It would also be quite natural to extend the results in this article to ReLU networks with higher input dimension, for which weight decay is known to correspond to regularization of a certain weighted Radon transform of the network function Ongie et al. (2019); Parhi & Nowak (2020a;b; 2021). Finally, extending the results in this article to deeper networks and beyond fully connected architectures are fascinating directions left to future work.
1. How does the paper characterize the induced functions of weakly regularized two-layer 1D ReLU networks?
2. What are the valuable insights provided by the paper regarding the function fit of these networks?
3. Can the same analysis apply to binary classification tasks with squared loss and weakly regularized weights?
4. Is there a specific lower bound on the number of neurons required for the function characterizations in the paper?
5. Do the authors have any intuition or insights regarding (weakly) regularized training problems with penalized bias terms?
6. Does the analysis hold for other loss functions beyond squared loss?
7. How does the paper's analysis relate to the linear spline interpolation results in [1]?
8. Are there any minor suggestions or corrections that could improve the clarity of the paper?
Summary Of The Paper
This paper studies weakly regularized two-layer 1D ReLU networks and provides a characterization of the fitted function based on geometric arguments. The authors also show that interpolating ReLU networks in the specified regime obtain the best possible generalization for learning 1d Lipschitz functions.
Review
The paper is mostly well written and seems theoretically sound. I also believe the paper contains valuable characterizations of the induced function resulting from training one hidden layer ReLU networks on 1D data. Please see my detailed comments below:
- What happens in the case of a binary classification framework? I think the same analysis should also apply to that setting (assuming squared loss + weakly regularized weights). Thus, the authors should be able to characterize the decision boundaries for such classification tasks. Could the authors comment on this?
- Is there a specific lower bound on the number of neurons needed to have the function characterizations in the paper? If so, the authors should explicitly state this assumption at the beginning of the paper, e.g., before the statement of Theorem 2.1.
- Do the authors have any intuition about (weakly) regularized training problems with penalized bias terms? I know that this is not a common case in practice, but I am still wondering whether regularizing the bias terms makes any significant change to the resulting function. If this analysis is not possible, the authors should briefly explain the challenges preventing it.
- Is this analysis valid only for squared loss?
- Could the authors explain the relation to the linear spline interpolation results in [1]?
Minor comments:
- At the top of page 3, b_i should be b_j.
- It is better to use the standard notation, i.e., n for the number of data samples and m for the number of neurons.
[1] Ergen, Tolga, and Mert Pilanci. "Convex geometry and duality of over-parameterized neural networks." Journal of Machine Learning Research 22.212 (2021): 1-63.
ICLR
Title
BROS: A Pre-trained Language Model for Understanding Texts in Document
Abstract
Understanding documents from their visual snapshots is an emerging and challenging problem that requires both advanced computer vision and NLP methods. Although the recent advance in OCR enables the accurate extraction of text segments, it is still challenging to extract key information from documents due to the diversity of layouts. To compensate for the difficulties, this paper introduces a pre-trained language model, BERT Relying On Spatiality (BROS), that represents and understands the semantics of spatially distributed texts. Different from previous pre-training methods on 1D text, BROS is pre-trained on large-scale semi-structured documents with a novel area-masking strategy while efficiently including the spatial layout information of input documents. Also, to generate structured outputs in various document understanding tasks, BROS utilizes a powerful graph-based decoder that can capture the relations between text segments. BROS achieves state-of-the-art results on four benchmark tasks: FUNSD, SROIE*, CORD, and SciTSR. Our experimental settings and implementation codes will be publicly available.
1 INTRODUCTION
Document intelligence (DI)¹, which understands industrial documents from their visual appearance, is a critical application of AI in business. One of the important challenges of DI is the key information extraction task (KIE) (Huang et al., 2019; Jaume et al., 2019; Park et al., 2019), which extracts structured information from documents such as financial reports, invoices, business emails, insurance quotes, and many others. KIE requires a multi-disciplinary perspective spanning from computer vision for extracting text from document images to natural language processing for parsing key information from the identified texts. Optical character recognition (OCR) is a key component for extracting texts in document images. As OCR provides a set of text blocks consisting of a text and its location, key information in documents can be represented as a single text block or a sequence of text blocks (Schuster et al., 2013; Qian et al., 2019; Hwang et al., 2019; 2020). Although OCR alleviates the burden of processing images, understanding semantic relations between text blocks on diverse layouts remains a challenging problem. To solve this problem, existing works use a pre-trained language model to utilize its effective representation of text. Hwang et al. (2019) fine-tunes BERT by regarding KIE tasks as sequence tagging problems. Denk & Reisswig (2019) uses BERT to incorporate textual information into image pixels during their image segmentation tasks. However, since BERT is designed for text sequences, these approaches artificially convert text blocks distributed in two dimensions into a single text sequence, losing spatial layout information. Recently, Xu et al. (2020) proposed LayoutLM, pre-trained on large-scale documents by utilizing spatial information of text blocks. They show the effectiveness of the pre-training approach by achieving high performance on several downstream tasks. Despite this success, LayoutLM has three limitations. First, LayoutLM embeds the x- and y-axes individually using trainable parameters like the position embedding of BERT, ignoring the gap between positions in a sequence and positions in 2D space. Second, its pre-training method is essentially identical to that of BERT and does not explicitly consider spatial relations between text blocks.
Finally, in its downstream tasks, LayoutLM only conducts sequential tagging tasks (e.g. BIO tagging) that require serialization of text blocks.
¹https://sites.google.com/view/di2019
These limitations indicate that LayoutLM fails not only to fully utilize spatial information but also to address KIE problems in practical scenarios where serialization of text blocks is difficult. This paper introduces an advanced language model, BROS, pre-trained on large-scale documents, and provides a new guideline for KIE tasks. Specifically, to address the three limitations mentioned above, BROS combines three proposed methods: (1) a 2D positional encoding method that can represent the continuous property of 2D space, (2) a novel area-masking pre-training strategy that performs masked language modeling in 2D, and (3) a combination with a graph-based decoder for solving KIE tasks. We evaluated BROS on four public KIE datasets: FUNSD (form-like documents), SROIE* (receipts), CORD (receipts), and SciTSR (table structures), and observed that BROS achieved the best results on all tasks. Also, to address the KIE problem under a more realistic setting, we removed the order information between text blocks from the four benchmark datasets. BROS still shows the best performance on these modified datasets. Further ablation studies show how each component contributes to the final performance of BROS.
2 RELATED WORK
2.1 PRE-TRAINED LANGUAGE MODELS
BERT (Devlin et al., 2019) is a pre-trained language model using Transformer (Vaswani et al., 2017) that shows superior performance on various NLP tasks. The main strategy to train BERT is a masked language model (MLM) that masks and estimates randomly selected tokens to learn the semantics of language from large-scale corpora. Many variants of BERT have been introduced to learn transferable knowledge by modifying the pre-training strategy. XLNet (Yang et al., 2019) permutes tokens during the pre-training phase to reduce a discrepancy from the fine-tuning phase. XLNet also utilizes relative position encoding to handle long texts. StructBERT (Wang et al., 2020) shuffles tokens in text spans and adds sentence prediction tasks for recovering the order of words or sentences. SpanBERT (Joshi et al., 2020) masks spans of tokens to extract better representations for span selection tasks such as question answering and co-reference resolution. ELECTRA (Clark et al., 2020) is trained to distinguish real and fake input tokens generated by another network for sample-efficient pre-training. Inspired by these previous works, BROS utilizes a new pre-training strategy that can capture complex spatial dependencies between text blocks distributed in two dimensions. Note that LayoutLM is the first pre-trained language model on spatial text blocks, but it still employs the original MLM of BERT.
2.2 KEY INFORMATION EXTRACTION FROM DOCUMENTS
Most of the existing approaches utilize a serializer to identify the text order of key information. POT (Hwang et al., 2019) applies BERT on serialized text blocks and extracts key contexts via a BIO tagging approach. CharGrid (Katti et al., 2018) and BERTGrid (Denk & Reisswig, 2019) map text blocks onto a grid space, identify the region of key information, and extract key contexts in a pre-determined order. Liu et al. (2019), Yu et al. (2020), and Qian et al. (2019) utilize graph convolutional networks to model dependencies between text blocks, but their decoders, which perform BIO tagging, rely on a serialization.
LayoutLM (Xu et al., 2020) is pre-trained on large-scale documents with spatial information of text blocks, but it also conducts BIO tagging for its downstream tasks. However, using a serializer and relying on the identified sequence has two limitations. First, the information represented in a two-dimensional layout can be lost by improper serialization. Second, there may even be no correct serialization order. A natural way to model key contexts from text blocks is a graph-based formulation that identifies all relationships between text blocks. SPADE (Hwang et al., 2020) proposes a graph-based decoder to extract key contexts from identified connectivity between text blocks without any serialization. Specifically, it utilizes BERT without its sequential position embeddings and trains the model while fine-tuning BERT. However, its performance is limited by the amount of data, as all relations have to be learned from scratch at the fine-tuning stage. To fully utilize the graph-based decoder, BROS is pre-trained on a large number of documents and is combined with the SPADE decoder to determine key contexts from text blocks.
3 BERT RELYING ON SPATIALITY (BROS)
The main structure of BROS follows BERT, but there are three novel differences: (1) a spatial encoding method that reflects the continuous property of 2D space, (2) a pre-training objective designed for text blocks in 2D space, and (3) a guideline for designing downstream models based on a graph-based formulation. Figure 1 shows a visual description of BROS for document KIE tasks.
3.1 ENCODING SPATIAL INFORMATION INTO BERT
3.1.1 REPRESENTATION OF A TEXT BLOCK LOCATION
The way spatial information of text blocks is represented is important for encoding information from layouts. We utilize sinusoidal functions to encode continuous values on the x- and y-axes, and merge them through a linear transformation to represent a point in 2D space. For a formal description, we use p = (x, y) to denote a point in 2D space and f^sinu : ℝ → ℝ^{D_s} to represent a sinusoidal function, where D_s is the dimension of the sinusoid embedding. BROS encodes a 2D point by applying the sinusoidal function to the x- and y-coordinates and concatenating them as p̄ = [f^sinu(x) ⊕ f^sinu(y)], where the ⊕ symbol indicates concatenation. The bounding box of a text block, bb_i, consists of four vertices, p^tl_i, p^tr_i, p^br_i, and p^bl_i, that indicate its top-left, top-right, bottom-right, and bottom-left points, respectively. The four points are converted into vectors p̄^tl_i, p̄^tr_i, p̄^br_i, and p̄^bl_i with f^sinu. Finally, to represent the spatial embedding bb_i, BROS combines the four identified vectors through a linear transformation,
bb_i = W^tl p̄^tl_i + W^tr p̄^tr_i + W^br p̄^br_i + W^bl p̄^bl_i,   (1)
where W^tl, W^tr, W^br, W^bl ∈ ℝ^{H×2D_s} are linear transition matrices and H is the hidden size of BERT. The periodic property of the sinusoidal function can encode continuous 2D coordinates more naturally than the point-specific embeddings used in BERT and LayoutLM. In addition, by learning the linear transition parameters, BROS provides an effective representation of a bounding box.
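Below is a minimal PyTorch sketch of the bounding-box embedding in Eq. (1). This is our own reading of the equation (module and argument names are illustrative, not from the released code), and it assumes the standard Transformer-style sinusoidal encoding for f^sinu with an even embedding dimension.

```python
import torch
import torch.nn as nn

def f_sinu(v, dim):
    """Sinusoidal encoding of a scalar coordinate tensor v -> (..., dim); dim assumed even."""
    half = dim // 2
    freqs = torch.exp(-torch.arange(half, dtype=torch.float32) *
                      (torch.log(torch.tensor(10000.0)) / half))
    angles = v.unsqueeze(-1) * freqs                     # (..., half)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

class BoundingBoxEmbedding(nn.Module):
    """bb_i = W_tl p_tl + W_tr p_tr + W_br p_br + W_bl p_bl, as in Eq. (1)."""
    def __init__(self, d_sinu=64, hidden=768):
        super().__init__()
        self.d_sinu = d_sinu
        # one linear map per corner, each acting on [f_sinu(x) ; f_sinu(y)]
        self.corner_proj = nn.ModuleList(
            [nn.Linear(2 * d_sinu, hidden, bias=False) for _ in range(4)])

    def forward(self, bbox):
        # bbox: (batch, n_blocks, 4, 2) with corners ordered tl, tr, br, bl
        x, y = bbox[..., 0], bbox[..., 1]
        p = torch.cat([f_sinu(x, self.d_sinu), f_sinu(y, self.d_sinu)], dim=-1)
        return sum(proj(p[..., c, :]) for c, proj in enumerate(self.corner_proj))

emb = BoundingBoxEmbedding()
boxes = torch.rand(2, 5, 4, 2)                           # normalized corner coordinates
print(emb(boxes).shape)                                  # torch.Size([2, 5, 768])
```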
3.1.2 ENCODING SPATIAL REPRESENTATION
Position encoding methods affect how models utilize the position information. In BERT, the position embedding is tied to the token through a point-wise summation. However, 2D spatial information is richer than a 1D sequence due to its continuous property and higher dimensionality. Moreover, text blocks can be placed at various locations in documents without significant changes in their semantic meaning. For example, the locations of page numbers differ over multiple document snapshots even though they are captured from a single document. Therefore, a more advanced approach is required to maximally include spatial information during encoding, beyond the simple summation approach used in BERT. In BROS, the spatial information is directly encoded during the contextualization of text blocks. Specifically, BROS calculates an attention logit combining both semantic and spatial features. The former is the same as in the original attention mechanism in Transformer (Vaswani et al., 2017), but the latter is a new component identifying the importance of the target location when the source context and location are given. Our proposed attention logit is formulated as follows,
A_{i,j} = (W^q t_i)^⊤ (W^k t_j) + (W^q t_i ⊙ W^{sq|q} bb_i)^⊤ (W^{sk|q} bb_j) + (W^{sq} bb_i)^⊤ (W^{sk} bb_j),   (2)
where t_i and t_j are context representations for the i-th and j-th tokens and W^q, W^k, W^{sq|q}, W^{sk|q}, W^{sq}, W^{sk} are linear transition matrices. The ⊙ symbol indicates the Hadamard product. The first term is an attention logit from contextual representations and the third term is from spatial representations. The second term is designed to model the spatial dependency given the source semantic representation t_i. The second and third terms are calculated independently at each layer because spatial dependencies might differ over layers.
3.2 PRE-TRAINING OBJECTIVE: AREA-MASKED LANGUAGE MODEL
Pre-training on diverse layouts from unlabeled documents is a key factor for document understanding tasks. To learn effective spatial representations, including relationships between text blocks, we propose a novel pre-training objective. Inspired by SpanBERT (Joshi et al., 2020), we expand spans of a 1D sequence to consecutive text blocks in 2D space. Specifically, we select a few regions in a document layout, mask all tokens of text blocks in the selected regions, and estimate the masked tokens. The rules for masking tokens in the area-masked language model are given by the following procedure (a code sketch follows below).
(a) Select a text block randomly and get the top-left and bottom-right points (p^tl and p^br) of the block.
(b) Identify the width, height, and center of the block as (w, h) = |p^tl − p^br| and c = (p^tl + p^br)/2.
(c) Expand the width and height as (ŵ, ĥ) = l · (w, h), where l ∼ exp(λ) and λ is a distribution parameter.
(d) Identify the rectangular masking area whose top-left and bottom-right points are p̂^tl = p^tl − (ŵ, ĥ) and p̂^br = p^br + (ŵ, ĥ), respectively.
(e) Mask all tokens of text blocks whose centers fall inside the area.
(f) Repeat (a)–(e) until 15% of the tokens are masked.
The rationale behind using an exponential distribution is to convert the geometric distribution used in SpanBERT for a discrete domain into a distribution for a continuous domain. Thus, we set λ = −ln(1 − p) with p = 0.2 as used in SpanBERT. In addition, we truncate the exponential distribution at 1 to prevent an infinite multiplier covering the whole space of the document. It should be noted that the masking area is expanded from a randomly selected text block since the area should be related to the text sizes and locations in order to represent text spans in 2D space. Figure 2 compares token- and area-masking on text blocks.
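The masking procedure (a)–(f) can be sketched as follows. This is our own illustrative implementation under the stated rules; the sampling details in the actual pre-training code may differ, and the block format (`tokens`, `tl`, `br`) is assumed purely for illustration.

```python
import numpy as np

def sample_area_mask(blocks, mask_ratio=0.15, p=0.2, rng=None):
    """blocks: list of dicts {'tokens': [...], 'tl': (x, y), 'br': (x, y)}.
    Returns the set of block indices whose tokens get masked."""
    if rng is None:
        rng = np.random.default_rng()
    lam = -np.log(1.0 - p)                               # exponential-rate analogue of SpanBERT's geometric p
    n_tokens = sum(len(b["tokens"]) for b in blocks)
    centers = np.array([((b["tl"][0] + b["br"][0]) / 2,
                         (b["tl"][1] + b["br"][1]) / 2) for b in blocks])
    masked, n_masked = set(), 0
    while n_masked < mask_ratio * n_tokens:
        b = blocks[rng.integers(len(blocks))]            # (a) pick a seed block
        (x0, y0), (x1, y1) = b["tl"], b["br"]
        w, h = abs(x1 - x0), abs(y1 - y0)                # (b) width / height
        l = min(rng.exponential(1.0 / lam), 1.0)         # (c) multiplier, truncated at 1
        ex, ey = l * w, l * h
        tl, br = (x0 - ex, y0 - ey), (x1 + ex, y1 + ey)  # (d) expanded masking area
        for i, (cx, cy) in enumerate(centers):           # (e) mask blocks whose center falls inside
            if tl[0] <= cx <= br[0] and tl[1] <= cy <= br[1] and i not in masked:
                masked.add(i)
                n_masked += len(blocks[i]["tokens"])
    return masked                                        # (f) loop stops once ~15% of tokens are masked
```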
Finally, the loss function for the area-masked language model is formed as
L_AMLM = − Σ_{x̂ ∈ A(x)} log p(x̂ | x\A(x)),   (3)
where x, A(x), and x\A(x) denote the tokens in a given image, the masked tokens whose text blocks are located in a masking area, and the remaining tokens, respectively. Similar to BERT (Devlin et al., 2019), the masked tokens are replaced by the [MASK] token 80% of the time, a random token 10% of the time, or left unchanged 10% of the time.
3.3 SPATIAL DEPENDENCY PARSERS FOR DOWNSTREAM TASKS
Key information in a document (e.g. the store address in a receipt) is represented as sub-sequences of text blocks. Although BIO tagging has been used to extract the sub-sequences from a text sequence, it cannot represent key texts in a document without the optimal order of text blocks. Therefore, BIO tagging cannot be applied when the optimal order is not available, which can often be the case in practical scenarios. To deal with this issue, BROS utilizes the decoder of SPADE (Hwang et al., 2020), which can infer a sequence of text blocks by employing a graph-based formulation. BROS supports two downstream tasks: (1) an entity extraction (EE) task and (2) an entity linking (EL) task. EE identifies a sequence of text blocks for key information (e.g. extracting address texts in a receipt) and EL determines relations between target texts when the target text blocks are known (e.g. identifying key and value text pairs). For EE tasks, BROS divides the problem into two sub-tasks: starting token classification (Figure 3, a) and subsequent token classification (Figure 3, b). Let t̃_i ∈ ℝ^H denote the i-th token representation from the last Transformer layer of BROS. The starting token classification conducts token-level tagging determining whether a token is a starting token of target information as follows,
f_stc(t̃_i) = softmax(W^stc t̃_i) ∈ ℝ^{C+1},   (4)
where W^stc ∈ ℝ^{(C+1)×H} is a linear transition matrix and C indicates the number of target classes. Here, the extra +1 dimension indicates non-starting tokens. The subsequent token classification is conducted by utilizing pair-wise token representations as follows,
f_ntc(t̃_i) = softmax( (W^{ntc-s} t̃_i)^⊤ (t^ntc ⊕ W^{ntc-t} t̃_1 ⊕ ··· ⊕ W^{ntc-t} t̃_N) )^⊤ ∈ ℝ^{N+1},   (5)
where W^{ntc-s}, W^{ntc-t} ∈ ℝ^{H_ntc×H} are linear transition matrices, H_ntc is a hidden feature dimension for the next token classification decoder, and N is the maximum number of tokens. Here, t^ntc ∈ ℝ^{H_ntc} is a model parameter used to classify tokens which do not have a next token or are not related to any class. It plays a similar role to an end-of-sequence token, [EOS], in NLP. By solving these two sub-tasks, BROS can identify a sequence of text blocks by finding first tokens and connecting subsequent tokens. For EL tasks, BROS conducts a binary classification for all possible pairs of tokens (Figure 3, c) as follows,
f_rel(t̃_i, t̃_j) = sigmoid( (W^{rel-s} t̃_i)^⊤ (W^{rel-t} t̃_j) ),   (6)
where W^{rel-s}, W^{rel-t} ∈ ℝ^{H_rel×H} are linear transition matrices and H_rel is a hidden feature dimension. In contrast to the subsequent token classification, a single token can hold multiple relations with other tokens to represent hierarchical structures of document layouts. For more detail about this graph-based formulation, see Appendix E.
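A compact PyTorch sketch of the decoder heads in Eqs. (4)–(6) is given below. It reflects our reading of the equations only (shapes and names are illustrative, not the authors' implementation); the softmax/cross-entropy over the returned logits would be applied in the training loss.

```python
import torch
import torch.nn as nn

class BROSDecoders(nn.Module):
    def __init__(self, hidden=768, n_classes=3, h_ntc=128, h_rel=128):
        super().__init__()
        self.start_cls = nn.Linear(hidden, n_classes + 1)      # Eq. (4): +1 for "not a start"
        self.ntc_src = nn.Linear(hidden, h_ntc)                # Eq. (5): source / target maps
        self.ntc_tgt = nn.Linear(hidden, h_ntc)
        self.t_ntc = nn.Parameter(torch.zeros(h_ntc))          # "no next token" slot (like [EOS])
        self.rel_src = nn.Linear(hidden, h_rel)                # Eq. (6)
        self.rel_tgt = nn.Linear(hidden, h_rel)

    def forward(self, t):                                      # t: (batch, N, hidden)
        start_logits = self.start_cls(t)                       # (batch, N, C+1)

        src = self.ntc_src(t)                                  # (batch, N, h_ntc)
        tgt = self.ntc_tgt(t)                                  # (batch, N, h_ntc)
        stop = self.t_ntc.expand(t.size(0), 1, -1)             # (batch, 1, h_ntc)
        next_logits = src @ torch.cat([stop, tgt], dim=1).transpose(1, 2)   # (batch, N, N+1)

        rel_probs = torch.sigmoid(self.rel_src(t) @ self.rel_tgt(t).transpose(1, 2))  # (batch, N, N)
        return start_logits, next_logits, rel_probs

dec = BROSDecoders()
out = dec(torch.randn(2, 10, 768))
print([o.shape for o in out])    # [(2, 10, 4), (2, 10, 11), (2, 10, 10)]
```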
4 KEY INFORMATION EXTRACTION TASKS
Here, we describe three EE tasks and three EL tasks from four KIE benchmark datasets.
• Form Understanding in Noisy Scanned Documents (FUNSD) (Jaume et al., 2019) is a set of documents with various forms. The dataset consists of 149 training and 50 testing examples. FUNSD has both EE and EL tasks. In the EE task, there are three semantic entities: Header, Question, and Answer. In the EL task, the semantic hierarchies are represented as relations between text blocks, such as header-question and question-answer pairs.
• SROIE* is a variant of Task 3 of "Scanned Receipts OCR and Information Extraction" (SROIE)², which consists of a set of store receipts. In the original SROIE task, semantic contents (Company, Date, Address, and Total price) are generated without an explicit connection to the text blocks. To convert SROIE into an EE task, we developed SROIE* by matching ground truth contents with text blocks. We also split the original training set into 526 training and 100 testing examples because the ground truths are not given in the original test set. SROIE* will be publicly available.
• Consolidated Receipt Dataset (CORD) (Park et al., 2019) is a set of store receipts with 800 training, 100 validation, and 100 testing examples. CORD consists of both EE and EL tasks. In the EE task, there are 30 semantic entities including menu name, menu price, and so on. In the EL task, the semantic entities are linked according to their layout structure. For example, menu name entities are linked to menu id, menu count, and menu price.
• Complicated Table Structure Recognition (SciTSR) (Chi et al., 2019) is an EL task that connects cells in a table to recognize the table structure. There are two types of relations: vertical and horizontal connections between cells. The dataset consists of 12,000 training images and 3,000 test images.
Although these four datasets provide test beds for the EE and EL tasks, they represent only a subset of real problems because the optimal order of text blocks is given. In a real service, users can submit documents with complex layouts where the serialization of input texts is non-trivial. FUNSD provides the optimal orders of text blocks related to target classes in both training and testing examples. In SROIE*, CORD, and SciTSR, the text blocks are serialized in reading order. To reveal the serialization problem in the EE and EL tasks, we randomly permuted the text blocks of the datasets to remove their order information. We denote the permuted datasets as p-FUNSD, p-SROIE*, p-CORD, and p-SciTSR and compare all models on them. For fair comparisons, we will release the permuted datasets.
5 EXPERIMENTS
5.1 EXPERIMENT SETTINGS
For pre-training, the IIT-CDIP Test Collection 1.0³ (Lewis et al., 2006), which consists of approximately 11M document images, is used, but the 400K RVL-CDIP dataset⁴ (Harley et al., 2015) is excluded, following LayoutLM. An in-house OCR engine was applied to obtain text blocks from the unlabeled document images. The main Transformer structure of BROS is the same as that of BERT. Following BERT-BASE, the hidden size, the number of self-attention heads, the feed-forward size, and the number of Transformer layers are set to 768, 12, 3072, and 12, respectively. The same pre-training setting as LayoutLM is used for a fair comparison.
²https://rrc.cvc.uab.es/?ch=13 ³https://ir.nist.gov/cdip/ ⁴https://www.cs.cmu.edu/~aharley/rvl-cdip/
BROS is trained using the AdamW optimizer (Loshchilov & Hutter, 2019) with a learning rate of 5e-5 and linear decay. The first 10% of the total epochs are used for warm-up. We initialized the weights of BROS with those of BERT-BASE and trained BROS on IIT-CDIP for 2 epochs with a batch size of 80, following LayoutLM. The pre-training takes 64 hours on 8 NVIDIA Tesla V100s with Distributed Data Parallel (DDP).
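The stated optimization recipe (AdamW, learning rate 5e-5, warm-up over the first 10% of training, then linear decay) corresponds to a schedule along the following lines; this is a generic sketch of such a schedule, not the authors' training script.

```python
import torch

def make_optimizer(model, total_steps, lr=5e-5, warmup_frac=0.10):
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    warmup = int(warmup_frac * total_steps)

    def lr_lambda(step):
        if step < warmup:
            return step / max(1, warmup)                                        # linear warm-up
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup))    # linear decay

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched   # call opt.step() and sched.step() once per training step
```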
During fine-tuning, the learning rate is set to 5e-5. The batch size is set to 16 for all tasks. The number of training epochs or steps is as follows: 100 epochs for FUNSD, 1K steps for SROIE* and CORD, and 7.5 epochs for SciTSR. The hidden feature dimensions, H_ntc and H_rel, of the SPADE decoder are set to 128 for FUNSD, 64 for SROIE*, and 256 for CORD and SciTSR. Although the authors of LayoutLM published their code on GitHub⁵, the data and the script file used for pre-training are not included. For a fair comparison, we made our own implementation, which we refer to as LayoutLM†, using the same pre-training data and script file used for BROS pre-training. We verified LayoutLM† by comparing its performance on FUNSD against the scores reported in (Xu et al., 2020). See Appendix A for more information.
⁵https://github.com/microsoft/unilm/tree/master/layoutlm
5.2 EXPERIMENTAL RESULTS WITH OPTIMAL ORDER INFORMATION
Table 1 shows the results on the four KIE datasets with the optimal order information of text blocks given. For EL tasks, we applied SPADE decoders to all pre-trained models, i.e., BERT, LayoutLM, and BROS. In all tasks, we observed that BERT shows lower scores than LayoutLM and BROS, presumably due to the loss of spatial information. BROS achieves the highest scores, showing the effectiveness of our approach. Specifically, on FUNSD, BROS shows state-of-the-art performance with large margins of 2.32pp in the EE task and 19.63pp in the EL task. Moreover, it should be noted that BROS achieves a higher F1 score than the LayoutLM variant that utilizes visual features (81.21 > 79.27 (Xu et al., 2020)). On SROIE* and CORD, BROS also shows the best performance over all the EE and EL tasks. On SciTSR, LayoutLM and BROS show the importance of pre-training by exceeding, with large margins, other baselines that are trained using either only the spatial information of cells (Tabby and DeepDeSRT) or without pre-training on spatial texts (GraphTSR). These results show that BROS captures better representations of text blocks for KIE downstream tasks.
5.3 EXPERIMENTAL RESULTS WITHOUT OPTIMAL ORDER INFORMATION
It is another challenging problem to arrange text blocks in an order that humans can understand (Li et al., 2020). Although most commercial OCR products provide an order for OCR text blocks, they are unable to reconcile the structural formatting of the texts precisely (see Appendix B). Therefore, the experiments in Section 5.2 cannot fully represent real KIE problems because they assume the optimal order of text blocks is given. To reveal the challenge, we removed the order information in all datasets by permuting the order of text blocks as mentioned in Section 4 and investigated how BERT, LayoutLM, and BROS work without the order information. We utilized a SPADE decoder for all models because BIO tagging on these permuted datasets cannot extract a sequence of text blocks in the correct order. Table 2 shows the results. Due to the loss of correct orders, BERT shows poor performance over all tasks. By utilizing spatial information of text blocks, LayoutLM† shows better performance, but it suffers from a huge performance degradation compared to the score computed with the optimal order. On the other hand, BROS shows results comparable to the cases with the optimal order and achieves better performance than BERT and LayoutLM†.
To systematically investigate how the order information affects the performance of the models, we construct variants of FUNSD by re-ordering text blocks with two sorting methods based on the top-left points. The text blocks of xy-FUNSD are sorted along the x-axis and then in ascending order of the y-axis, and those of yx-FUNSD are sorted along the y-axis and then in ascending order of the x-axis. Table 3 shows performance on p-FUNSD, xy-FUNSD, yx-FUNSD, and the original FUNSD. All models utilize a SPADE decoder for a fair comparison. Interestingly, the performance of LayoutLM† degrades in the order of FUNSD, yx-FUNSD, xy-FUNSD, and p-FUNSD, which matches how reasonable the serialization of text in 2D space is. On the other hand, the performance of BROS is relatively consistent. These results show that BROS is applicable to real KIE problems without relying on an additional serialization method.
5.4 ABLATION STUDIES
Table 4 provides the results of the ablative experiments computed while changing the pre-training strategy, the 2D position embedding and encoding methods, and the decoder for downstream tasks. The 2D embedding method represents how to treat spatial information of text blocks and the 2D encoding method indicates how to merge the 2D embeddings into BERT. The results show that all modifications improve performance when compared to the corresponding methods of LayoutLM. Specifically, the 2D position embedding and its encoding method show huge performance gaps of 6.45pp and 31.98pp, respectively. These results indicate a co-modality of our 2D continuous position embedding approach and its untied encoding method. LayoutLM and BROS are initialized with the weights of BERT to utilize the powerful knowledge BERT learns from large-scale corpora. However, BERT includes its 1D positional embeddings (1D-PE), which might be harmful by imposing a sequence over text blocks even though there is no order information. To investigate the effect of the 1D-PE, we conduct an additional ablative study. BROS without the 1D-PE shows the same F1 scores on both FUNSD and p-FUNSD (70.07), while BROS with the 1D-PE shows performance degradation when the dataset loses the optimal order information (81.21 on FUNSD → 75.14 on p-FUNSD). Nevertheless, BROS with the 1D-PE shows better performance on both datasets. This might be because the 1D-PE preserves the token order within a single text block. Based on this result, we decided to incorporate the 1D-PE in our model.
Our implementation, referred to as LayoutLM†, shows comparable performance over all settings.
B VISUALIZATION OF SERIALIZED OCR BLOCKS
With the developments in the field of machine learning, the performance of commercial OCR has improved over the years. However, it is still hard to trust the ordering of commercial OCR block outputs Li et al. (2020). Figure 4 shows the gap between a comprehensive reading order and the outputs of commercial OCR. Specifically, the figure contrasts how the words in the OCR results should be serialized (Figure 4a) with the fact that most commercially available OCR technologies are unable to reconcile the structural formatting of the text, leading them to order the words horizontally (Figure 4b). This cursory example illustrates that, as advanced as commercial OCR solutions have become, there are still ways to improve, and our proposed method is one way in which this can be done.
C ABLATION STUDIES
Here, we provide more ablation studies on the components proposed in the paper. In the following tables, the number of pre-training examples is 512K and the scores (F1) are the average of 5 experimental results. For all the EL tasks, since BIO tagging cannot address the problem, the SPADE decoder is applied to all models.
C.1 FURTHER ABLATION STUDIES ON ALL DOWNSTREAM TASKS
Table 6 and Table 7 are extensions of Table 4 and show the F1 scores for all downstream EE and EL tasks measured by changing each component of BROS one by one. From these tables, we can see that the settings of BROS show the best performance in most cases.
C.2 GRADUALLY ADDING PROPOSED COMPONENTS TO THE ORIGINAL LAYOUTLM
To evaluate performance improvements over LayoutLM, we provide the experimental results when gradually adding each new component. Table 8 and Table 9 provide the changes in F1 score for EE and EL tasks, respectively. In most cases, our proposed methods show performance improvements over all tasks.
C.3 PROPOSED COMPONENTS ON THE ORIGINAL LAYOUTLM
For an apples-to-apples comparison, we provide performance changes when adding each proposed component to LayoutLM. The results are shown in Table 10 and Table 11. When changing the original module to ours, the performance increases in every case except for the positional embedding (sinusoid & linear) alone. Interestingly, when combining our positional embedding and encoding (untied), the performance is dramatically increased. This result shows the benefits of using our proposed embedding and encoding methods together.
D RESOURCE ANALYSIS
Table 12 shows the resource and speed analysis of LayoutLM and BROS. The F1 scores of LayoutLM are taken from (Xu et al., 2020) and all pre-trained models are trained with 1 epoch of the 11M data. As can be seen, BROS shows better performance than LayoutLM-LARGE while requiring fewer parameters and less inference time.
E GRAPH-BASED FORMALIZATION FOR EE AND EL TASKS
Document KIE is a task that extracts structural information from documents. In this paper, we defined the EE task, which identifies text sequences for target classes, and the EL task, which links the heads of the text sequences to determine the structural information of documents. These EE and EL tasks can be interpreted as tasks that identify a directed graph structure between OCR text blocks. In this formalization, all tokens are treated as nodes in a graph and the links between the nodes indicate the structural relationships between tokens in a document. Figure 5 shows examples of FUNSD, SROIE, CORD, and SciTSR with the graph-based formalization.
F SAMPLE INFERENCE RESULTS OF THE FUNSD EE AND EL TASKS
Figure 6 shows the inference results of LayoutLM and BROS and the ground truth for the same FUNSD image. Even though the document has a complex layout, BROS identifies key contexts and relations reasonably well. However, we observed that LayoutLM tends to link unrelated contexts that are spatially far apart in the layout.
1. What is the main contribution of the paper in terms of pre-training strategies for semi-structured documents?
2. What are the strengths and weaknesses of the proposed BERT Relying On Spatiality (BROS) method compared to past work like LayoutLM?
3. How does the reviewer assess the effectiveness and adaptability of BROS based on the experiments conducted in the paper?
4. Are there any questions or concerns regarding the design choices made in the BROS method, such as the use of continuous 2D positional encoding, area-masking pre-training, and the graph-based decoder?
5. How does the reviewer evaluate the explanations and analyses provided in the paper, particularly in Sections 3.2 and 3.3?
6. Are there any suggestions for improving the clarity and completeness of the paper's content, such as providing more details on the pre-training objective for the area-masked language model or explaining the performance difference between the original LayoutLM and BROS in Table 4?
Review
Review Summary: The paper studies the problem of large-scale pre-training for semi-structured documents. It proposes a new pre-training strategy called BERT relying on Spatiality (BROS) with area-masking and utilizes a graph-based decoder to capture the semantic relation between text blocks to alleviate the serialization problem of LayoutLM. It points out that LayoutLM fails to fully utilize spatial information of text blocks and will face difficulties when text blocks cannot be easily serialized. The three drawbacks of LayoutLM are listed:
- X-axis and Y-axis are treated individually with point-specific embedding
- Pre-training is identical to BERT so does not consider spatial relations between text blocks
- Suffers from the serialization problem
The proposed three corresponding methods of BROS are:
- Continuous 2D positional encoding
- Area-masking pre-training on 2D language modeling
- Graph-based decoder for solving EE & EL tasks
Strengths:
- The paper makes incremental advances over past work (LayoutLM) and the proposed BROS model achieves SOTA performance on four EE/EL datasets (i.e., FUNSD, SROIE*, CORD, and SciTSR).
- The paper is generally easy to follow and could be better if it provided more important details in Sections 3.2 & 3.3.
- The experiment and discussion for Section 5.3 are quite convincing. BROS could achieve robust and consistent performance across all four permuted-version datasets, which demonstrates that BROS is adaptive to documents from practical scenarios.
Major concerns:
- For Section 3.2, the author didn't even provide the pre-training objective for the area-masked language model. I think the author should include the objective and define the exponential distribution explicitly.
- I'm confused about why the performance difference in Table 4 between the original LayoutLM and BROS is larger than that in Table 1. In the original LayoutLM, the 2D position encoding method is tied with token embedding. This applies to both Table 1 and Table 4. However, in Table 4 the performance difference on FUNSD EE is 42.46 vs 74.44, while in Table 1 the performance comparison is 78.89 vs 81.21. Could the author give some explanations on this?
- In Table 4, it is better for the author to clearly indicate each ablation's components. The author should also give the performance of the original LayoutLM and performances after gradually adding each new component to the original LayoutLM: such as original LayoutLM + Sinusoid & Linear, original LayoutLM + Sinusoid & Linear + untied with token embedding, etc.
- For the encoder design in Eq. (2), the second term is used to model the spatial dependency given the source semantic representation. How about adding a symmetric term to model the semantic dependency given the source spatial representation? My question is simply why not adopt a symmetric design for Eq. (2)?
- Can the author give the reason behind the design of t^ntc in Eq. (4)? I'm not so clear about the function of t^ntc. Does f^ntc output a distribution of the probability of being the next token over all N tokens?
- Could the author give a detailed analysis of which strategy contributes the most to BROS' robustness against permuted order information? From the results of Table 4, it is not the SPADE decoder, and the most important factor seems to be calculating semantic and position attentions separately, i.e., untied with token embedding and explicitly modeling semantic/position relations between text blocks. Do the authors agree with my conjecture?
Minor concerns:
- Although SPADE is not the most important component of BROS, I believe including details of the graph-based decoder will help readers understand the model much better.
- I'm curious about the performance of SpanBERT on the four datasets since the authors said that the area-masking of BROS is inspired by SpanBERT.
- In Table 3, the value of LayoutLM - FUNSD should be 78.89 since all other p-FUNSD & FUNSD related values are consistent with Tables 1 & 2.
ICLR
Title BROS: A Pre-trained Language Model for Understanding Texts in Document Abstract Understanding documents from their visual snapshots is an emerging and challenging problem that requires both advanced computer vision and NLP methods. Although the recent advances in OCR enable the accurate extraction of text segments, it is still challenging to extract key information from documents due to the diversity of layouts. To compensate for the difficulties, this paper introduces a pre-trained language model, BERT Relying On Spatiality (BROS), that represents and understands the semantics of spatially distributed texts. Different from previous pre-training methods on 1D text, BROS is pre-trained on large-scale semi-structured documents with a novel area-masking strategy while efficiently including the spatial layout information of input documents. Also, to generate structured outputs in various document understanding tasks, BROS utilizes a powerful graph-based decoder that can capture the relation between text segments. BROS achieves state-of-the-art results on four benchmark tasks: FUNSD, SROIE*, CORD, and SciTSR. Our experimental settings and implementation codes will be publicly available. 1 INTRODUCTION Document intelligence (DI)1, which understands industrial documents from their visual appearance, is a critical application of AI in business. One of the important challenges of DI is the key information extraction (KIE) task (Huang et al., 2019; Jaume et al., 2019; Park et al., 2019), which extracts structured information from documents such as financial reports, invoices, business emails, insurance quotes, and many others. KIE requires a multi-disciplinary perspective spanning from computer vision for extracting text from document images to natural language processing for parsing key information from the identified texts. Optical character recognition (OCR) is a key component for extracting texts in document images. As OCR provides a set of text blocks consisting of a text and its location, key information in documents can be represented as a single text block or a sequence of text blocks (Schuster et al., 2013; Qian et al., 2019; Hwang et al., 2019; 2020). Although OCR alleviates the burden of processing images, understanding semantic relations between text blocks on diverse layouts remains a challenging problem. To solve this problem, existing works use a pre-trained language model to utilize its effective representation of text. Hwang et al. (2019) fine-tunes BERT by regarding KIE tasks as sequence tagging problems. Denk & Reisswig (2019) use BERT to incorporate textual information into image pixels during their image segmentation tasks. However, since BERT is designed for text sequences, they artificially convert text blocks distributed in two dimensions into a single text sequence, losing spatial layout information. Recently, Xu et al. (2020) proposes LayoutLM, pre-trained on large-scale documents by utilizing spatial information of text blocks. They show the effectiveness of the pre-training approach by achieving high performance on several downstream tasks. Despite this success, LayoutLM has three limitations. First, LayoutLM embeds the x- and y-axes individually using trainable parameters like the position embedding of BERT, ignoring the difference between positions in a 1D sequence and in 2D space. Second, its pre-training method is essentially identical to that of BERT and does not explicitly consider spatial relations between text blocks.
Finally, in its downstream tasks, LayoutLM only conducts sequential tagging tasks (e.g. BIO tagging) that require serialization of text blocks. 1https://sites.google.com/view/di2019 These limitations indicate that LayoutLM fails not only to fully utilize spatial information but also to address KIE problems in practical scenarios when a serialization of text blocks is difficult. This paper introduces an advanced language model, BROS, pre-trained on large-scale documents, and provides a new guideline for KIE tasks. Specifically, to address the three limitations mentioned above, BROS combines three proposed methods: (1) a 2D positional encoding method that can represent the continuous property of 2D space, (2) a novel area-masking pre-training strategy that performs masked language modeling on 2D, and (3) a combination with a graph-based decoder for solving KIE tasks. We evaluated BROS on four public KIE datasets: FUNSD (form-like documents), SROIE* (receipts), CORD (receipts), and SciTSR (table structures) and observed that BROS achieved the best results on all tasks. Also, to address KIE problem under a more realistic setting we removed the order information between text blocks from the four benchmark datasets. BROS still shows the best performance on these modified datasets. Further ablation studies provide how each component contributes to the final performances of BROS. 2 RELATED WORK 2.1 PRE-TRAINED LANGUAGE MODELS BERT (Devlin et al., 2019) is a pre-trained language model using Transformer (Vaswani et al., 2017) that shows superior performance on various NLP tasks. The main strategy to train BERT is a masked language model (MLM) that masks and estimates randomly selected tokens to learn the semantics of language from large-scale corpora. Many variants of BERT have been introduced to learn transferable knowledge by modifying the pre-training strategy. XLNet (Yang et al., 2019) permutes tokens during the pre-training phase to reduce a discrepancy from the fine-tuning phase. XLNet also utilizes relative position encoding to handle long texts. StructBERT (Wang et al., 2020) shuffles tokens in text spans and adds sentence prediction tasks for recovering the order of words or sentences. SpanBERT (Joshi et al., 2020) masks span of tokens to extract better representation for span selection tasks such as question answering and co-reference resolution. ELECTRA (Clark et al., 2020) is trained to distinguish real and fake input tokens generated by another network for sample-efficient pre-training. Inspired by these previous works, BROS utilizes a new pre-training strategy that can capture complex spatial dependencies between text blocks distributed on two dimensions. Note that LayoutLM is the first pre-trained language model on spatial text blocks but they still employs the original MLM of BERT. 2.2 KEY INFORMATION EXTRACTION FROM DOCUMENTS Most of the existing approaches utilize a serializer to identify the text order of key information. POT (Hwang et al., 2019) applies BERT on serialized text blocks and extracts key contexts via a BIO tagging approach. CharGrid (Katti et al., 2018) and BERTGrid (Denk & Reisswig, 2019) map text blocks upon a grid space, identify the region of key information, and extract key contexts in the pre-determined order. Liu et al. (2019), Yu et al. (2020), and Qian et al. (2019) utilize graph convolutional networks to model dependencies between text blocks but their decoder that performs BIO tagging relies on a serialization. 
LayoutLM (Xu et al., 2020) is pre-trained on large-scale documents with spatial information of text blocks, but it also conducts BIO tagging for its downstream tasks. However, using a serializer and relying on the identified sequence has two limitations. First, the information represented in a two-dimensional layout can be lost by improper serialization. Second, there may even be no correct serialization order. A natural way to model key contexts from text blocks is a graph-based formulation that identifies all relationships between text blocks. SPADE (Hwang et al., 2020) proposes a graph-based decoder to extract key contexts from identified connectivity between text blocks without any serialization. Specifically, they utilize BERT without its sequential position embeddings and train the model while fine-tuning BERT. However, their performance is limited by the amount of data, as all relations have to be learned from the beginning at the fine-tuning stage. To fully utilize the graph-based decoder, BROS is pre-trained on a large number of documents and is combined with the SPADE decoder to determine key contexts from text blocks. 3 BERT RELYING ON SPATIALITY (BROS) The main structure of BROS follows BERT, but there are three novel differences: (1) a spatial encoding method that reflects the continuous property of 2D space, (2) a pre-training objective designed for text blocks on 2D space, and (3) a guideline for designing downstream models based on a graph-based formulation. Figure 1 shows a visual description of BROS for document KIE tasks. 3.1 ENCODING SPATIAL INFORMATION INTO BERT 3.1.1 REPRESENTATION OF A TEXT BLOCK LOCATION The way spatial information of text blocks is represented is important for encoding information from layouts. We utilize sinusoidal functions to encode continuous values of the x- and y-axes, and merge them through a linear transformation to represent a point in 2D space. For a formal description, we use $p = (x, y)$ to denote a point in 2D space and $f^{\mathrm{sinu}}: \mathbb{R} \rightarrow \mathbb{R}^{D_s}$ to represent a sinusoidal function, where $D_s$ is the dimension of the sinusoid embedding. BROS encodes a 2D point by applying the sinusoidal function to the x- and y-axes and concatenating them as $\bar{p} = [f^{\mathrm{sinu}}(x) \oplus f^{\mathrm{sinu}}(y)]$. The $\oplus$ symbol indicates concatenation. The bounding box of a text block, $bb_i$, consists of four vertices, $p^{tl}_i$, $p^{tr}_i$, $p^{br}_i$, and $p^{bl}_i$, which indicate the top-left, top-right, bottom-right, and bottom-left points, respectively. The four points are converted into vectors $\bar{p}^{tl}_i$, $\bar{p}^{tr}_i$, $\bar{p}^{br}_i$, and $\bar{p}^{bl}_i$ with $f^{\mathrm{sinu}}$. Finally, to represent the spatial embedding $bb_i$, BROS combines the four identified vectors through a linear transformation, $$bb_i = W^{tl}\bar{p}^{tl}_i + W^{tr}\bar{p}^{tr}_i + W^{br}\bar{p}^{br}_i + W^{bl}\bar{p}^{bl}_i, \quad (1)$$ where $W^{tl}, W^{tr}, W^{br}, W^{bl} \in \mathbb{R}^{H \times 2D_s}$ are linear transition matrices and $H$ is the hidden size of BERT. The periodic property of the sinusoidal function can encode continuous 2D coordinates more naturally than the point-specific embeddings used in BERT and LayoutLM. In addition, by learning the linear transition parameters, BROS provides an effective representation of a bounding box. 3.1.2 ENCODING SPATIAL REPRESENTATION Position encoding methods affect how models utilize the position information. In BERT, the position embedding is tied to the token through a point-wise summation. However, 2D spatial information is richer than that of a 1D sequence due to its continuous property and higher dimensionality.
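To make the point and bounding-box embeddings concrete, the following is a minimal NumPy sketch of Eq. (1). The Transformer-style frequency schedule inside f_sinu and the random projection matrices are illustrative assumptions; the text only specifies that $f^{\mathrm{sinu}}$ maps a scalar coordinate to $\mathbb{R}^{D_s}$.

```python
import numpy as np

def f_sinu(x, d_s):
    """Sinusoidal features of a scalar coordinate (Transformer-style schedule, assumed)."""
    freqs = 1.0 / (10000.0 ** (np.arange(d_s // 2) * 2.0 / d_s))
    return np.concatenate([np.sin(x * freqs), np.cos(x * freqs)])  # length d_s

def encode_point(p, d_s):
    """p_bar = [f_sinu(x) ⊕ f_sinu(y)], length 2 * d_s."""
    x, y = p
    return np.concatenate([f_sinu(x, d_s), f_sinu(y, d_s)])

def bbox_embedding(corners, W, d_s):
    """Eq. (1): bb_i = sum over the four corners of W^corner @ p_bar^corner.
    corners: dict with keys 'tl', 'tr', 'br', 'bl' mapping to (x, y);
    W: dict of (H x 2*d_s) projection matrices, one per corner."""
    return sum(W[k] @ encode_point(corners[k], d_s) for k in ("tl", "tr", "br", "bl"))

# Toy usage with random projections (hypothetical shapes: H = 768, D_s = 64).
H, D_s = 768, 64
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(H, 2 * D_s)) for k in ("tl", "tr", "br", "bl")}
bb = bbox_embedding({"tl": (0.1, 0.2), "tr": (0.4, 0.2),
                     "br": (0.4, 0.3), "bl": (0.1, 0.3)}, W, D_s)
print(bb.shape)  # (768,)
```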
Moreover, text blocks can be placed at various locations in documents without significant changes in their semantic meaning. For example, the locations of page numbers differ over multiple document snapshots even though they are captured from a single document. Therefore, a more advanced approach is required to maximally include spatial information during encoding, beyond the simple summation approach used in BERT. In BROS, the spatial information is directly encoded during the contextualization of text blocks. Specifically, BROS calculates an attention logit combining both semantic and spatial features. The former is the same as the original attention mechanism in Transformer (Vaswani et al., 2017), but the latter is a new component identifying the importance of the target location when the source context and location are given. Our proposed attention logit is formulated as follows, $$A_{i,j} = (W^{q}t_i)^{\top}(W^{k}t_j) + (W^{q}t_i \odot W^{sq|q}bb_i)^{\top}(W^{sk|q}bb_j) + (W^{sq}bb_i)^{\top}(W^{sk}bb_j), \quad (2)$$ where $t_i$ and $t_j$ are context representations for the $i$th and $j$th tokens and $W^{q}$, $W^{k}$, $W^{sq|q}$, $W^{sk|q}$, $W^{sq}$, $W^{sk}$ are linear transition matrices. The symbol $\odot$ indicates the Hadamard product. The first term indicates an attention logit from the contextual representations and the third term is from the spatial representations. The second term is designed to model the spatial dependency given the source semantic representation, $t_i$. The second and third terms are independently calculated at each layer because spatial dependencies might differ over layers. 3.2 PRE-TRAINING OBJECTIVE: AREA-MASKED LANGUAGE MODEL Pre-training diverse layouts from unlabeled documents is a key factor for document understanding tasks. To learn effective spatial representations including relationships between text blocks, we propose a novel pre-training objective. Inspired by SpanBERT (Joshi et al., 2020), we expand spans of a 1D sequence to consecutive text blocks in 2D space. Specifically, we select a few regions in a document layout, mask all tokens of text blocks in the selected regions, and estimate the masked tokens. Tokens in the area-masked language model are masked according to the following procedure. (a) Select a text block randomly and get the top-left and bottom-right points ($p^{tl}$ and $p^{br}$) of the block. (b) Identify the width, height, and center of the block as $(w, h) = |p^{tl} - p^{br}|$ and $c = (p^{tl} + p^{br})/2$. (c) Expand the width and height as $(\hat{w}, \hat{h}) = l \cdot (w, h)$, where $l \sim \mathrm{Exp}(\lambda)$ and $\lambda$ is a distribution parameter. (d) Identify the rectangular masking area whose top-left and bottom-right points are $\hat{p}^{tl} = p^{tl} - (\hat{w}, \hat{h})$ and $\hat{p}^{br} = p^{br} + (\hat{w}, \hat{h})$, respectively. (e) Mask all tokens of text blocks whose centers fall in the area. (f) Repeat (a)–(e) until 15% of tokens are masked. The rationale behind using the exponential distribution is to convert the geometric distribution used in SpanBERT for a discrete domain into a distribution for a continuous domain. Thus, we set $\lambda = -\ln(1 - p)$ with $p = 0.2$ as used in SpanBERT. In addition, we truncate the exponential distribution at 1 to prevent an arbitrarily large multiplier from covering the entire space of the document. It should be noted that the masking area is expanded from a randomly selected text block since the area should be related to the text sizes and locations to represent text spans in 2D space. Figure 2 compares token- and area-masking on text blocks.
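The procedure (a)–(f) can be sketched in a few lines. The following is an illustrative NumPy implementation under stated assumptions: coordinates are given so that top-left corners have smaller values than bottom-right corners, a block is masked whenever its center lies inside the expanded rectangle, and the 15% budget may be slightly overshot on the last iteration; the authors' exact bookkeeping is not specified.

```python
import numpy as np

def sample_area_mask(blocks, mask_ratio=0.15, p=0.2, seed=0):
    """blocks: list of (token_ids, top_left, bottom_right) for one document.
    Returns the set of token ids to mask, following steps (a)-(f)."""
    rng = np.random.default_rng(seed)
    lam = -np.log(1.0 - p)                       # lambda = -ln(1 - p), p = 0.2 as in SpanBERT
    n_tokens = sum(len(toks) for toks, _, _ in blocks)
    masked = set()
    while len(masked) < mask_ratio * n_tokens:
        toks, p_tl, p_br = blocks[rng.integers(len(blocks))]           # (a) random block
        p_tl, p_br = np.asarray(p_tl, float), np.asarray(p_br, float)
        wh = np.abs(p_tl - p_br)                                        # (b) width and height
        l = min(rng.exponential(scale=1.0 / lam), 1.0)                  # (c) truncated Exp(lambda)
        wh_hat = l * wh
        area_tl, area_br = p_tl - wh_hat, p_br + wh_hat                 # (d) expanded rectangle
        for t, b_tl, b_br in blocks:                                    # (e) mask blocks whose center is inside
            c = (np.asarray(b_tl, float) + np.asarray(b_br, float)) / 2.0
            if np.all(c >= area_tl) and np.all(c <= area_br):
                masked.update(t)
    return masked                                                       # (f) loop until ~15% masked
```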
Finally, the loss function for the area-masked language model is formed as $$\mathcal{L}_{\mathrm{AMLM}} = -\sum_{\hat{x} \in A(x)} \log p(\hat{x} \mid x_{\setminus A(x)}), \quad (3)$$ where $x$, $A(x)$, and $x_{\setminus A(x)}$ denote the tokens in a given image, the masked tokens whose text blocks are located in the masking area, and the remaining tokens, respectively. Similar to BERT (Devlin et al., 2019), the masked tokens are replaced by the [MASK] token 80% of the time, a random token 10% of the time, or an unchanged token 10% of the time. 3.3 SPATIAL DEPENDENCY PARSERS FOR DOWNSTREAM TASKS Key information in a document (e.g. a store address in a receipt) is represented as sub-sequences of text blocks. Although BIO tagging has been used to extract the sub-sequences from a text sequence, it cannot represent key texts in a document without the optimal order of text blocks. Therefore, BIO tagging cannot be applied when the optimal order is not available, which often occurs in practical scenarios. To deal with the issue, BROS utilizes the decoder of SPADE (Hwang et al., 2020), which can infer a sequence of text blocks by employing a graph-based formulation. BROS supports two downstream tasks: (1) an entity extraction (EE) task and (2) an entity linking (EL) task. The EE task identifies a sequence of text blocks for key information (e.g. extracting address texts in a receipt) and the EL task determines relations between target texts when target text blocks are known (e.g. identifying key and value text pairs). For EE tasks, BROS divides the problem into two sub-tasks: starting token classification (Figure 3, a) and subsequent token classification (Figure 3, b). Let $\tilde{t}_i \in \mathbb{R}^{H}$ denote the $i$th token representation from the last Transformer layer of BROS. The starting token classification conducts token-level tagging determining whether a token is a starting token of target information as follows, $$f^{\mathrm{stc}}(\tilde{t}_i) = \mathrm{softmax}(W^{\mathrm{stc}}\tilde{t}_i) \in \mathbb{R}^{C+1}, \quad (4)$$ where $W^{\mathrm{stc}} \in \mathbb{R}^{(C+1) \times H}$ is a linear transition matrix and $C$ indicates the number of target classes. Here, the extra +1 dimension is considered to indicate non-starting tokens. The subsequent token classification is conducted by utilizing pair-wise token representations as follows, $$f^{\mathrm{ntc}}(\tilde{t}_i) = \mathrm{softmax}\big((W^{\mathrm{ntc\text{-}s}}\tilde{t}_i)^{\top}(t^{\mathrm{ntc}} \oplus W^{\mathrm{ntc\text{-}t}}\tilde{t}_1 \oplus \cdots \oplus W^{\mathrm{ntc\text{-}t}}\tilde{t}_N)\big)^{\top} \in \mathbb{R}^{N+1}, \quad (5)$$ where $W^{\mathrm{ntc\text{-}s}}, W^{\mathrm{ntc\text{-}t}} \in \mathbb{R}^{H^{\mathrm{ntc}} \times H}$ are linear transition matrices, $H^{\mathrm{ntc}}$ is a hidden feature dimension for the next token classification decoder, and $N$ is the maximum number of tokens. Here, $t^{\mathrm{ntc}} \in \mathbb{R}^{H^{\mathrm{ntc}}}$ is a model parameter to classify tokens which do not have a next token or are not related to any class. It has a similar role to an end-of-sequence token, [EOS], in NLP. By solving these two sub-tasks, BROS can identify a sequence of text blocks by finding first tokens and connecting subsequent tokens. For EL tasks, BROS conducts a binary classification for all possible pairs of tokens (Figure 3, c) as follows, $$f^{\mathrm{rel}}(\tilde{t}_i, \tilde{t}_j) = \mathrm{sigmoid}\big((W^{\mathrm{rel\text{-}s}}\tilde{t}_i)^{\top}(W^{\mathrm{rel\text{-}t}}\tilde{t}_j)\big), \quad (6)$$ where $W^{\mathrm{rel\text{-}s}}, W^{\mathrm{rel\text{-}t}} \in \mathbb{R}^{H^{\mathrm{rel}} \times H}$ are linear transition matrices and $H^{\mathrm{rel}}$ is a hidden feature dimension. Compared to the subsequent token classification, a single token can hold multiple relations with other tokens to represent hierarchical structures of document layouts. For more detail about this graph-based formulation, see Appendix E. 4 KEY INFORMATION EXTRACTION TASKS Here, we describe three EE tasks and three EL tasks from four KIE benchmark datasets. • Form Understanding in Noisy Scanned Documents (FUNSD) (Jaume et al., 2019) is a set of documents with various forms. The dataset consists of 149 training and 50 testing examples.
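The three decoder heads in Eqs. (4)–(6) are simple linear projections over the last-layer token features. Below is a hedged PyTorch sketch; the hidden sizes, initialization, and the choice to return raw logits (with softmax/sigmoid applied in the loss) are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SpadeStyleHeads(nn.Module):
    """Sketch of the EE heads (Eqs. 4-5) and the EL head (Eq. 6)."""
    def __init__(self, hidden=768, n_classes=3, h_ntc=128, h_rel=128):
        super().__init__()
        self.stc = nn.Linear(hidden, n_classes + 1)          # W^stc: start-token classes (+1 for "not a start")
        self.ntc_s = nn.Linear(hidden, h_ntc, bias=False)    # W^{ntc-s}
        self.ntc_t = nn.Linear(hidden, h_ntc, bias=False)    # W^{ntc-t}
        self.t_ntc = nn.Parameter(torch.zeros(1, h_ntc))     # t^ntc: "no next token" vector ([EOS]-like)
        self.rel_s = nn.Linear(hidden, h_rel, bias=False)    # W^{rel-s}
        self.rel_t = nn.Linear(hidden, h_rel, bias=False)    # W^{rel-t}

    def forward(self, t):                                    # t: (N, hidden) last-layer token features
        start_logits = self.stc(t)                           # (N, C+1), Eq. (4) before softmax
        targets = torch.cat([self.t_ntc, self.ntc_t(t)], 0)  # (N+1, h_ntc)
        next_logits = self.ntc_s(t) @ targets.T              # (N, N+1), Eq. (5) before softmax
        rel_logits = self.rel_s(t) @ self.rel_t(t).T         # (N, N),   Eq. (6) before sigmoid
        return start_logits, next_logits, rel_logits

heads = SpadeStyleHeads()
out = heads(torch.randn(10, 768))   # toy check with 10 tokens
print([o.shape for o in out])       # roughly: (10, 4), (10, 11), (10, 10)
```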
FUNSD has both EE and EL tasks. In the EE task, there are three semantic entities: Header, Question, and Answer. In the EL task, the semantic hierarchies are represented as relations between text blocks, like header-question and question-answer pairs. • SROIE* is a variant of Task 3 of “Scanned Receipts OCR and Information Extraction” (SROIE, https://rrc.cvc.uab.es/?ch=13) that consists of a set of store receipts. In the original SROIE task, semantic contents (Company, Date, Address, and Total price) are generated without explicit connection to the text blocks. To convert SROIE into an EE task, we developed SROIE* by matching ground truth contents with text blocks. We also split the original training set into 526 training and 100 testing examples because the ground truths are not given in the original test set. SROIE* will be publicly available. • Consolidated Receipt Dataset (CORD) (Park et al., 2019) is a set of store receipts with 800 training, 100 validation, and 100 testing examples. CORD consists of both EE and EL tasks. In the EE task, there are 30 semantic entities including menu name, menu price, and so on. In the EL task, the semantic entities are linked according to their layout structure. For example, menu name entities are linked to menu id, menu count, and menu price. • Complicated Table Structure Recognition (SciTSR) (Chi et al., 2019) is an EL task that connects cells in a table to recognize the table structure. There are two types of relations: vertical and horizontal connections between cells. The dataset consists of 12,000 training images and 3,000 test images. Although these four datasets provide test beds for the EE and EL tasks, they represent only a subset of real problems because the optimal order of text blocks is given. In real services, users can submit documents with complex layouts where the serialization of input texts is non-trivial. FUNSD provides the optimal orders of text blocks related to target classes in both training and testing examples. In SROIE*, CORD, and SciTSR, the text blocks are serialized in reading order. To reveal the serialization problem in the EE and EL tasks, we randomly permuted the text blocks of the datasets to remove their order information. We denote the permuted datasets as p-FUNSD, p-SROIE*, p-CORD, and p-SciTSR and compare all models on them. For fair comparisons, we will release the permuted datasets. 5 EXPERIMENTS 5.1 EXPERIMENT SETTINGS For pre-training, the IIT-CDIP Test Collection 1.0 (Lewis et al., 2006, https://ir.nist.gov/cdip/), which consists of approximately 11M document images, is used, but the 400K RVL-CDIP dataset (Harley et al., 2015, https://www.cs.cmu.edu/~aharley/rvl-cdip/) is excluded, following LayoutLM. An in-house OCR engine was applied to obtain text blocks from unlabeled document images. The main Transformer structure of BROS is the same as BERT. Following BERTBASE, the hidden size, the number of self-attention heads, the feed-forward size, and the number of Transformer layers are set to 768, 12, 3072, and 12, respectively. The same pre-training setting as LayoutLM is used for a fair comparison. BROS is trained using the AdamW optimizer (Loshchilov & Hutter, 2019) with a learning rate of 5e-5 and linear decay. The first 10% of the total epochs are used for warm-up. We initialized the weights of BROS with those of BERTBASE and trained BROS on IIT-CDIP for 2 epochs with a batch size of 80, following LayoutLM. The pre-training takes 64 hours on 8 NVIDIA Tesla V100s with Distributed Data Parallel (DDP).
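As a small illustration of how the order-free variants can be built, the snippet below shuffles the OCR block order of an example while keeping each block's text and bounding box intact; the exact construction of p-FUNSD, p-SROIE*, p-CORD, and p-SciTSR (and of the sorted xy-/yx- variants used later) is assumed to follow this simple pattern.

```python
import random

def permute_blocks(document, seed=0):
    """Build a p-* style example: shuffle the OCR text block order while keeping
    each block's text and bounding box unchanged (assumed construction)."""
    blocks = list(document["blocks"])          # each block: {"text": ..., "bbox": (x1, y1, x2, y2)}
    random.Random(seed).shuffle(blocks)
    return {**document, "blocks": blocks}

def sort_blocks_by_top_left(document, key="xy"):
    """xy-/yx-style ordering by the top-left point (cf. xy-FUNSD / yx-FUNSD)."""
    idx = (0, 1) if key == "xy" else (1, 0)
    blocks = sorted(document["blocks"], key=lambda b: (b["bbox"][idx[0]], b["bbox"][idx[1]]))
    return {**document, "blocks": blocks}
```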
During fine-tuning, the learning rate is set to 5e-5. The batch size is set to 16 for all tasks. The number of training epochs or steps is as follows: 100 epochs for FUNSD, 1K steps for SROIE* and CORD, and 7.5 epochs for SciTSR. The hidden feature dimensions, $H^{\mathrm{ntc}}$ and $H^{\mathrm{rel}}$, of the SPADE decoder are set to 128 for FUNSD, 64 for SROIE*, and 256 for CORD and SciTSR. Although the authors of LayoutLM published their code on GitHub (https://github.com/microsoft/unilm/tree/master/layoutlm), the data and script file used for pre-training are not included. For a fair comparison, we made our own implementation, which we refer to as LayoutLM†, using the same pre-training data and script file used for BROS pre-training. We verified LayoutLM† by comparing its performance on FUNSD with the reported scores in (Xu et al., 2020). See Appendix A for more information. 5.2 EXPERIMENTAL RESULTS WITH OPTIMAL ORDER INFORMATION Table 1 shows the results on the four KIE datasets with the optimal order information of text blocks given. For EL tasks, we applied SPADE decoders to all pre-trained models, i.e., BERT, LayoutLM, and BROS. In all tasks, we observed that BERT shows lower scores than LayoutLM and BROS, presumably due to the loss of spatial information. BROS achieves the highest scores, showing the effectiveness of our approach. Specifically, in FUNSD, BROS shows state-of-the-art performance with large margins of 2.32pp in the EE task and 19.63pp in the EL task. Moreover, it should be noted that BROS achieves a higher F1 score than one of the LayoutLM variants that utilizes visual features (81.21 > 79.27 (Xu et al., 2020)). In SROIE* and CORD, BROS also shows the best performance over all the EE and EL tasks. In SciTSR, LayoutLM and BROS show the importance of pre-training by exceeding, with large margins, other baselines that are trained using either only the spatial information of cells (Tabby and DeepDeSRT) or without pre-training on spatial texts (GraphTSR). These results show that BROS captures better representations of text blocks for KIE downstream tasks. 5.3 EXPERIMENTAL RESULTS WITHOUT OPTIMAL ORDER INFORMATION It is another challenging problem to arrange text blocks in an order that humans can understand (Li et al., 2020). Although most commercial OCR products provide an order of OCR text blocks, they are unable to reconcile the structural formatting of the texts precisely (see Appendix B). Therefore, the experiments in Section 5.2 cannot fully represent real KIE problems because they assume the optimal order of text blocks is given. To reveal the challenge, we removed the order information in all datasets by permuting the order of text blocks as mentioned in Section 4 and investigated how BERT, LayoutLM, and BROS work without the order information. We utilized a SPADE decoder for all models because BIO tagging on these permuted datasets cannot extract a sequence of text blocks in the correct order. Table 2 shows the results. Due to the loss of the correct orders, BERT shows poor performance over all tasks. By utilizing spatial information of text blocks, LayoutLM† shows better performance, but it suffers from huge performance degradation compared to the scores computed with the optimal order. On the other hand, BROS shows results comparable to the cases with the optimal order and achieves better performance than BERT and LayoutLM†.
To systematically investigate how the order information affects the performance of the models, we construct variants of FUNSD by re-ordering text blocks with two sorting methods based on the top-left points. The text blocks of xy-FUNSD are sorted according to the x-axis in ascending order of the y-axis, and those of yx-FUNSD are sorted according to the y-axis in ascending order of the x-axis. Table 3 shows performance on p-FUNSD, xy-FUNSD, yx-FUNSD, and the original FUNSD. All models utilize a SPADE decoder for a fair comparison. Interestingly, the performance of LayoutLM† degrades in the order of FUNSD, yx-FUNSD, xy-FUNSD, and p-FUNSD, which follows how reasonable each serialization of text on 2D space is. On the other hand, the performance of BROS is relatively consistent. These results show that BROS is applicable to real KIE problems without relying on an additional serialization method. 5.4 ABLATION STUDIES Table 4 provides the results of the ablative experiments computed while changing the pre-training strategy, the 2D position embedding and encoding methods, and the decoder for downstream tasks. The 2D embedding method represents how to treat the spatial information of text blocks, and the 2D encoding method indicates how to merge the 2D embeddings into BERT. The results show that all modifications improve performance when compared with the corresponding methods of LayoutLM. Specifically, the 2D position embedding and its encoding method show huge performance gaps of 6.45pp and 31.98pp, respectively. These results indicate a synergy between our continuous 2D position embedding approach and its untied encoding method. LayoutLM and BROS are initialized with the weights of BERT to utilize the powerful knowledge that BERT learns from large-scale corpora. However, BERT includes its 1D positional embeddings (1D-PE), which might be harmful by imposing a sequence over text blocks even when there is no order information. To investigate the effectiveness of the 1D-PE, we conduct an additional ablative study. BROS without the 1D-PE shows the same F1 scores on both FUNSD and p-FUNSD (70.07), but BROS with the 1D-PE shows performance degradation when the dataset loses the optimal order information (81.21 on FUNSD → 75.14 on p-FUNSD). Nevertheless, BROS with the 1D-PE shows better performance on both datasets. This might be because the 1D-PE preserves the token order within a single text block. Based on this result, we decided to incorporate the 1D-PE in our model. 6 CONCLUSION We present a novel pre-trained language model, BROS, for understanding semi-structured documents. BROS encodes the continuous 2D positions of text blocks and learns natural language from text blocks with an area-driven training strategy. To extract key contexts from text blocks without order information, BROS adopts a graph-based decoder that identifies text sequences for EE tasks and layout relationships for EL tasks. In our extensive experiments on three EE and three EL tasks, BROS consistently shows better performance as well as robustness to perturbed orders of text blocks compared to the existing approaches. A REPRODUCING THE LAYOUTLM As mentioned in the paper, to compare BROS with LayoutLM in diverse experimental settings, we implement LayoutLM in our experimental pipeline. Table 5 compares our implementation with the reported scores in Xu et al. (2020). As can be seen, multiple experiments are conducted according to the amount of pre-training data.
Our implementation, referred to LayoutLM†, shows comparable performances over all settings. B VISUALIZATION OF SERIALIZED OCR BLOCKS With the developments in the field of machine learning, the performance of commercial OCR has improved over the years. However, it is still hard to entrust the ordering of commercial OCR block outputs Li et al. (2020). Figure 4 shows the gap between the comprehensive reading order and outputs of commercial OCR. Specifically, the figure contrasts how the words in the OCR results should be serialized (Figure 4a) but most commercially available OCR technologies are unable to reconcile the structural formatting of the text – leading to them ordering the words horizontally (Figure 4b). This cursory example illustrates that as advanced as commercial OCR solutions have become, there are still ways to improve and our proposed method is one way in which this can be done. C ABLATION STUDIES Here, we provide more ablation studies on the components proposed in the paper. In the following tables, the number of pre-training data is 512K and the scores (F1) are the average of 5 experimental results. And for all the EL tasks, since BIO tagging cannot address the problem, SPADE decoder is applied to all models. C.1 FURTHER ABLATION STUDIES ON ALL DOWNSTREAM TASKS Table 6 and Table 7 are the extension of Table 4 and show the F1 scores for all downstream EE and EL tasks measured by changing each components one by one in BROS. From these tables, we can see that the settings of BROS show the best performance in most cases. C.2 GRADUALLY ADDING PROPOSED COMPONENTS TO THE ORIGINAL LAYOUTLM To evaluate performance improvements from LayoutLM, we provide the experimental results when gradually adding each new component. Table 8 and Table 9 provide performance changes of F1 score for EE and EL tasks, respectively. In most cases, our proposed methods show performance improvements over all tasks. C.3 PROPOSED COMPONENTS ON THE ORIGINAL LAYOUTLM For apples-to-apples comparison, we provides performance changes when adding each proposed component on LayoutLM. The results are shown in Table 10 and Table 11. When changing the original module to ours, the performances are solely increased except for the case of the positional embedding (sinusoid & linear). Interestingly, when combining our positional embedding and encoding (untied), the performance is dramatically increased. This result shows the benefits of using our proposed embedding and encoding methods together. D RESOURCE ANALYSIS Table 12 shows the resource and speed analysis of LayoutLM and BROS. The F1 scores of LayoutLM are referred from (Xu et al., 2020) and all pre-training models are trained with 1 epoch of 11M data. As can be seen, BROS shows better performance than LayoutLMLARGE even though requiring fewer parameters and less inference time. E GRAPH-BASED FORMALIZATION FOR EE AND EL TASKS Document KIE is a task that extracts structural information from documents. In this paper, we defined EE task that identifies text sequences for target classes and EL task that links the head of the text sequences to determine structural information of documents. These EE and EL tasks can be interpreted as tasks that identify a directional graph structure between OCR text blocks. In this formalization, all tokens are treated as nodes in a graph and the links between the nodes indicate the structural relationships between tokens in a document. Figure 5 shows examples of FUNSD, SROIE, CORE, and SciTSR with the graph-based formalization. 
F SAMPLE INFERENCE RESULTS OF THE FUNSD EE AND EL TASKS Figure 6 shows the inference results of LayoutLM and BROS and the ground truth of a same FUNSD image. Even though the document has a complex layout, BROS identified key contexts and relations reasonably. However, we observed that LayoutLM tends to link unrelated contexts that are spatially far in the layout.
1. What are the strengths and weaknesses of the proposed pre-trained language model BROS?
2. How does the area-masking strategy in BROS differ from traditional masking language models?
3. What are the limitations of integrating spatial information into the attention mechanism as a pair-wise bias term?
4. Why is the benefit from the graph decoder in BROS marginal for commercial OCR tools?
5. What are the similarities and differences between the graph-based decoder in BROS and SPADE?
6. Is the comparison between sinusoid & linear functions and learnable embeddings in BROS appropriate or biased?
7. Does the paper provide sufficient novelty and contributions to qualify for the ICLR conference?
Review
The paper proposes the pre-trained language model BROS, which aims to leverage both text and spatial information to improve information extraction on documents. Using the graph-based decoder from SPADE, BROS achieves SOTA performance on some entity extraction and entity linking downstream tasks. However, the area-masking strategy does not show significant improvement over LayoutLM, and the graph decoder was proposed in SPADE, so it is not new. In addition, as most commercial OCR tools already provide very good reading-order information, the benefit from the graph decoder might be marginal.
Pros
- The paper introduces the area-masking pre-training strategy, which can be seen as a natural generalization of masked language modeling to the 2D plane.
- The authors integrate spatial information into the attention mechanism as a pair-wise bias term, which is reasonable.
- BROS utilizes the graph-based decoder from SPADE and improves performance on downstream tasks.
Cons
- The area-masking strategy masks a small area centered at some tokens, which is actually similar to masking the center token only. Also, given that the FUNSD dataset is small, the area-masking strategy does not show significant improvement over vanilla MLM.
- This paper shows that sinusoid & linear functions can encode 2D position efficiently. However, it is not reasonable to compare sinusoid & linear functions and learnable embeddings on small data, since learnable embeddings could leverage a large amount of data and get more gains.
- The graph-based decoder part is identical to that in SPADE, so it is not suitable to present as a contribution of this paper.
In summary, this paper largely overlaps with previous research work. I do not think it is qualified for the ICLR conference.
ICLR
Title BROS: A Pre-trained Language Model for Understanding Texts in Document Abstract Understanding document from their visual snapshots is an emerging and challenging problem that requires both advanced computer vision and NLP methods. Although the recent advance in OCR enables the accurate extraction of text segments, it is still challenging to extract key information from documents due to the diversity of layouts. To compensate for the difficulties, this paper introduces a pre-trained language model, BERT Relying On Spatiality (BROS), that represents and understands the semantics of spatially distributed texts. Different from previous pre-training methods on 1D text, BROS is pre-trained on large-scale semistructured documents with a novel area-masking strategy while efficiently including the spatial layout information of input documents. Also, to generate structured outputs in various document understanding tasks, BROS utilizes a powerful graphbased decoder that can capture the relation between text segments. BROS achieves state-of-the-art results on four benchmark tasks: FUNSD, SROIE*, CORD, and SciTSR. Our experimental settings and implementation codes will be publicly available. 1 INTRODUCTION Document intelligence (DI)1, which understands industrial documents from their visual appearance, is a critical application of AI in business. One of the important challenges of DI is a key information extraction task (KIE) (Huang et al., 2019; Jaume et al., 2019; Park et al., 2019) that extracts structured information from documents such as financial reports, invoices, business emails, insurance quotes, and many others. KIE requires a multi-disciplinary perspective spanning from computer vision for extracting text from document images to natural language processing for parsing key information from the identified texts. Optical character recognition (OCR) is a key component to extract texts in document images. As OCR provides a set of text blocks consisting of a text and its location, key information in documents can be represented as a single or a sequence of the text blocks (Schuster et al., 2013; Qian et al., 2019; Hwang et al., 2019; 2020). Although OCR alleviates the burden of processing images, understanding semantic relations between text blocks on diverse layouts remains a challenging problem. To solve this problem, existing works use a pre-trained language model to utilize its effective representation of text. Hwang et al. (2019) fine-tunes BERT by regarding KIE tasks as sequence tagging problems. Denk & Reisswig (2019) uses BERT to incorporate textual information into image pixels during their image segmentation tasks. However, since BERT is designed for text sequences, they artificially convert text blocks distributed in two dimensions into a single text sequence losing spatial layout information. Recently, Xu et al. (2020) proposes LayoutLM pre-trained on large-scale documents by utilizing spatial information of text blocks. They show the effectiveness of the pretraining approach by achieving high performance on several downstream tasks. Despite this success, LayoutLM has three limitations. First, LayoutLM embeds x- and y-axis individually using trainable parameters like the position embedding of BERT, ignoring the gap between positions in a sequence and 2D space. Second, its pre-training method is essentially identical to BERT that does not explicitly consider spatial relations between text blocks. 
Finally, in its downstream tasks, LayoutLM only conducts sequential tagging tasks (e.g. BIO tagging) that require serialization of text blocks. 1https://sites.google.com/view/di2019 These limitations indicate that LayoutLM fails not only to fully utilize spatial information but also to address KIE problems in practical scenarios when a serialization of text blocks is difficult. This paper introduces an advanced language model, BROS, pre-trained on large-scale documents, and provides a new guideline for KIE tasks. Specifically, to address the three limitations mentioned above, BROS combines three proposed methods: (1) a 2D positional encoding method that can represent the continuous property of 2D space, (2) a novel area-masking pre-training strategy that performs masked language modeling on 2D, and (3) a combination with a graph-based decoder for solving KIE tasks. We evaluated BROS on four public KIE datasets: FUNSD (form-like documents), SROIE* (receipts), CORD (receipts), and SciTSR (table structures) and observed that BROS achieved the best results on all tasks. Also, to address KIE problem under a more realistic setting we removed the order information between text blocks from the four benchmark datasets. BROS still shows the best performance on these modified datasets. Further ablation studies provide how each component contributes to the final performances of BROS. 2 RELATED WORK 2.1 PRE-TRAINED LANGUAGE MODELS BERT (Devlin et al., 2019) is a pre-trained language model using Transformer (Vaswani et al., 2017) that shows superior performance on various NLP tasks. The main strategy to train BERT is a masked language model (MLM) that masks and estimates randomly selected tokens to learn the semantics of language from large-scale corpora. Many variants of BERT have been introduced to learn transferable knowledge by modifying the pre-training strategy. XLNet (Yang et al., 2019) permutes tokens during the pre-training phase to reduce a discrepancy from the fine-tuning phase. XLNet also utilizes relative position encoding to handle long texts. StructBERT (Wang et al., 2020) shuffles tokens in text spans and adds sentence prediction tasks for recovering the order of words or sentences. SpanBERT (Joshi et al., 2020) masks span of tokens to extract better representation for span selection tasks such as question answering and co-reference resolution. ELECTRA (Clark et al., 2020) is trained to distinguish real and fake input tokens generated by another network for sample-efficient pre-training. Inspired by these previous works, BROS utilizes a new pre-training strategy that can capture complex spatial dependencies between text blocks distributed on two dimensions. Note that LayoutLM is the first pre-trained language model on spatial text blocks but they still employs the original MLM of BERT. 2.2 KEY INFORMATION EXTRACTION FROM DOCUMENTS Most of the existing approaches utilize a serializer to identify the text order of key information. POT (Hwang et al., 2019) applies BERT on serialized text blocks and extracts key contexts via a BIO tagging approach. CharGrid (Katti et al., 2018) and BERTGrid (Denk & Reisswig, 2019) map text blocks upon a grid space, identify the region of key information, and extract key contexts in the pre-determined order. Liu et al. (2019), Yu et al. (2020), and Qian et al. (2019) utilize graph convolutional networks to model dependencies between text blocks but their decoder that performs BIO tagging relies on a serialization. 
LayoutLM (Xu et al., 2020) is pre-trained on large-scale documents with spatial information of text blocks, but it also conducts BIO tagging for their downstream tasks. However, using a serializer and relying on the identified sequence has two limitations. First, the information represented in two dimensional layout can be lost by improper serialization. Second, there may even be no correct serialization order. A natural way to model key contexts from text blocks is a graph-based formulation that identifies all relationships between text blocks. SPADE (Hwang et al., 2020) proposes a graph-based decoder to extract key contexts from identified connectivity between text blocks without any serialization. Specifically, they utilize BERT without its sequential position embeddings and train the model while fine-tuning BERT. However, their performance is limited by the amount of data as all relations have to be learned from the beginning at the fine-tuning stage. To fully utilize the graph-based decoder, BROS is pre-trained on a large number of documents and is combined with the SPADE decoder to determine key contexts from text blocks. 3 BERT RELYING ON SPATIALITY (BROS) The main structure of BROS follows BERT, but there are three novel differences: (1) a spatial encoding metric that reflects the continuous property of 2D space, (2) a pre-training objective designed for text blocks on 2D space, and (3) a guideline for designing downstream models based on a graphbased formulation. Figure 1 shows visual description of BROS for document KIE tasks. 3.1 ENCODING SPATIAL INFORMATION INTO BERT 3.1.1 REPRESENTATION OF A TEXT BLOCK LOCATION The way to represent spatial information of text blocks is important to encode information from layouts. We utilize sinusoidal functions to encode continuous values of x- and y-axis, and merge them through a linear transformation to represent a point upon 2D space. For formal description, we use p = (x, y) to denote a point on 2D space and f sinu : R → RDs to represent a sinusoidal function. Ds is the dimensions of sinusoid embedding. BROS encodes a 2D point by applying the sinusoidal function to x- and y-axis and concatenating them as p̄ = [f sinu(x)⊕ f sinu(y)]. The ⊕ symbol indicates concatenation. The bounding box of a text block, bbi, consists of four vertices, such as ptli , p tr i , p br i , and p bl i that indicate top-left, top-right, bottom-right, and bottom-left points, respectively. The four points are converted into vectors such as p̄tli , p̄ tr i , p̄ br i , and p̄bli with f sinu. Finally, to represent a spatial embedding, bbi, BROS combines four identified vectors through a linear transformation, bbi = W tlp̄tli + W trp̄tri + W brp̄bri + W blp̄bli , (1) where W tl, W tr, W br, W bl ∈ RH×2Ds are linear transition metrics and H is a hidden size of BERT. The periodic property of the sinusoidal function can encode continuous 2D coordinates more natural than using point-specific embedding used in BERT and LayoutLM. In addition, by learning the linear transition parameters, BROS provides an effective representation of a bounding box. 3.1.2 ENCODING SPATIAL REPRESENTATION Position encoding methods affect how models utilize the position information. In BERT, position embedding is tied with the token through a point-wise summation. However, 2D spatial information is richer than 1D sequence due to the their continuous property and the high dimensionality. 
Moreover, text blocks can be placed over various locations on documents without significant changes in its semantic meaning. For example, locations of page numbers differ over multiple document snapshots even though they are captured from a single document. Therefore, more advanced approach is required to maximally include spatial information during encoding beyond the simple summation approach used in BERT. In BROS, the spatial information is directly encoded during the contextualization of text blocks. Specifically, BROS calculates an attention logit combining both semantic and spatial features. The former is the same as the original attention mechanism in Transformer (Vaswani et al., 2017), but the latter is a new component identifying the importance of the target location when the source context and location are given. Our proposed attention logit is formulated as follows, Ai,j = (W qti) >(W ktj) + (W qti W sq|qbbi)>(W sk|qbbj) + (W sqbbi)>(W skbbj), (2) where ti and tj are context representations for ith and jth tokens and W q, W k, W sq|q, W sk|q, W sq, W sk are linear transition matrices. The symbol indicates Hadamard product. The first term indicates an attention logit from contextual representations and the third term is from spatial representation. The second term is designed to model the spatial dependency given the source semantic representation, ti. The second and third terms are independently calculated at each layer because spatial dependencies might differ over layers. 3.2 PRE-TRAINING OBJECTIVE: AREA-MASKED LANGUAGE MODEL Pre-training diverse layouts from unlabeled documents is a key factor for document understanding tasks. To learn effective spatial representation including relationships between text blocks, we propose a novel pre-training objective. Inspired by SpanBERT (Joshi et al., 2020), we expand spans of a 1D sequence to consecutive text blocks in 2D space. Specifically, we select a few regions in a document layout, mask all tokens of text blocks in the selected regions, and estimate the masked tokens. The rules for masking tokens in area-masked language model are as the following procedure. (a) Select a text block randomly and get the top-left and bottom-right points (ptl and pbr) of the block. (b) Identify the width, height, and center of the block as (w, h) = |ptl − pbr| and c = (ptl + pbr)/2. (c) Expand the width and height as (ŵ, ĥ) = l ∗ (w, h) where l ∼ exp(λ) and λ is a distribution parameter. (d) Identify rectangular masking area of which top-left and bottom-right are p̂tl = ptl − (ŵ, ĥ), and p̂br = pbr + (ŵ, ĥ), respectively. (e) Mask all tokens of text blocks whose centers are allocated in the area. (f) Repeat (a)–(e) until 15% of tokens are masked. The rationale behind using exponential distribution is to convert the geometric distribution used in SpanBERT for a discrete domain into distribution for a continuous domain. Thus, we set λ = −ln(1 − p) where p = 0.2 used in SpanBERT. In addition, we truncated exponential distribution with 1 to prevent an infinity multiplier covering all space of the document. It should be noted that the masking area is expanded from a randomly selected text block since the area should be related to the text sizes and locations to represent text spans in 2D space. Figure 2 compares token- and area-masking on text blocks. 
Finally, the loss function for the area-masked language model is formed as; LAMLM = − ∑ x̂∈A(x) log p(x̂|x\A(x)), (3) where x, A(x), and x\A(x) denote tokens in given image, masked tokens of which text block is located in masking area, and the rest tokens, respectively. Similar to BERT (Devlin et al., 2019), the masked tokens are replaced by [MASK] token 80% of the time, a random token 10% of the time, or an unchanged token 10% of the time. 3.3 SPATIAL DEPENDENCY PARSERS FOR DOWNSTREAM TASKS Key information in a document (e.g. store address in a receipt) is represented as sub-sequences of text blocks. Although BIO tagging has been used to extract the sub-sequences from a text sequence, it cannot represent key texts in a document without the optimal order of text blocks. Therefore, BIO tagging cannot be applied when the optimal order is not available which often can appear in a practical scenario. To deal with the issue, BROS utilizes a decoder of SPADE (Hwang et al., 2020) that can infer a sequence of text blocks by employing a graph-based formulation. BROS supports two downstream tasks: (1) an entity extraction (EE) task and (2) an entity linking (EL) task. The EE identifies a sequence of text blocks for key information (e.g. extract address texts in a receipt) and the EL determines relations between target texts when target text blocks are known (e.g. identify key and value text pairs). For EE tasks, BROS divides the problem into two sub-tasks: starting token classification (Figure 3, a) and subsequent token classification (Figure 3, b). Let t̃i ∈ RH denote the ith token representation from the last Transformer layer of BROS. The starting token classification conducts a token-level tagging determining whether a token is a starting token of target information as follows, fstc(t̃i) = softmax(W stct̃i) ∈ RC+1, (4) where W stc ∈ R(C+1)×H is a linear transition matrix and C indicates the number of target classes. Here, the extra +1 dimension is considered to indicate non-starting tokens. The subsequent token classification is conducted by utilizing pair-wise token representations as follows, fntc(t̃i) = softmax((W ntc-st̃i)>(tntc ⊕W ntc-tt̃1 ⊕ · · · ⊕W ntc-tt̃N ))> ∈ RN+1, (5) where W ntc-s,W ntc-t ∈ RHntc×H are linear transition matrices, Hntc is a hidden feature dimension for the next token classification decoder andN is the maximum number of tokens. Here, tntc ∈ RHntc is a model parameter to classify tokens which do not have a next token or are not related to any class. It has a similar role with an end-of-sequence token, [EOS], in NLP. By solving these two sub-tasks, BROS can identify a sequence of text blocks by finding first tokens and connecting subsequent tokens. For EL tasks, BROS conducts a binary classification for all possible pairs of tokens (Figure 3, c) as follows, frel(t̃i, t̃j) = sigmoid((W rel-st̃i)>(W rel-tt̃j)), (6) where W rel-s,W rel-t ∈ RH rel×H are linear transition matrices andH rel is a hidden feature dimension. Compared to the subsequent token classification, a single token can hold multiple relations with other tokens to represent hierarchical structures of document layouts. For more detail about this graph-based formulation, see Appendix E. 4 KEY INFORMATION EXTRACTION TASKS Here, we describe three EE tasks and three EL tasks from four KIE benchmark datasets. • Form Understanding in Noisy Scanned Documents (FUNSD) (Jaume et al., 2019) is a set of documents with various forms. The dataset consists of 149 training and 50 testing examples. 
FUNSD has both EE and EL tasks. In the EE task, there are three semantic entities: Header, Question, and Answer. In the EL task, the semantic hierarchies are represented as relations between text blocks like header-question and question-answer pairs. • SROIE* is a variant of Task 3 of “Scanned Receipts OCR and Information Extraction” (SROIE)2 that consists of a set of store receipts. In the original SROIE task, semantic contents (Company, Date, Address, and Total price) are generated without explicit connection to the text blocks. To convert SROIE into a EE task, we developed SROIE* by matching ground truth contents with text blocks. We also split the original training set into 526 training and 100 testing examples because the ground truths are not given in the original test set. SROIE* will be publicly available. • Consolidated Receipt Dataset (CORD) (Park et al., 2019) is a set of store receipts with 800 training, 100 validation, and 100 testing examples. CORD consists of both EE and EL tasks. In the EE task, there are 30 semantic entities including menu name, menu price, and so on. In the EL task, the semantic entities are linked according to their layout structure. For example, menu name entities are linked to menu id, menu count, and menu price. • Complicated Table Structure Recognition (SciTSR) (Chi et al., 2019) is a EL task that connects cells in a table to recognize the table structure. There are two types of relations: vertical and horizontal connection between cells. The dataset consists of 12,000 training images and 3,000 test images. Although, these four datasets provide test beds for the EE and EL tasks, they represent the subset of real problems as the optimal order of text blocks is given. In real service, user can submit documents with a complex layout where the serialization of input texts are non-trivial. FUNSD provides the optimal orders of text blocks related to target classes in both training and testing examples. In SROIE*, CORD, and SciTSR, the text blocks are serialized in reading orders. To reveal the serialization problem in the EE and EL tasks, we randomly permuted text blocks of the datasets to remove their order information. We denote the permuted datasets as p-FUNSD, pSROIE*, p-CORD, and p-SciTSR and compare all models on them. For fair comparisons, we will open the permuted datasets. 5 EXPERIMENTS 5.1 EXPERIMENT SETTINGS For pre-training, IIT-CDIP Test Collection 1.03 (Lewis et al., 2006), which consists of approximatley 11M document images, is used but 400K RVL-CDIP dataset4 (Harley et al., 2015) is excluded following LayoutLM. In-house OCR engine was applied to obtain text blocks from unlabeled document images. The main Transformer structure of BROS is the same as BERT. By following BERTBASE, the hidden size, the number of self-attention heads, the feed-forward size, and the number of Transformer layers set to 768, 12, 3072, and 12, respectively. The same pre-training setting with LayoutLM is used for a fair comparison. 2https://rrc.cvc.uab.es/?ch=13 3https://ir.nist.gov/cdip/ 4https://www.cs.cmu.edu/ aharley/rvl-cdip/ BROS is trained by using AdamW optimizer (Loshchilov & Hutter, 2019) with a learning rate of 5e-5 with linear decay. First 10% of the total epochs are used for a warm-up. We initialized weights of BROS with those of BERTBASE and trained BROS on IIT-CDIP for 2 epochs with 80 of batch size, following LayoutLM. The pre-training takes 64 hours on 8 NVIDIA Tesla V100s with Distributed Data Parallel (DDP). 
During fine-tuning, the learning rate is set to 5e-5 and the batch size is set to 16 for all tasks. The number of training epochs or steps is as follows: 100 epochs for FUNSD, 1K steps for SROIE* and CORD, and 7.5 epochs for SciTSR. The hidden feature dimensions, $H^{\mathrm{ntc}}$ and $H^{\mathrm{rel}}$, of the SPADE decoder are set to 128 for FUNSD, 64 for SROIE*, and 256 for CORD and SciTSR.

Although the authors of LayoutLM published their code on GitHub5, the data and script files used for pre-training are not included. For a fair comparison, we made our own implementation, which we refer to as LayoutLM†, using the same pre-training data and script files used for BROS pre-training. We verified LayoutLM† by comparing its performance on FUNSD against the scores reported in Xu et al. (2020). See Appendix A for more information.

5.2 EXPERIMENTAL RESULTS WITH OPTIMAL ORDER INFORMATION

Table 1 shows the results on the four KIE datasets when the optimal order information of text blocks is given. For EL tasks, we applied SPADE decoders to all pre-trained models, i.e., BERT, LayoutLM, and BROS. In all tasks, we observed that BERT shows lower scores than LayoutLM and BROS, presumably due to the absence of spatial information. BROS achieves the highest scores, showing the effectiveness of our approach. Specifically, in FUNSD, BROS shows state-of-the-art performance with large margins of 2.32pp in the EE task and 19.63pp in the EL task. Moreover, it should be noted that, even without relying on visual features, BROS achieves a higher F1 score than the LayoutLM variant that utilizes them (81.21 vs. 79.27; Xu et al., 2020). In SROIE* and CORD, BROS also shows the best performance over all the EE and EL tasks. In SciTSR, LayoutLM and BROS show the importance of pre-training by exceeding, with large margins, baselines that are trained using either only the spatial information of cells (Tabby and DeepDeSRT) or without pre-training on spatial texts (GraphTSR). These results demonstrate that BROS captures better representations of text blocks for KIE downstream tasks.

5https://github.com/microsoft/unilm/tree/master/layoutlm

5.3 EXPERIMENTAL RESULTS WITHOUT OPTIMAL ORDER INFORMATION

Arranging text blocks in an order that humans can understand is another challenging problem (Li et al., 2020). Although most commercial OCR products provide an order for OCR text blocks, they are unable to reconcile the structural formatting of the texts precisely (see Appendix B). Therefore, the experiments in Section 5.2 cannot fully represent real KIE problems because they assume the optimal order of text blocks is given. To expose this challenge, we removed the order information in all datasets by permuting the order of text blocks, as described in Section 4, and investigated how BERT, LayoutLM, and BROS work without the order information. We utilized a SPADE decoder for all models because BIO tagging on these permuted datasets cannot extract a sequence of text blocks in the correct order.

Table 2 shows the results. Due to the loss of the correct order, BERT shows poor performance over all tasks. By utilizing spatial information of text blocks, LayoutLM† shows better performance, but it suffers a large performance degradation compared to the scores computed with the optimal order. On the other hand, BROS shows results comparable to the cases with the optimal order and achieves better performance than BERT and LayoutLM†.
To systematically investigate how the order information affects the performance of the models, we construct variants of FUNSD by re-ordering text blocks with two sorting methods based on their top-left points. The text blocks of xy-FUNSD are sorted by x-coordinate first and then by y-coordinate in ascending order, and those of yx-FUNSD are sorted by y-coordinate first and then by x-coordinate. Table 3 shows performance on p-FUNSD, xy-FUNSD, yx-FUNSD, and the original FUNSD. All models utilize a SPADE decoder for a fair comparison. Interestingly, the performance of LayoutLM† degrades in the order of FUNSD, yx-FUNSD, xy-FUNSD, and p-FUNSD, which follows how reasonable each serialization of text on 2D space is. On the other hand, the performance of BROS is relatively consistent. These results show that BROS is applicable to real KIE problems without relying on an additional serialization method.
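The two orderings can be reproduced with a simple key-based sort; this is an illustrative sketch, and the block schema (a "bbox" field holding [x0, y0, x1, y1]) is an assumption rather than the datasets' actual format.

```python
def sort_blocks(blocks, variant="yx"):
    """Re-order text blocks by their top-left points.

    variant="xy" sorts by x first and then y (xy-FUNSD);
    variant="yx" sorts by y first and then x (yx-FUNSD).
    """
    if variant == "xy":
        key = lambda b: (b["bbox"][0], b["bbox"][1])
    else:
        key = lambda b: (b["bbox"][1], b["bbox"][0])
    return sorted(blocks, key=key)
```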
5.4 ABLATION STUDIES

Table 4 provides the results of ablative experiments computed while changing the pre-training strategy, the 2D position embedding and encoding methods, and the decoder for downstream tasks. The 2D embedding method represents how the spatial information of text blocks is treated, and the 2D encoding method indicates how the 2D embeddings are merged into BERT. The results show that all modifications improve performance when compared with the corresponding methods of LayoutLM. Specifically, the 2D position embedding and its encoding method show large performance gaps of 6.45pp and 31.98pp, respectively. These results indicate the synergy between our 2D continuous position embedding approach and its untied encoding method.

LayoutLM and BROS are initialized with the weights of BERT to utilize the powerful knowledge that BERT learns from large-scale corpora. However, BERT includes its 1D positional embeddings (1D-PE), which might be harmful because they impose a sequence over text blocks even when there is no order information. To investigate the effect of the 1D-PE, we conduct an additional ablative study. BROS without the 1D-PE shows the same F1 scores on both FUNSD and p-FUNSD (70.07), whereas BROS with the 1D-PE shows performance degradation when the dataset loses the optimal order information (81.21 on FUNSD → 75.14 on p-FUNSD). Nevertheless, BROS with the 1D-PE shows better performance on both datasets. This might be because the 1D-PE preserves the token order within a single text block. Based on this result, we decided to incorporate the 1D-PE in our model.

6 CONCLUSION

We present a novel pre-trained language model, BROS, for understanding semi-structured documents. BROS encodes the 2D continuous positions of text blocks and learns natural language from text blocks with an area-driven training strategy. To extract key contexts from text blocks without order information, BROS adopts a graph-based decoder that identifies text sequences for EE tasks and layout relationships for EL tasks. In our extensive experiments on three EE and three EL tasks, BROS consistently shows better performance as well as robustness to perturbed orders of text blocks compared to existing approaches.

A REPRODUCING THE LAYOUTLM

As mentioned in the paper, to compare BROS with LayoutLM in diverse experimental settings, we implemented LayoutLM in our experimental pipeline. Table 5 compares our implementation against the scores reported in Xu et al. (2020). As can be seen, multiple experiments are conducted with varying amounts of pre-training data. Our implementation, referred to as LayoutLM†, shows comparable performance across all settings.

B VISUALIZATION OF SERIALIZED OCR BLOCKS

With developments in the field of machine learning, the performance of commercial OCR has improved over the years. However, it is still hard to trust the ordering of commercial OCR block outputs (Li et al., 2020). Figure 4 shows the gap between the comprehensive reading order and the outputs of commercial OCR. Specifically, the figure contrasts how the words in the OCR results should be serialized (Figure 4a) with how most commercially available OCR technologies order them: unable to reconcile the structural formatting of the text, they simply order the words horizontally (Figure 4b). This cursory example illustrates that, as advanced as commercial OCR solutions have become, there are still ways to improve, and our proposed method is one way in which this can be done.

C ABLATION STUDIES

Here, we provide more ablation studies on the components proposed in the paper. In the following tables, the number of pre-training documents is 512K and the scores (F1) are the average of 5 experimental runs. For all the EL tasks, since BIO tagging cannot address the problem, the SPADE decoder is applied to all models.

C.1 FURTHER ABLATION STUDIES ON ALL DOWNSTREAM TASKS

Table 6 and Table 7 extend Table 4 and show the F1 scores for all downstream EE and EL tasks, measured by changing each component of BROS one by one. From these tables, we can see that the settings of BROS show the best performance in most cases.

C.2 GRADUALLY ADDING PROPOSED COMPONENTS TO THE ORIGINAL LAYOUTLM

To evaluate performance improvements over LayoutLM, we provide the experimental results obtained when gradually adding each new component. Table 8 and Table 9 provide the changes in F1 score for EE and EL tasks, respectively. In most cases, our proposed methods improve performance across all tasks.

C.3 PROPOSED COMPONENTS ON THE ORIGINAL LAYOUTLM

For an apples-to-apples comparison, we provide the performance changes when adding each proposed component to LayoutLM. The results are shown in Table 10 and Table 11. When changing an original module to ours, performance consistently increases except for the case of the positional embedding (sinusoid & linear). Interestingly, when our positional embedding and encoding (untied) are combined, the performance increases dramatically. This result shows the benefit of using our proposed embedding and encoding methods together.

D RESOURCE ANALYSIS

Table 12 shows the resource and speed analysis of LayoutLM and BROS. The F1 scores of LayoutLM are taken from Xu et al. (2020), and all pre-trained models are trained for 1 epoch on the 11M data. As can be seen, BROS shows better performance than LayoutLMLARGE even though it requires fewer parameters and less inference time.

E GRAPH-BASED FORMALIZATION FOR EE AND EL TASKS

Document KIE is a task that extracts structural information from documents. In this paper, we defined the EE task, which identifies text sequences for target classes, and the EL task, which links the heads of the text sequences to determine the structural information of documents. These EE and EL tasks can be interpreted as identifying a directional graph structure between OCR text blocks. In this formalization, all tokens are treated as nodes in a graph and the links between the nodes indicate the structural relationships between tokens in a document. Figure 5 shows examples of FUNSD, SROIE, CORD, and SciTSR with the graph-based formalization.
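To make the graph view concrete, here is a small illustrative sketch of how EE and EL outputs can be stored as directed edges over token indices; the data structure and names are ours, not the SPADE implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class DocumentGraph:
    """Tokens are graph nodes; two kinds of directed edges encode EE and EL outputs."""
    start_class: Dict[int, str] = field(default_factory=dict)   # EE: starting token -> entity class
    next_token: Dict[int, int] = field(default_factory=dict)    # EE: token -> its subsequent token
    links: List[Tuple[int, int]] = field(default_factory=list)  # EL: (source head token, target head token)

    def decode_entities(self) -> List[Tuple[str, List[int]]]:
        """Recover each entity by following next-token edges from its starting token."""
        entities = []
        for start, cls in self.start_class.items():
            seq, cur = [start], start
            while cur in self.next_token and self.next_token[cur] not in seq:  # guard against cycles
                cur = self.next_token[cur]
                seq.append(cur)
            entities.append((cls, seq))
        return entities
```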
F SAMPLE INFERENCE RESULTS OF THE FUNSD EE AND EL TASKS

Figure 6 shows the inference results of LayoutLM and BROS and the ground truth for the same FUNSD image. Even though the document has a complex layout, BROS identifies key contexts and relations reasonably well. In contrast, we observed that LayoutLM tends to link unrelated contexts that are spatially far apart in the layout.
1. What is the focus of the paper, and what are the proposed contributions?
2. What are the strengths and weaknesses of the approach used in the paper?
3. Are there any concerns regarding the novelty and originality of the work?
4. How effective is the proposed positional encoder, and how does it compare to other approaches?
5. What are the limitations of the paper, and what aspects could be improved?
6. Are there any questions or areas of confusion regarding the presentation and content of the paper?
Review
Overall

The authors use BERT together with a 2D position embedding based on a sinusoidal function and a graph-based decoder to improve performance on document information extraction tasks. They pre-train their model (BROS) on a large dataset with 11M documents and then use it for downstream tasks on four smaller datasets. Their models achieve better quantitative results than the provided baselines.

Positive aspects

- The positional encoder based on a sinusoidal function seems to be effective.
- The experiments are sound and follow LayoutLM closely.
- The pre-trained models could be useful.
- The authors reproduced the results of their strongest baseline.
- Better results in all downstream tasks.

Cons and aspects to improve

- My main concern is that the overall contribution seems to be limited. In fact, the original Transformer paper already proposed this kind of embedding. It is good to know that it works for 2D coordinates on the task at hand, though it seems to be more of a marginal improvement on existing work than a standalone contribution.
- It is hard to tell what the standalone contributions of the paper are and what comes from other works. The authors could have provided more in-depth details (visualizations, analysis, examples) to show the main differences between the proposed approach and the baselines (especially LayoutLM). They could also visually demonstrate the advantages of their approach.
- The authors could have plugged their embedding strategy into LayoutLM to understand the impact of that particular component.
- I would like to have seen qualitative examples of model predictions and more examples from the dataset.
- A figure covering the whole process could help readers understand the processing required to train and test such models. The figure in LayoutLM is a good example of that: it covers the entire process and makes it easier to understand the whole architecture.
- In the abstract, the authors say "BROS utilizes a powerful graph-based decoder that can capture the relation between text segments", yet in the text this component (which comes from other work) is only mentioned twice without further detail.
- It is unclear to me how regions of interest are detected in this work (I assume the authors used the same strategy as LayoutLM).
- OCR seems to be an extra step in the preprocessing stage. What should be done if the user does not have the same OCR? What is the impact of a good OCR for training and testing (prediction of new, unseen documents)? "In-house OCR engine was applied": can the authors provide more details on that?
- This line of work could be much stronger if the models comprised the whole process (detection, text extraction, recognition) in an end-to-end manner.

Notes on text and style

There are parts of the manuscript that felt somewhat informal and confusing to me. Some details follow.

- Set a default format for numbers in tables; Table 1 has two distinct decimal number formats.
- Personally, I think it is better to write 5 × 10^-5 rather than 5e-5.
- In the results section there is a typo: "performances with a large margins of 2.32pp in". Also, the text could be more formal; I would avoid using the pp abbreviation.
- "By achieving the best, these results prove that BROS": this sentence can be improved.
- "Moreover, it should be noted that BROS achieves higher f1 score than 79.27 of LayoutLM using visual features".
I think the authors wanted to say that even though BROS does not rely on visual features, it still outperforms LayoutLM, which, in turn, uses visual features.
ICLR
1. What are the novel contributions of the paper regarding the pre-trained language model for document understanding?
2. What are the strengths of the proposed approach, particularly in its ability to address spatial information and relations?
3. What are the concerns regarding the speed and resource consumption of the proposed method?
4. Can the authors provide additional clarification or details regarding the masking strategy in Figure 1(b)?
5. How does the reviewer assess the overall quality and organization of the paper's content?
Review
Summary: The paper proposes a novel pre-trained language model for document understanding named BROS, which adds spatial layout information and a new area-masking strategy. The authors conduct experiments on four public datasets to illustrate the effectiveness of BROS. The new architecture is well suited for understanding texts in documents, which is valuable.

##########################################################################

Reasons for score: Overall, I vote for accepting. I believe that a pre-trained language model based on BERT that encodes spatial information is useful for 2D documents. Hopefully the authors can address my concerns in the rebuttal period (see cons below).

##########################################################################

Pros:
- The paper addresses some limitations which are very important for document understanding: spatial information, spatial relations, and the information of text blocks.
- The paper provides comprehensive experiments, including both qualitative analysis and quantitative results, to show the effectiveness of the proposed model.
- The entire structure is well organized and the formulas are very detailed.

##########################################################################

Cons:
- What are the advantages of BROS in terms of speed and resource consumption? It would be more convincing if the authors could provide more cases in the rebuttal period.
- For Figure 1(b), it would be better to provide more details; it is not very clear to me, e.g., how masking is performed in the red area.

##########################################################################

Questions during the rebuttal period: Please address and clarify the cons above.

##########################################################################
ICLR
Title
A Differentiable Self-disambiguated Sense Embedding Model via Scaled Gumbel Softmax

Abstract
We present a differentiable multi-prototype word representation model that disentangles senses of polysemous words and produces meaningful sense-specific embeddings without external resources. It jointly learns how to disambiguate senses given local context and how to represent senses using hard attention. Unlike previous multi-prototype models, our model approximates differentiable discrete sense selection via a modified Gumbel softmax. We also propose a novel human evaluation task that quantitatively measures (1) how meaningful the learned sense groups are to humans and (2) how well the model is able to disambiguate senses given a context sentence—an evaluation ignored by previous models. Our model not only discovers distinct, interpretable embeddings but is competitive against previous models on word similarity tasks.

1 SENSE-SPECIFIC EMBEDDING

Machine learning models for natural language processing applications often represent words with real-valued vector embeddings. Popular word embedding models such as Word2Vec (Mikolov et al., 2013a;b) and GloVe (Pennington et al., 2014) enabled state-of-the-art results on myriad NLP tasks such as sentiment analysis (Kim, 2014; Tai et al., 2015) and textual entailment (Chen et al., 2017). However, for polysemous words (those with multiple senses), learning a single vector for each word type conflates different meanings (e.g., “A hydrogen bond exists between water molecules.” vs. “Do you want to buy this bond?”). This is not a new problem—Schütze (1998) demonstrates the deficiency of assigning just one vector per word—but it is more pernicious in modern models, as conflated senses can pull semantically unrelated words toward each other in the embedding space (Neelakantan et al., 2014; Pilehvar & Collier, 2016; Camacho-Collados & Pilehvar, 2018). To disentangle distinct senses in word embeddings and learn finer-grained semantic clusters, multi-prototype word embedding models learn multiple sense-specific embeddings for a single word (Section 7).

But what makes a good multisense word embedding? While word similarity is the most common evaluation, it has many detractors (Faruqui et al., 2016; Gladkova & Drozd, 2016): similarity is subjective and hard to differentiate from word relatedness. Moreover, word similarity tasks—with the exception of Stanford Contextual Word Similarity (Huang et al., 2012, SCWS)—ignore polysemous cases or are tied to specific sense inventories (Boyd-Graber et al., 2006). More importantly, these evaluations ignore a key component of learning sense inventories: do they make sense to a human? Previous multisense embedding papers present nearest neighbors to claim their representations are interpretable and useful. Like topic models, these implicit interpretability claims need to be rigorously verified. In Section 6, we adapt techniques for evaluating topic models (Chang et al., 2009) to measure whether learned sense groups are internally coherent and whether humans can consistently match a learned sense vector to a word in context. Just like topic models, word embedding models that win conventional evaluations do not always make sense to humans. We present a simple method that not only correlates well with traditional word similarity evaluations (Section 5) but also discovers interpretable (measured by human evaluations) sense embeddings (Section 6).
Our model extends the Skip-Gram Word2Vec model and simultaneously learns (1) automatic sense induction given local context and (2) sense-specific embeddings. To learn disentangled sense representations (i.e., avoid sense mixing), we approximate hard attention and preserve differentiability via a scaled variant of the Gumbel Softmax function (Section 3.2).
Figure 1: Network structure with an example of our GASI model, which learns a set of global context embeddings C and a set of sense embeddings S. (In the example sentence “You only live twice, Mr. Bond . . .”, the center word “bond” is disambiguated among chemical, 007, and financial senses.)

This modeling contribution—Scaled Gumbel Softmax—is critical for disambiguating senses.

2 FOUNDATIONS: SKIP-GRAM AND GUMBEL SOFTMAX

Our model extends Skip-Gram Word2Vec (Mikolov et al., 2013a;b), which jointly learns word embeddings $W \in \mathbb{R}^{|V| \times d}$ and context embeddings $C \in \mathbb{R}^{|V| \times d}$. More specifically, given a vocabulary $V$ and embedding dimension $d$, it maximizes the likelihood of the context words $c^i_j$ that surround a given center word $w_i$ in a context window $\tilde{c}_i$,

$$J(W, C) \propto \sum_{w_i \in V} \sum_{c^i_j \in \tilde{c}_i} \log P(c^i_j \mid w_i; W, C), \qquad (1)$$

where $P(c^i_j \mid w_i)$ is estimated by a softmax over all possible context words, i.e., the vocabulary,

$$P(c^i_j \mid w_i; W, C) = \frac{\exp\big(c^{i\,\top}_j w_i\big)}{\sum_{c \in V} \exp\big(c^{\top} w_i\big)}. \qquad (2)$$

In practice, $\log P(c^i_j \mid w_i)$ is approximated by negative sampling to reduce computational cost.

2.1 GUMBEL SOFTMAX

The Gumbel softmax (Jang et al., 2016; Maddison et al., 2016) approximates the sampling of discrete random variables. Given a discrete random variable $X$ with $P(X = k) \propto \alpha_k$, $\alpha_k \in (0, \infty)$, the Gumbel-max trick (Gumbel & Lieblein, 1954; Maddison et al., 2014) refactors the sampling of $X$ into

$$X = \arg\max_k (\log \alpha_k + g_k), \qquad (3)$$

where the Gumbel noise is $g_k = -\log(-\log(u_k))$ and the $u_k$ are i.i.d. samples drawn from $\mathrm{Uniform}(0, 1)$. The Gumbel softmax approximates sampling $\mathrm{one\_hot}(\arg\max_k(\log \alpha_k + g_k))$ by

$$y_k = \mathrm{softmax}\big((\log \alpha_k + g_k)/\tau\big). \qquad (4)$$
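As a quick illustration of Eqs. (3)-(4), the following sketch draws Gumbel noise and computes the relaxed one-hot sample. It is a generic, minimal illustration of the trick, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(log_alpha, tau=0.5):
    """Relaxed categorical sample given unnormalized log-probabilities log_alpha.

    Eq. (3): argmax_k(log alpha_k + g_k) is an exact categorical sample;
    Eq. (4): the temperature-tau softmax is its differentiable relaxation.
    """
    u = torch.rand_like(log_alpha)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)    # g_k ~ Gumbel(0, 1)
    return F.softmax((log_alpha + g) / tau, dim=-1)  # y_k: sums to 1, peaks near the argmax
```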
Sense Attention in Objective Function. Assuming a center word w_i has senses {s_1^i, s_2^i, ..., s_K^i}, the original Skip-Gram likelihood can be written as a marginal distribution over all senses of w_i with the sense induction probability P(s_k^i | w_i). We focus on sense disambiguation given the local context c̃_i and estimate

P(c_j^i \mid w_i) = \sum_{k=1}^{K} P(c_j^i \mid s_k^i) P(s_k^i \mid w_i) \approx \sum_{k=1}^{K} P(c_j^i \mid s_k^i) \underbrace{P(s_k^i \mid w_i, \tilde{c}_i)}_{\text{attention}}.   (5)

Replacing P(c_j^i | w_i) in Equation 1 with Equation 5 gives our objective function

J(S, C) \propto \sum_{w_i \in V} \sum_{c_j^i \in \tilde{c}_i} \log \sum_{k=1}^{K} P(c_j^i \mid s_k^i) P(s_k^i \mid w_i, \tilde{c}_i).   (6)

Lower Bound the Objective for Negative Sampling. Like the Skip-Gram objective (Equation 2), we model the likelihood of a context word given the center sense P(c_j^i | s_k^i) using a softmax,

P(c_j^i \mid s_k^i) = \frac{\exp(\mathbf{c}_j^{i\top} \mathbf{s}_k^i)}{\sum_{j=1}^{|V|} \exp(\mathbf{c}_j^\top \mathbf{s}_k^i)},   (7)

where the bold symbol s_k^i is the vector representation of sense s_k^i from S, and c_j is the context embedding of word c_j from C. Computing the softmax over the vocabulary is time-consuming. We want to adopt negative sampling to approximate log P(c_j^i | s_k^i), which does not appear explicitly in our objective function (Equation 6).² However, given the concavity of the logarithm, we can apply Jensen's inequality,

\log \sum_{k=1}^{K} P(c_j^i \mid s_k^i) P(s_k^i \mid w_i, \tilde{c}_i) \ge \sum_{k=1}^{K} P(s_k^i \mid w_i, \tilde{c}_i) \log P(c_j^i \mid s_k^i),   (8)

and create a lower bound of the objective. Maximizing this lower bound gives us a tractable objective,

J(S, C) \propto \sum_{w_i \in V} \sum_{c_j^i \in \tilde{c}_i} \sum_{k=1}^{K} P(s_k^i \mid w_i, \tilde{c}_i) \log P(c_j^i \mid s_k^i),   (9)

where log P(c_j^i | s_k^i) is estimated by negative sampling (Mikolov et al., 2013b),

\log \sigma(\mathbf{c}_j^{i\top} \mathbf{s}_k^i) + \sum_{j=1}^{n} \mathbb{E}_{c_j \sim P_n(c)} \left[ \log \sigma(-\mathbf{c}_j^\top \mathbf{s}_k^i) \right].   (10)

² Deriving the negative sampling requires the logarithm of a softmax (Goldberg & Levy, 2014).

Modeling Sense Attention. We can model the attention term, the contextual sense induction distribution, with soft attention; we call the resulting model soft-attention sense induction (SASI). Although it is only a stepping stone to our final model, we compare against it in our experiments because it helps isolate the contribution of hard attention. In SASI, the sense attention is conditioned on the entire local context c̃_i with a softmax:

P(s_k^i \mid w_i, \tilde{c}_i) = \frac{\exp(\bar{\mathbf{c}}_i^\top \mathbf{s}_k^i)}{\sum_{k'=1}^{K} \exp(\bar{\mathbf{c}}_i^\top \mathbf{s}_{k'}^i)},   (11)

where c̄_i is the mean of the context vectors in c̃_i.

3.2 SCALED GUMBEL SOFTMAX FOR SENSE DISAMBIGUATION

To avoid sense mixing and learn distinguishable sense representations, we implement hard attention in our full model, GASI. To preserve differentiability and circumvent the difficulties of training with reinforcement learning (Sutton & Barto, 1998), we apply the reparameterization trick with the Gumbel softmax (Section 2.1) to our sense attention function (Equation 11), giving a continuous relaxation.

Vanilla Gumbel Attention. The discrete sense sampling from Equation 11 can be refactored as

z_i = \text{one\_hot}(\arg\max_k (\bar{\mathbf{c}}_i^\top \mathbf{s}_k^i + g_k)),   (12)

and the hard attention is approximated with

y_k^i = \mathrm{softmax}((\bar{\mathbf{c}}_i^\top \mathbf{s}_k^i + g_k)/\tau).   (13)

Scaled Gumbel Softmax for Sense Disambiguation. The Gumbel softmax learns a flat distribution over senses even with low temperatures (Figure 2): the dot product c̄_i^⊤ s_k^i is too small compared to the Gumbel noise g_k (Figure 3).³ Thus we use a scaling factor β to reduce the randomness⁴ and tune it as a hyperparameter,⁵

\gamma_k^i = \mathrm{softmax}((\bar{\mathbf{c}}_i^\top \mathbf{s}_k^i + \beta g_k)/\tau).   (14)

We use GASI-β to identify the GASI model with the scaling factor.
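A minimal PyTorch sketch of Equations 12–14, assuming the logits are the dot products between the mean context vector and the sense vectors (as produced by the sense_logits helper above); the function name and default hyperparameter values are illustrative, not the authors' implementation.

```python
import torch

def scaled_gumbel_softmax_attention(sense_logits, beta=0.4, tau=0.5):
    """Continuous relaxation of discrete sense selection (Eq. 12-14).

    sense_logits: (batch, K) dot products c-bar_i . s_k^i
    beta: scaling factor on the Gumbel noise; beta = 1 recovers the vanilla Gumbel softmax (Eq. 13)
    tau: softmax temperature
    """
    uniform = torch.rand_like(sense_logits).clamp_(1e-10, 1.0 - 1e-10)
    gumbel = -torch.log(-torch.log(uniform))   # g_k = -log(-log(u_k)), u_k ~ Uniform(0, 1)
    return torch.softmax((sense_logits + beta * gumbel) / tau, dim=-1)
```

With a small beta, the learned logits dominate the noise, so the attention concentrates on one sense; with beta = 1 the noise swamps the small dot products and the distribution stays flat, which is the failure mode the scaling factor is meant to fix.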
This modification is critical for learning distinguishable senses (Figure 2, Table 1, and Table 5).

Final Objective Function. The objective function of our GASI-β model is

J(S, C) \propto \sum_{w_i \in V} \sum_{w_c \in \tilde{c}_i} \sum_{k=1}^{K} \mathrm{softmax}((\bar{\mathbf{c}}_i^\top \mathbf{s}_k^i + \beta g_k)/\tau) \log P(w_c \mid s_k^i).   (15)

³ With float32 precision, the saturation of log σ(·) and vanishing gradients result in a small range of c̄_i^⊤ s_k^i.
⁴ Normalizing c̄_i^⊤ s_k^i or directly using log P(s_k^i | w_i, c̃_i) results in a similar outcome.
⁵ Learning β instead of fixing it as a hyperparameter does not successfully disambiguate senses.

4 TRAINING SETTINGS

For fair comparisons, we try to remain consistent with previous work (Huang et al., 2012; Neelakantan et al., 2014; Lee & Chen, 2017) in all aspects of training. In particular, we train GASI on the same April 2010 Wikipedia snapshot (Shaoul C., 2010) with 1B tokens and the same vocabulary released by Neelakantan et al. (2014); we set the number of senses K = 3 and dimension d = 300 for each word unless otherwise specified. More details are in Appendix A. We fix the temperature τ = 0.5⁶ and tune the scaling factor β over {0.1, 0.2, ..., 0.9} on the AvgSimC measure for the contextual word similarity task (Section 5). The optimal scaling factor β is 0.4. If not reprinted, numbers for competing models are either computed with pre-trained embeddings released by the authors or trained on released code.⁷

⁶ This is similar to the experimental settings for the Gumbel softmax in Maddison et al. (2016).
⁷ We adopt the numbers for Li & Jurafsky (2015) from Lee & Chen (2017) and tune the PDF-GM model (Athiwaratkun et al., 2018) on the same 1B corpus and vocabulary as previous work using https://github.com/benathi/multisense-prob-fasttext with the suggested hyperparameters, selecting the best results.

5 WORD SIMILARITY EVALUATION

We first compare our GASI and GASI-β models with previous work on standard word similarity tasks before turning to interpretability experiments. Each task has word pairs with a similarity/relatedness score. For evaluation, we measure Spearman's rank correlation ρ (Spearman, 1904) between word embedding similarity and the gold similarity judgements: higher scores imply the model captures semantic similarities consistent with the trusted similarity scores.

Contextual Word Similarity. Tailored for sense embedding evaluation, Stanford Contextual Word Similarities (Huang et al., 2012, SCWS) has 2003 word pairs and similarity scores with sentential context. Moreover, the word pairs and their contexts reflect homonymous and polysemous words. Therefore, we use this dataset to tune our hyperparameters. To compute word similarity with senses we use two metrics from Reisinger & Mooney (2010) that take context and sense disambiguation into account: MaxSimC computes the cosine similarity cos(s_1^*, s_2^*) between the two most probable senses s_1^* and s_2^*, each of which maximizes P(s_k^i | w_i, c̃_i); AvgSimC is the similarity averaged over all combinations of senses, weighted by the attention probabilities, \sum_{i=1}^{K} \sum_{j=1}^{K} P(s_i^1 \mid w_1, \tilde{c}_1) P(s_j^2 \mid w_2, \tilde{c}_2) \cos(s_i^1, s_j^2). We compare variants of our model with multi-prototype sense embedding models (Table 1), including two previous state-of-the-art models: the clustering-based Multi-Sense Skip-Gram model (Neelakantan et al., 2014, MSSG) on the AvgSimC metric and the RL-based Modularizing Unsupervised Sense Embeddings (Lee & Chen, 2017, MUSE) on MaxSimC.
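The two SCWS metrics just defined can be sketched in a few lines of NumPy; the variable names and the cosine helper below are illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def max_sim_c(senses1, p1, senses2, p2):
    """MaxSimC: cosine similarity between the most probable sense of each word.
    senses*: (K, d) sense vectors; p*: (K,) attention probabilities P(s_k | w, context)."""
    return cosine(senses1[np.argmax(p1)], senses2[np.argmax(p2)])

def avg_sim_c(senses1, p1, senses2, p2):
    """AvgSimC: similarity over all sense pairs, weighted by the attention probabilities."""
    return sum(p1[i] * p2[j] * cosine(senses1[i], senses2[j])
               for i in range(len(p1)) for j in range(len(p2)))
```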
Table 1: Spearman's correlation 100ρ on SCWS (trained on 1B tokens, 300d vectors except for Huang et al.).
Model                     MaxSimC   AvgSimC
Huang et al. (2012)-50d     26.1      65.7
MSSG-6K                     57.3      69.3
MSSG-30K                    59.3      69.2
Tian et al. (2014)          63.6      65.4
Li & Jurafsky (2015)        66.6      66.8
Qiu et al. (2016)           64.9      66.1
Bartunov et al. (2016)      53.8      61.2
MUSE Boltzmann              67.9      68.7
SASI                        55.1      67.8
GASI (w/o scaling)          68.2      68.3
GASI-β                      66.4      69.5

Table 2: Unsupervised sense selection accuracy on Word in Context.
Model                  Accuracy (%)
Unsupervised multi-prototype models
  MSSG-30K               54.00
  MUSE Boltzmann         52.14
  GASI-β                 55.27
Multi-prototype models with external lexical resources
  DeConf                 58.55
  SW2V                   54.56

All three are better than the baseline Skip-Gram model (65.2 using the word embedding). GASI better captures similarity than SASI, corroborating that hard attention aids word sense selection. GASI without scaling (β) has the best MaxSimC; however, it learns a flat sense distribution (Figure 2). GASI-β has the best AvgSimC and a competitive MaxSimC. While MUSE has a higher MaxSimC than GASI-β, it fails to distinguish senses as well (Figure 4, Section 6). The Probabilistic FastText Gaussian Mixture (Athiwaratkun et al., 2018, PDF-GM) is state of the art on multiple non-contextual word similarity tasks (Table 3). Since it has no sense selection module given context, we evaluate PDF-GM on MaxSim (Equation 16), where it scores 66.4. Our GASI-β has the same MaxSim score and better correlation on AvgSimC (69.5).

Word Sense Selection in Context. SCWS evaluates models' ability at sense selection only indirectly. We therefore further compare GASI-β with the previous state of the art, MSSG-30K and MUSE, on the Word in Context dataset (Pilehvar & Camacho-Collados, 2018, WiC), which requires the model to identify whether a word has the same sense in two contexts. Lacking ground truth for the development set,⁸ and to reduce variance in training and focus on evaluating the sense selection module, we use an evaluation suited for unsupervised models: if the model selects different sense vectors given the two contexts, we mark the word as having different senses.⁹ For MUSE, MSSG, and GASI-β, we use each model's sense selection module; for DeConf (Pilehvar & Collier, 2016) and SW2V (Mancini et al., 2017), we follow Pilehvar & Camacho-Collados (2018) and Pelevina et al. (2016) by selecting the sense vectors closest to the context vector. Results on DeConf are comparable to supervised results (59.4 ± 0.7). Our GASI-β has the best result (Table 2) apart from DeConf itself, which uses the same sense inventory (Miller & Fellbaum, 1998, WordNet) used to build WiC. This evaluation, however, does not reflect the interpretability of the senses themselves. We address this in Section 6.

Non-Contextual Word Similarity. To evaluate the semantics captured by the sense-specific embeddings, we compare the models on the non-contextual word similarity datasets RG-65 (Rubenstein & Goodenough, 1965), SimLex-999 (Hill et al., 2015), WS-353 (Finkelstein et al., 2002), MEN-3k (Bruni et al., 2014), MC-30 (Miller & Charles, 1991), YP-130 (Yang & Powers, 2006), MTurk-287 (Radinsky et al., 2011), MTurk-771 (Halawi et al., 2012), and RW-2k (Luong et al., 2013). Similar to Lee & Chen (2017) and Athiwaratkun et al. (2018), we compute word similarity from the senses by MaxSim (Reisinger & Mooney, 2010), which maximizes the cosine similarity over all sense pairs and does not require local contexts,

\mathrm{MaxSim}(w_1, w_2) = \max_{1 \le i \le K,\; 1 \le j \le K} \cos(\mathbf{s}_i^1, \mathbf{s}_j^2).   (16)

GASI-β has better correlation on three datasets, is competitive on the rest (Table 3), and remains competitive without scaling. GASI is better than MUSE, the other hard-attention multi-prototype model, on six datasets and worse on three.
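Returning to the WiC evaluation above, the unsupervised decision rule can be sketched as follows: if the sense attention picks different sense indices for the two contexts, the word is predicted to carry different senses. Scoring senses by their dot product with the mean context vector is an assumption based on Equation 11 (no noise at test time); the function below is illustrative rather than the released implementation.

```python
import numpy as np

def wic_prediction(sense_vectors, context_vec_1, context_vec_2):
    """Unsupervised WiC rule: different argmax senses across the two contexts => 'different sense'.

    sense_vectors: (K, d) sense vectors of the target word
    context_vec_*: (d,) mean context vectors of the two sentences
    """
    k1 = int(np.argmax(sense_vectors @ context_vec_1))
    k2 = int(np.argmax(sense_vectors @ context_vec_2))
    return k1 != k2   # True => predict the word is used in different senses
```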
Our model can reproduce word similarities as well as or better than existing models through our sense selection.

⁸ Unavailable as of November 2018 at https://pilehvar.github.io/wic/.
⁹ For words not in the vocabulary, or with only one learned sense, we choose randomly.

6 CROWDSOURCING EVALUATION

GASI can capture word similarity (Section 5), but do the learned representations make sense? Could a human use them to help build a dictionary? If you show a human the senses, can they understand why a model would assign a sense to that context? In this section we evaluate whether the representations make sense to human consumers of multisense models.

Qualitative Analysis. Previous papers use nearest neighbors of a few examples to qualitatively argue that their models have captured meaningful senses of words. We also give an example in Figure 4, which provides an intuitive view of how the learned senses are clustered by visualizing the nearest neighbors of the word "bond" using a t-SNE projection (Maaten & Hinton, 2008). Our proposed model (right) disentangles the three senses of "bond" clearly and learns three distinct sense vectors. However, such examples can be cherry-picked and lack standards. This problem also bedeviled topic modeling until the introduction of rigorous human evaluation (Chang et al., 2009). We adapt both aspects of Chang et al.'s evaluations: word intrusion (Schnabel et al., 2015) to evaluate whether individual senses are coherent, and topic intrusion (rather, sense intrusion in this case) to evaluate whether humans agree with models' sense assignments in context. Both crowdsourcing tasks collect human inputs on Figure-Eight. We compare our models with two previous state-of-the-art multi-prototype sense embedding models that disambiguate senses given local context, i.e., MSSG (Neelakantan et al., 2014) and MUSE (Lee & Chen, 2017).¹⁰

¹⁰ MSSG has two settings; we run the human evaluation with MSSG-30K, which has the higher correlation with MaxSimC on SCWS.

6.1 WORD INTRUSION FOR SENSE COHERENCE

Schnabel et al. (2015) suggest that a "good" word embedding should have coherent neighbors and evaluate coherence by word intrusion. They present crowdworkers with four words: three are close in embedding space while one is an "intruder". If the embedding makes sense, contributors will easily spot the word that "does not belong". Similarly, we examine the coherence of the ten nearest neighbors of the senses used in the contextual word sense selection task (Section 6.2) and replace one neighbor with an "intruder" (Figure 5). We generate three intruders for each sense and collect three judgements per intruder. We consider the "intruder" to be correctly selected if at least two judgements are correct.

Figure 5: Word intrusion task prompt.

Table 4: Word intrusion evaluations on the top ten nearest neighbors of sense embeddings.
Model        Sense-level accuracy   Judgement-level accuracy   Agreement
MUSE               67.33                   62.89                 0.73
MSSG-30K           69.33                   66.67                 0.76
GASI-β             71.33                   67.33                 0.77

Figure 6: Contextual word sense selection prompt. The crowdworker reads a sentence containing the underlined target word ("Vandiver mentions the $100 million highway bond issue approved earlier in the ...") and chooses the sense group that fits best; each group is shown as a list of nearest neighbors (e.g., 007, octopussy, moneypenny, goldfinger, ...; atom, bonding, covalent, hydrogen, ...; mortgage-backed, securities, coupon, debenture, ...).

Like Chang et al. (2009), we want the "intruder" to be not too different in frequency from the target set but not too similar semantically. For sense s_m^i of word type w_i, we randomly select a word from the neighbors of another sense s_n^i of w_i, but with a low threshold: any word with cosine similarity larger than 0.0 can be viewed as a neighbor.
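A small sketch of this intruder-selection heuristic; the neighbor lookup is an assumed helper and the filtering details are illustrative.

```python
import random

def pick_intruder(other_senses, neighbors_of, true_neighbors, rng=random):
    """Pick an 'intruder' for a sense's nearest-neighbor list.

    other_senses: the other senses of the same word type
    neighbors_of(sense): assumed helper returning words with cosine similarity > 0.0 to that sense
    true_neighbors: the ten nearest neighbors of the target sense shown to crowdworkers
    """
    candidates = [w for s in other_senses
                  for w in neighbors_of(s)
                  if w not in true_neighbors]
    return rng.choice(candidates) if candidates else None
```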
Results and Analysis. All models have comparable accuracy on the intrusion task (Table 4). GASI-β learns senses with the highest coherence among the top ten nearest neighbors, while MUSE learns more sense mixtures.

Inter-rater Agreement. We use the aggregated confidence score provided by Figure-Eight to estimate the level of agreement between multiple contributors.¹¹ The agreement is high for all models, and GASI-β has the highest agreement, suggesting that the senses learned by GASI-β are easier to interpret.

¹¹ https://success.figure-eight.com/hc/en-us/articles/201855939-How-to-Calculate-a-Confidence-Score

6.2 CONTEXTUAL WORD SENSE SELECTION

The previous task measures whether individual senses are coherent. In this task, we measure whether the senses learned by sense embedding models make sense to humans and evaluate the models' ability to disambiguate senses in context.

Task Description. Given a target word in context, we ask a crowdworker to select which sense group best fits the sentence. Each sense group is described by its top ten distinct nearest neighbors (Figure 6).¹²

Data Collection. We select fifty nouns with five sentences each from SemCor 3.0 (Miller et al., 1994). We first filter out all word types with fewer than ten sentences and select the fifty most polysemous nouns in WordNet (Miller & Fellbaum, 1998) among the remaining words. For each noun, we randomly select five sentences.

Metrics. For each model, we collect three judgements for each question. We consider a model correct if at least two crowdworkers select the same sense as the model. We also report the probability P assigned by the model to the human choices, indicating the model's confidence in sense selection. P = 1/3 indicates the model learns a flat, uniform sense induction distribution and is unable to disambiguate senses.

Sense Disambiguation and Interpretability. If humans consistently pick the same sense as the model, then: 1) humans can interpret the nearest-neighbor words (as measured by the previous experiment); 2) the senses are distinguishable to humans; and 3) the humans' choice is consistent with the model's.

Results and Analysis. GASI-β selects senses that are most consistent with humans; it has the highest accuracy and assigns the largest probability to the human choices (Table 5). Thus, GASI-β produces sense embeddings that are both more interpretable and more distinguishable. GASI without a scaling factor, however, has low consistency and a flat sense distribution.

Inter-rater Agreement. We use the confidence score computed by Figure-Eight to estimate rater agreement for this task as well. GASI-β achieves the highest human-model agreement, while both MUSE and GASI without scaling have the lowest.

¹² We shuffle the choices for questions with the same target word.

Table 5: Human-model consistency on contextual word sense selection; P is the average probability assigned by the model to the human choices. GASI-β is most consistent with humans.
Model          Accuracy    P     Agreement
MUSE             28.0     0.33     0.68
MSSG-30K         44.5     0.37     0.73
GASI (no β)      33.8     0.33     0.68
GASI-β           50.0     0.48     0.75

Table 6: Similarities of human and model choices when they disagree (error) vs. similarities between the senses that both human and model select and the other senses of the same word (correct). Humans agree with the model when the senses are distinct.
                                 MUSE   MSSG   GASI-β
Word overlaps         correct    4.78   0.39    1.52
                      error      5.43   0.98    6.36
Cosine sim (GloVe)    correct    0.86   0.33    0.36
                      error      0.88   0.57    0.81
Error Analysis. Next, we attempt to answer why crowdworkers disagree with the model even though they can interpret most senses (as measured by the word intrusion task, Table 4). Is it that the model has learned duplicate senses that neither the users nor the model can distinguish, or is it that crowdworkers agree with each other but disagree with the model? The former relates to the model's ability to learn humanly distinguishable senses; the latter relates to the model's ability at contextual sense selection. Two trends reveal that duplicated senses that are not distinguishable to humans are one of the main causes of human-model disagreement. First, users agree with the model when the senses are distinct (Table 6, correct), while disagreement rises with more similar senses (Table 6, error); second, more distinct senses allow higher inter-rater agreement (Figure 7). We measure distinctness both by counting the number of shared nearest neighbors and by the average cosine similarity of GloVe embeddings.¹³ Specifically, MUSE learns duplicate senses for most words, preventing users from choosing appropriate senses and resulting in random human-model agreement. GASI-β learns some duplicated senses and some distinguishable senses. MSSG appears to learn the least similar senses, but they are not distinguishable enough for humans. For MSSG, small neighbor overlaps do not necessarily help humans distinguish between senses: users disagree with each other (agreement 0.33) even when the number of overlaps is very small (Figure 7). An intuitive example is shown in Table 7, which demonstrates the necessity of human evaluation. If we use rater agreement to measure how distinguishable the learned senses are to humans, GASI-β learns the most distinguishable senses (histogram in Figure 8). Figure 8 also shows that the model is more likely to agree with humans when humans agree more with each other (as a result of more distinct senses), i.e., human-model consistency correlates with rater agreement. MSSG disagrees with humans more even when raters agree with each other, indicating worse sense selection ability.

¹³ Different models learn different representations; we use GloVe for a uniform basis of comparison.

6.3 WORD SIMILARITY VS. SENSE DISAMBIGUATION

The evaluation results on word similarity tasks (Section 5) and the human evaluations (Section 6) are inconsistent for several models. GASI, GASI-β, and MUSE are all competitive in word similarity (Table 1 and Table 3), but only GASI-β also does well in the human evaluations (Table 5). Both GASI without scaling and MUSE fail to learn distinguishable senses and cannot disambiguate senses given local context. High word similarity scores do not necessarily indicate "good" sense embedding quality; our human evaluation, contextual word sense selection, is complementary.

7 RELATED WORK

Schütze (1998) introduces context-group discrimination for senses and uses the centroid of context vectors as a sense representation. Other work induces senses by context clustering (Purandare & Pedersen, 2004) or probabilistic mixture models (Brody & Lapata, 2009). Reisinger & Mooney (2010) first introduce multiple sense-specific vectors for each word, inspiring other multi-prototype sense embedding models.
Figure 7: More distinct senses within each word lead to higher inter-rater agreement (average in-word sense similarities, measured by neighbor overlaps and GloVe cosine similarity, plotted against rater agreement for MSSG, MUSE, and GASI-β).

Generally, to address polysemy in word embeddings, some previous work trains on annotated sense corpora (Iacobacci et al., 2015) or external sense inventories (Labutov & Lipson, 2013; Chen et al., 2014; Jauhar et al., 2015; Chen et al., 2015; Wu & Giles, 2015; Pilehvar & Collier, 2016; Mancini et al., 2017); Rothe & Schütze (2015; 2017) extend word embeddings to lexical resources without training; others induce senses via multilingual parallel corpora (Guo et al., 2014; Šuster et al., 2016; Ettinger et al., 2016).

We contrast our GASI with unsupervised monolingual multi-prototype models along two dimensions: sense induction methodology and differentiability. 1) Huang et al. (2012) and Neelakantan et al. (2014) induce senses by context clustering; Tian et al. (2014) model a corpus-level sense distribution; Li & Jurafsky (2015) model the sense assignment as a Chinese Restaurant Process; Qiu et al. (2016) induce senses by minimizing an energy function on a context-dependent network; Bartunov et al. (2016) model the sense assignment as a stick-breaking process; Nguyen et al. (2017) model the sense embeddings as a weighted combination of topic vectors, with weights pre-computed by topic models; Athiwaratkun & Wilson (2017) and Athiwaratkun et al. (2018) model word representations as Gaussian mixture embeddings where each Gaussian component captures a different sense; Lee & Chen (2017) compute the sense distribution with a separate set of sense induction vectors. In contrast, our GASI marginalizes the likelihood of contexts over senses and induces senses from local context vectors; the most similar sense selection module is a bilingual model (Šuster et al., 2016), except that it does not introduce a lower bound for negative sampling but uses weighted embeddings, which results in more sense mixture. 2) Most sense selection models are non-differentiable and select senses discretely, with two exceptions: Šuster et al. (2016) use weighted vectors over senses, and Lee & Chen (2017) implement hard attention with reinforcement learning to mitigate the non-differentiability. In contrast, GASI keeps full differentiability by reparameterization and approximates discrete sense sampling with a scaled Gumbel softmax.

8 CONCLUSION

The goal of multi-sense word embeddings is not just to win word sense evaluation datasets; rather, they should also describe language: given millions of tokens of a language, what are the patterns in the language that can help a lexicographer or linguist in day-to-day tasks like building dictionaries or understanding semantic drift? Our differentiable Gumbel Attention Sense Induction (GASI) offers the best of both worlds: comparable word similarities while also learning more distinguishable, interpretable senses.

A TRAINING DETAILS

During training, we fix the window size to five and the dimensionality of the embedding space to 300 for comparison with previous work. We initialize both sense and context embeddings randomly within U(−0.5/dim, 0.5/dim), as in Word2Vec. We set the initial learning rate to 0.01; it is decreased linearly until training concludes after 5 epochs. The batch size is 512, and we use five negative samples per center word-context pair, as suggested by Mikolov et al. (2013a). The subsample threshold is 1e-4. We train our model on a GeForce GTX 1080 Ti, and our implementation (using pytorch 3.0) takes ∼6 hours to train one epoch on the April 2010 Wikipedia snapshot (Shaoul C., 2010) with a 100k vocabulary. For comparison, our implementation of Skip-Gram on the same framework takes ∼2 hours per epoch.
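The training settings of Section 4 and Appendix A can be collected into a single configuration block. All values below restate numbers reported in the paper; the dictionary form and key names are only an illustrative convention.

```python
# Hyperparameters reported in Section 4 and Appendix A (values from the paper; structure illustrative).
GASI_CONFIG = {
    "corpus": "April 2010 Wikipedia snapshot, ~1B tokens",
    "vocab_size": 100_000,
    "num_senses_K": 3,
    "dim": 300,
    "window_size": 5,
    "negative_samples": 5,
    "subsample_threshold": 1e-4,
    "batch_size": 512,
    "initial_lr": 0.01,        # decayed linearly over training
    "epochs": 5,
    "temperature_tau": 0.5,
    "scaling_beta": 0.4,       # tuned on AvgSimC over {0.1, ..., 0.9}
}
```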
B NUMBER OF SENSES

For simplicity and consistency with most previous work, we present our model with a fixed number of senses K.

B.1 POST-TRAINING PRUNING

For words that do not have multiple senses, or whose senses mostly appear very infrequently in the corpus, our model (like many previous models) learns duplicate senses. We can easily remove such duplicates by pruning the learned sense embeddings with a threshold λ. Specifically, for each word w_i, if the cosine distance between any pair of its sense embeddings (s_m^i, s_n^i) is smaller than λ, we consider them duplicates. After discovering all duplicate pairs, we start pruning with the sense s_k^i that has the most duplications and keep pruning with the same strategy until no more duplicates remain.

Model-specific Pruning. We estimate a model-specific threshold λ from the learned embeddings instead of setting it arbitrarily; this pruning method is therefore also applicable to other sense embedding models. We first sample 100 words from the negative sampling distribution over the vocabulary. Then, we retrieve the five nearest neighbors (from all senses of all words) of each sense of each sampled word. If one of a word's own senses appears as a nearest neighbor, we append the distance between them to a sense duplication list D_dup. For other nearest neighbors, we append their distances to the word neighbor list D_nn. After populating the two lists, we want a threshold that prunes away all of the sense duplicates while separating sense duplications from other, distinct neighbor words. Thus, we compute

\lambda = \frac{1}{2} \left( \mathrm{mean}(D_{dup}) + \mathrm{mean}(D_{nn}) \right).   (17)

Table 1 compares the sense embeddings after pruning with the original model on the Stanford Contextual Word Similarities (SCWS) task (Huang et al., 2012). Both AvgSimC and MaxSimC with post-pruning embeddings decrease only slightly compared to GASI-0.4.

B.2 NUMBER OF SENSES VS. WORD FREQUENCY

It is a common assumption that more frequent words have more senses. Figure 1 shows a histogram of the number of senses left for words ranked by their frequency, and the results agree with this assumption. Generally, the model learns more senses for highly frequent words, except for the most frequent ones. The most frequent words are usually stopwords, such as "the", "a", and "our", which have only one common meaning. Moreover, we compare our model initialized with three senses (GASI-0.4, K = 3) against one initialized with five (GASI-0.4, K = 5). Initialized with a larger number of senses, the model is able to uncover more senses for most words.

B.3 INITIALIZING K BASED ON WORD FREQUENCY

Although our model has a fixed number of senses, it is easy to implement different numbers of senses per word with a mask matrix, and we can set the number of senses for each word based on its frequency. In Table 1, we show the results from a model in which only the top 30,000 words are initialized with three senses while the others have one. The same choice is made by Neelakantan et al. (2014).
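A sketch of the post-training pruning of Appendix B.1, including the model-specific threshold of Equation 17; the greedy loop below is one way to realize the "prune the sense with the most duplications first" rule and the helper names are illustrative.

```python
import numpy as np

def cosine_distance(a, b):
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def estimate_threshold(d_dup, d_nn):
    """Equation 17: lambda = 0.5 * (mean(D_dup) + mean(D_nn))."""
    return 0.5 * (np.mean(d_dup) + np.mean(d_nn))

def prune_duplicate_senses(sense_vectors, lam):
    """Greedily drop senses of one word whose pairwise cosine distance falls below lambda.

    sense_vectors: (K, d) sense vectors of a single word; returns indices of senses to keep.
    """
    keep = list(range(len(sense_vectors)))
    while True:
        dup_counts = {i: sum(1 for j in keep if j != i and
                             cosine_distance(sense_vectors[i], sense_vectors[j]) < lam)
                      for i in keep}
        worst, count = max(dup_counts.items(), key=lambda kv: kv[1])
        if count == 0:
            break
        keep.remove(worst)  # prune the sense with the most duplications first
    return keep
```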
1. What is the main contribution of the paper, and how does it differ from previous works?
2. What are the strengths and weaknesses of the proposed model, particularly in terms of its ability to disambiguate different sense identities and learn sense representations?
3. How does the reviewer assess the novelty and significance of the paper's contributions, especially compared to prior works such as Lee and Chen (2017) and Li and Jurafsky (2015)?
4. What are some concerns or suggestions the reviewer has regarding the evaluation and discussion of the paper, including the choice of tasks and the lack of error analysis?
5. How might the authors improve their work, for example, by focusing more on the sense selection module or trying the WiC dataset?
Review
This paper proposes GASI to disambiguate different sense identities and learn sense representations given contextual information. The main idea is to use a scaled Gumbel softmax as the sense selection method instead of soft or hard attention, which is the novelty and contribution of this paper. In addition, the authors propose a new evaluation task, contextual word sense selection, which can be used to quantitatively evaluate the semantic meaningfulness of sense embeddings. The proposed model achieves comparable performance to previous models on traditional word/sense intrinsic evaluation and the word intrusion test, while it outperforms baselines on the proposed contextual word sense selection task.

While the scaled Gumbel softmax is the claimed novelty, it is more like an extension of the original MUSE model (Lee and Chen, 2017), which proposed the sense selection and representation learning modules for learning sense-level embeddings. The only difference between the proposed model and Lee and Chen (2017) is the Gumbel softmax instead of reinforcement learning between the sense selection and representation learning modules. Therefore, the idea of the proposed model is similar to Li and Jurafsky (2015), because the sense selection is not one-hot but a distribution. The novelty of this paper is limited because the model is relatively incremental.

From my perspective, the more influential contribution is that this paper points out the importance of evaluating sense selection capability, which is ignored by most prior work. Therefore, I expect to see a more detailed evaluation of the selection module of the model. Also, because the focus of this paper is multi-sense embeddings, the traditional word similarity (without contexts) task seems unnecessary. Moreover, there is no error analysis of the results on the proposed contextual word sense selection task, which might shed more light on the strengths and weaknesses of the model. Finally, I suggest the authors remove the word-level similarity task and try the recently released Word in Context (WiC) dataset, which is a binary classification task that determines whether the meaning of a word is different given two contexts. It would be good to see GASI perform well on this task given its better sense selection module.

Overall, the contribution is somewhat incremental and the evaluation/discussion should focus more on the sense selection module. Considering the issues mentioned above, I expect better quality for an ICLR paper.
ICLR
Title A Differentiable Self-disambiguated Sense Embedding Model via Scaled Gumbel Softmax
Our model extends the Skip-Gram Word2Vec model and simultaneously learns (1) automatic sense induction given local context and (2) sense-specific embeddings. To learn disentangled sense representations (i.e., avoid sense mixing), we approximate hard attention and preserve N/A We present a differentiable multi-prototype word representation model that disentangles senses of polysemous words and produces meaningful sense-specific embeddings without external resources. It jointly learns how to disambiguate senses given local context and how to represent senses using hard attention. Unlike previous multi-prototype models, our model approximates differentiable discrete sense selection via a modified Gumbel softmax. We also propose a novel human evaluation task that quantitatively measures (1) how meaningful the learned sense groups are to humans and (2) how well the model is able to disambiguate senses given a context sentence—an evaluation ignored by previous models. Our model not only discovers distinct, interpretable embeddings but is competitive against previous models on word similarity tasks. 1 SENSE-SPECIFIC EMBEDDING Machine learning models for natural language processing applications often represent words with real valued vector embeddings. Popular word embedding models such as Word2Vec (Mikolov et al., 2013a;b) and GloVe (Pennington et al., 2014) enabled state-of-the-art results on myriad NLP tasks such as sentiment analysis (Kim, 2014; Tai et al., 2015) and textual entailment (Chen et al., 2017). However, for polysemous words (those with multiple senses), learning a single vector for each word type conflates different meanings (e.g., “A hydrogen bond exists between water molecules.” vs. “Do you want to buy this bond?”). This is not a new problem—Schütze (1998) demonstrates the deficiency of assigning just one vector per word—but it is more pernicious in modern models, as conflated senses can pull semantically unrelated words toward each other in the embedding space (Neelakantan et al., 2014; Pilehvar & Collier, 2016; Camacho-Collados & Pilehvar, 2018). To disentangle distinct senses in word embeddings and learn finer-grained semantic clusters, multi-prototype word embedding models learn multiple sense-specific embeddings for a single word (Section 7). But what makes a good multisense word embedding? While word similarity is the most common evaluation, it has many detractors (Faruqui et al., 2016; Gladkova & Drozd, 2016): similarity is subjective and is hard to be differentiate from word relatedness. Moreover, word similarity tasks— with the exception of Stanford Contextual Word Similarity (Huang et al., 2012, SCWS)—ignore polysemous cases or are tied to specific sense inventories (Boyd-Graber et al., 2006). Moreover, these evaluations ignore a key component of learning sense inventories: do they make sense to a human? Previous multisense embedding papers present nearest neighbors to claim their representations are interpretable and useful. Like topic models, these implicit interpretability claims need to be rigorously verified. In Section 6, we adapt techniqes for evaluating topic models (Chang et al., 2009) to measure whether learned sense groups are internally coherent and whether humans can consistently match a learned sense vector to a word in context. Just like topic models, word embedding models that win conventional evaluations do not always make sense to humans. 
We present a simple method that not only correlates well with traditional word similarity evaluations (Section 5) but also discovers interpretable (measured by human evaluations) sense embeddings (Section 6). Our model extends the Skip-Gram Word2Vec model and simultaneously learns (1) automatic sense induction given local context and (2) sense-specific embeddings. To learn disentangled sense representations (i.e., avoid sense mixing), we approximate hard attention and preserve You only live twice, Mr. Bond . . . c̄i ci1 c i m ... G M predict contexts G M Gumbel softmax Marginalization context words center word context embeddings lookup sense embeddings lookup sense attention C S ... wic̃i P (sik|wi, c̃i) chemical 007 financial si1 si2 siK P (cij |wi) Figure 1: Network struture with an example of our GASI model which learns a set of global context embeddings C and a set of sense embeddings S differentiability via a scaled variant of the Gumbel Softmax function (Section 3.2). This modeling contribution—Scaled Gumbel Softmax—is critical for disambiguating senses. 2 FOUNDATIONS: SKIP-GRAM AND GUMBEL SOFTMAX Our model extends Skip-Gram Word2Vec (Mikolov et al., 2013a;b), which jointly learns word embeddings W ∈ R|V |×d and context embeddings C ∈ R|V |×d. More specifically, given a vocabulary V and embedding dimension d, it maximizes the likelihood of the context words cij that surround a given center word wi in a context window c̃i, J(W,C) ∝ ∑ wi∈V ∑ cij∈c̃i logP (cij |wi; W,C), (1) where P (cij |wi) is estimated by a softmax over all possible context words, i.e, the vocabulary, P (cij |wi; W,C) = exp ( cij > wi ) ∑ c∈V exp (c >wi) . (2) In practice, logP (cij |wi) is approximated by negative sampling to reduce computational cost. 2.1 GUMBEL SOFTMAX The Gumbel softmax (Jang et al., 2016; Maddison et al., 2016) approximates the sampling of discrete random variables. Given a discrete random variable X with P (X = k) ∝ αk, αk ∈ (0,∞), the Gumbel-max (Gumbel & Lieblein, 1954; Maddison et al., 2014) refactors the sampling of X into X = arg max k (logαk + gk), (3) where the Gumbel noise gk = − log(− log(uk)) and uk are i.i.d samples drawn from Uniform(0, 1). The Gumbel softmax approximates sampling one hot(arg maxk(logαk + gk)) by yk = softmax((logαk + gk)/τ). (4) 3 GUMBEL-ATTENTION SENSE INDUCTION (GASI) Building on these foundations, we now introduce our model, GASI, and along the way introduce a soft-attention stepping-stone (SASI); afterward, we will compare models on both traditional evaluation metrics and interpretability. The critical component of our model is that we model the sense selection probability, which can be interpreted as sense attention over contexts, into the Skip-Gram model while preserving the original objective through marginalization (Figure 1). By using Gumbel Softmax, our model both approximates discrete sense selection and is differentiable. Previous models are either non-differentiable or otherwise complicate inference through hard attention with reinforcement learning methods (Lee & Chen, 2017). 3.1 ATTENTIONAL SENSE INDUCTION FRAMEWORK Embedding Parameters We learn a context embedding matrix C ∈ R|V |×d and a sense embedding tensor S ∈ R|V |×K×d. Unlike previous work (Neelakantan et al., 2014; Lee & Chen, 2017), no extra embeddings are kept for sense induction. Number of Senses For simplicity and consistency with most previous work, our model has a fixed number of senses K.1 Sense Attention in Objective Function Assuming a center word wi has senses {si1, si2, . . . 
, siK}, the original Skip-Gram likelihood can be written as marginal distribution over all senses of wi with the sense induction probability P (sik |wi), we focus on the sense disambiguation given local context c̃i and estimate P (cij |wi) = K∑ k=1 P (cij | sik)P (sik |wi) ≈ K∑ k=1 P (cij | sik)P (sik |wi, c̃i)︸ ︷︷ ︸ attention , (5) Replacing P (cij |wi) in Equation 1 with Equation 5 gives our objective function J(S,C) ∝ ∑ wi∈V ∑ cij∈c̃i log K∑ k=1 P (cij | sik)P (sik |wi, c̃i). (6) Lower Bound the Objective for Negative Sampling Like the Skip-Gram objective (Equation 2), we model the likelihood of a context word given the center sense P (cij | sik) using softmax, P (cij | sik) = exp ( cij > sik ) ∑|V | j=1 exp ( c>j s i k ) , (7) where the bold symbol sik is the vector representation of sense s j k from S, and cj is the context embedding of word cj from C. Computing the softmax over the vocabulary is time-consuming. We want to adopt negative sampling to approximate logP (cij | sik), which does not exist explicitly in our objective function (Equation 6).2 However, given the concavity of the logarithm function, we can apply Jensen’s inequality, log K∑ k=1 P (cij | sik)P (sik |wi, c̃i) ≥ K∑ k=1 P (sik |wi, c̃i) logP (cij | sik), (8) and create a lower bound of the objective. Maximizing this lower bound gives us a tractable objective, J(S,C) ∝ ∑ wi∈V ∑ cij∈c̃i K∑ k=1 P (sik |wi, c̃i) logP (cij | sik), (9) where logP (cij | sik) is estimated by negative sampling Mikolov et al. (2013b), log σ(cij > sik) + n∑ j=1 Ecj∼Pn(c)[log σ(−c > j s j k))], (10) 1We can prune the duplicated senses for words that have senses less than K, details in Appendix B. We can also set different number of senses based on word frequency in the training, details in Appendix B.3. 2Deriving the negative sampling requires the logarithm of a softmax (Goldberg & Levy, 2014). c̄⊤i s i k gk : Gumbel noise Modeling Sense Attention We can model the attention term, contextual sense induction distribution, with soft attention; we call the resulting model soft-attention sense induction (SASI); although it is a stepping stone to our final model, we compare against it in our experiments as it helps isolate the contributions of hard attention. In SASI, the sense attention is conditioned on the entire local context c̃i with softmax: P (sik |wi, c̃i) = exp ( c̄>i s i k )∑K k=1 exp ( c̄>i s i k ) , (11) where c̄i is the mean of the context vectors in c̃i. 3.2 SCALED GUMBEL SOFTMAX FOR SENSE DISAMBIGUATION To reduce separate senses and learn distinguishable sense representations, we implement hard attention in our full model, GASI. To preserve differentiability and circumvent the difficulties in training with reinforcement learning (Sutton & Barto, 1998), we apply the reparameterization trick with Gumbel softmax (Section 2.1) to our sense attention function (Equation 11) and make a continuous relaxation. Vanilla Gumbel Attention The discrete sense sampling from Equation 11 can be refactored by zi = one hot(arg max k (c̄i >sik + gk)), (12) and the hard attention is approximated with yik = softmax((c̄i >sik + gk)/τ). (13) Scaled Gumbel Softmax for Sense Disambiguation Gumbel softmax learns a flat distribution over senses even with low temperatures (Figure 2): the dot product c̄>i s i k is too small compared to the Gumbel noise gk (Figure 3).3 Thus we use a scaling factor β to reduce the randomness,4 and tune it as a hyperparameter.5 γik = softmax((c̄i >sik + βgk)/τ), (14) We use GASI-β to identify the GASI model with scaling factor. 
This modification is critical for learning distinguishable senses (Figure 2, Table 1, and Table 5). Final Objective Function The objective function of our GASI-β model is J(S,C) ∝ ∑ wi∈V ∑ wc∈ci K∑ k=1 softmax((c̄i >sjk + βgk)/τ) logP (wc | s i k). (15) 3Float32 precision, the saturation of log(σ(·)) and gradient vanishing result in a small range of c̄>i sik. 4Normalizing c̄>i s i k or directly using logP (s i k |wi, c̃i) results in a similar outcome. 5Learning β instead of fixing it as a hyperparameter does not successfully disambiguate senses. 4 TRAINING SETTINGS For fair comparisons, we try to remain consistent with previous work (Huang et al., 2012; Neelakantan et al., 2014; Lee & Chen, 2017) in all aspects of training. In particular, we train GASI on the same April 2010 Wikipedia snapshot (Shaoul C., 2010) with 1B tokens the same vocabulary released by Neelakantan et al. (2014); set the number of senses K = 3 and dimension d = 300 for each word unless otherwise specified. More details are in Appendix A. We fix the temperature τ = 0.5,6 and tune the scaling factor β from {0.1, 0.2, ...,0.9} on the AvgSimC measure for the contextual word similarity task (Section 5). The optimal scaling factor β is 0.4. If not reprinted, numbers for competing models are either computed with pre-trained embeddings released by authors or trained on released code.7 5 WORD SIMILARITY EVALUATION We first compare our GASI and GASI-β model with previous work on standard word similarity tasks before turning to interpretability experiments. Each task has word pairs with a similarity/relatedness score. For evaluation, we measure Spearman’s rank correlation ρ (Spearman, 1904) between word embedding similarity and the gold similarity judgements: higher scores imply the model captures semantic similarities consistent with the trusted similarity scores. Contextual Word Similarity Tailored for sense embedding evaluation, Stanford Contextual Word Similarities (Huang et al., 2012, SCWS) has 2003 word pairs and similarity scores with sentential context. Moreover, the word pairs and their contexts reflect homonymous and polysemous words. Therefore, we use this dataset to tune our hyperparameters. To compute the word similarity with senses we use two metrics Reisinger & Mooney (2010) that take context and sense disambiguation into account: MaxSimC computes the cosine similarity cos(s∗1, s ∗ 2) between the two most probable senses s ∗ 1 and s ∗ 2 that maximizes P (sik |wi, c̃i). AvgSimC weights average similarity over the combinations of all senses∑K i=1 ∑K i=j P (s 1 i |w1, c̃1)P (s2j |w2, c̃2) cos(s1i s2j ). We compare variants of our model with multi-prototype sense embedding models (Table 1), including two previous state-of-the-art models: the clustering-based Multi-Sense Skip-Gram model (Neelakantan et al., 2014, MSSG) on AvgSimC metric and the RL-based Modularizing Unsupervised 6This is similar to the experiment settings for Gumbel softmax in Maddison et al. (2016) 7We adopt the numbers for Li & Jurafsky (2015) from Lee & Chen (2017) and tune the PDF-GM (Athiwaratkun et al., 2018) model on the same 1B corpus and vocabulary as previous works using https://github.com/ benathi/multisense-prob-fasttext with suggested hyperparameters and select the best results. Model MaxSimC AvgSimC Huang et al. (2012)-50d 26.1 65.7 MSSG-6K 57.3 69.3 MSSG-30K 59.3 69.2 Tian et al. (2014) 63.6 65.4 Li & Jurafsky (2015) 66.6 66.8 Qiu et al. (2016) 64.9 66.1 Bartunov et al. 
(2016) 53.8 61.2 MUSE Boltzmann 67.9 68.7 SASI 55.1 67.8 GASI(w/o scaling) 68.2 68.3 GASI-β 66.4 69.5 Table 1: Spearman’s correlation 100ρ on SCWS (trained 1B token, 300d vectors except for Huang et al.) Model Accuracy(%) Unsupervised Multi-prototype models MSSG-30K 54.00 MUSE Boltzmann 52.14 GASI-β 55.27 Multi-prototype models with external lexical resources DeConf 58.55 SW2V 54.56 Table 2: Unsupervised sense selection accuracy on Word in Context Sense Embeddings (Lee & Chen, 2017, MUSE) on MaxSimC. All three are better than the baseline Skip-Gram model (65.2 using the word embedding). GASI better captures similarity than SASI, corroborating that hard attention aids word sense selection. GASI without scaling (β) has the best MaxSimC; however, it learns a flat sense distribution (Figure 2). GASI-β has the best AvgSimC and a competitive MaxSimC. While MUSE has a higher MaxSimC than GASI-β, it fails to distinguish senses as well (Figure 4, Section 6). The Probabilistic FastText Gaussian Mixture (Athiwaratkun et al., 2018, PDF-GM) is SOTA on multiple non-contextual word similarity tasks (Table 3). Without sense selection module given context, we evaluate PDF-GM on MaxSim (Equation 16), which is 66.4. Our GASI-β has the same on MaxSim, and better correlation on AvgSimC (69.5). Word Sense Selection in Context SCWS evaluates models’ ability of sense selection indirectly. We further compare GASI-β with previous SOTA, MSSG-30K and MUSE, on the Word in Context dataset (Pilehvar & Camacho-Collados, 2018, WiC) which requires the model to identify whether a word has the same sense in two contexts. Lacking ground truth for the development set,8 to reduce the variance in training and to focus on evaluating the sense selection module, we use an evaluation suited for unsupervised models: if the model selects different sense vectors given contexts, we mark that the word has different senses.9 For MUSE, MSSG and GASI-β, we use each model’s sense selection module; for DeConf (Pilehvar & Collier, 2016) and SW2V (Mancini et al., 2017), we follow Pilehvar & Camacho-Collados (2018) and Pelevina et al. (2016) by selecting the closest sense vectors to the context vector. Results on DeConf are comparable to supervised results (59.4± 0.7). Our GASI-β has the best result apart from DeConf itself, which uses the same sense inventory (Miller & Fellbaum, 1998, WordNet) used to build WiC. This evaluation, however, does not reflect the interpretability of the senses themselves. We address this in Section 6. Non-Contextual Word Similarity To evaluate the semantics captured by each sense-specific embeddings, we compare the models on the non-contextual word similarity datasets: RG-65 (Rubenstein & Goodenough, 1965); SimLex-999 (Hill et al., 2015); WS-353 (Finkelstein et al., 2002); MEN-3k (Bruni et al., 2014); MC-30 (Miller & Charles, 1991); YP-130 (Yang & Powers, 2006); MTurk-287 (Radinsky et al., 2011); MTurk-771 (Halawi et al., 2012); RW-2k (Luong et al., 2013). Similar to Lee & Chen (2017) and Athiwaratkun et al. (2018), we compute the word similarity based on senses by MaxSim (Reisinger & Mooney, 2010), which maximizes the cosine similarity over the combination of all sense pairs and does not require local contexts, MaxSim(w1, w2) = max 0≤i≤K,0≤j≤K cos(s1i , s 2 j ). (16) GASI-β has better correlation on three datasets, is competitive on the rest (Table 3), and remains competitive without scaling. GASI is better than MUSE, the other hard-attention multi-prototype model, on six datasets and worse on three. 
Our model can reproduce word similarities as well or better than existing models through our sense selection. 8Unavailable as of November 2018 at https://pilehvar.github.io/wic/ 9For words not in vocabulary or only have one sense learned, we chose randomly. 6 CROWDSOURCING EVALUATION GASI can capture word similarity (Section 5), but do the learned representations make sense? Could a human use them to help build a dictionary? If you show a human the senses, can they understand why a model would assign a sense to that context? In this section we evaluate whether the representations make sense to human consumers of multisense models. Qualitive analysis Previous papers use nearest neighbors of a few examples to qualitatively argue that their models have captured meaningful senses of words. We also give an example in Figure 4, which provides an intuitive view on how the learned senses are clustered by visualizing the nearest neighbors of word “bond” using t-SNE projection (Maaten & Hinton, 2008). Our proposed model (right) disentangles the three sense of “bond” clearly and learns three distinct sense vectors. However, the examples can be cherry-picked and lack standards. This problem also bedeviled topic modeling until the introduction of rigorous human evaluation (Chang et al., 2009). We adapt both aspects Chang et al’s evaluations: word intrusion (Schnabel et al., 2015) to evaluate whether individual senses are coherent and topic intrusion—rather sense intrusion in this case—to evaluate whether humans agree with models’ sense assignments in context. Both crowdsourcing tasks collect human inputs on Figure-Eight. We compare our models with two previous state-of-the-art multi-prototype sense embeddings models that disambiguate senses given local context, i.e., MSSG (Neelakantan et al., 2014) and MUSE (Lee & Chen, 2017).10 6.1 WORD INTRUSION FOR SENSE COHERENCE Schnabel et al. (2015) suggests a “good” word embedding should have coherent neighbors and evaluate coherence by word intrusion. They presents crowdworkers four words: three are close in embedding space while one of which is an “intruder”. If the embedding makes sense, contributors will easily spot the word that “does not belong”. Similarly, we examine the coherence of ten nearest neighbors of senses in the contextual word sense selection task (Section 6.2) and replace one neighbor with an “intruder” (Figure 5). We generate three intruders for each sense and collect three judgements per intruder. We consider the “intruder” to be correctly selected if at least two judgements are correct. Figure 5: Word intrusion task prompt Model Sense-level Judgement-level AggrementAccuracy Accuracy MUSE 67.33 62.89 0.73 MSSG-30K 69.33 66.67 0.76 GASI-β 71.33 67.33 0.77 Table 4: Word intrusion evalutations on top ten nearest neighbors of sense embeddings. 10MSSG has two settings; we run human evaluation with MSSG-30K which has higher correlation with MaxSimC on SCWS. Question (required) Vandiver mentions the $100 million highway bond issue approved earlier in the 007, octopussy, moneypenny, goldfinger, thunderball, moonraker, goldeneye atom, transition, bonding, covalent, hydrogen, molecule, substituent, carbons mortgage-backed, securities, coupon, debenture, repurchase, refinance, surety, * Choose one sense group that the target (underlined) word fits best. Like Chang et al. (2009), we want the “intruder” to not be too different in terms of frequency to the target set but not too similar semantically. 
For sense smi of word type wi, we randomly select a word from the neighbors of another sense sni of wi but with a low threshold, i.e., any words that has cosine similarity larger than 0.0 can be viewed as a neighbor. Result and Analysis All models have comparable model accuracy. GASI-β learns senses that have the highest coherency among top ten nearest neighbors while MUSE learns more sense mixtures. Inter-rater Agreement We use the aggregated confidence score provided by Figure-Eight to estimate the level of agreement between multiple contributors.11 The agreements are high for all models and our GASI-β has the highest agreement, suggesting that the senses learned by GASI-β are easier to interpret. 6.2 CONTEXTUAL WORD SENSE SELECTION The previous task measures whether individual senses are coherent. In this task, we measure whether the learned senses by sense embedding models make sense human and evaluate the models’ ability to disambiguate senses in context. Task Description Given a target word in context, we ask a crowdworker to select which sense group best fits the sentence. Each sense group is described by its top ten distinct nearest neighbors (Figure 6).12 Data Collection We select fifty nouns with five sentences from SemCor 3.0 (Miller et al., 1994). We first filter all word types with fewer than ten sentences and select the fifty most polysemous nouns from WordNet (Miller & Fellbaum, 1998) among the remaining senses. For each noun, we randomly select five sentences. Metrics For each model, we collect three judgements for each question. We consider a model correct if at least two crowdworkers select the same sense as the model. We also consider the probability P assigned to the human choices by the model, indicating the model’s confidence in sense selection. P = 1/3 indicates the model learns flat, uniform sense induction distribution is unable to disambiguate senses. Sense disambiguation and interpretability If humans consistently pick the same sense as the model: 1) humans can interpret the nearest neighbor words (as measured by the previous experiment); 2) the senses are distinguishable to human; 3) the human’s choice is consistent with the model’s. Results and Analysis GASI-β selects senses that are most consistent with humans; it has the highest accuracy and assigns the largest probability assigned to the human choices (Table 5). Thus, GASI-β produces sense embeddings that are both more interpretable and distinguishable. GASI without a scaling factor, however, has low consistency and flat sense distribution. Inter-rater Agreement We use the confidence score computed by Figure-Eight to estimate the rater’s agreement for this task as well. Our GASI-β achieves the highest human-model agreement while both MUSE and GASI without scaling have the lowest. 11https://success.figure-eight.com/hc/en-us/articles/201855939-How-to-Calculate-a-Confidence-Score 12We shuffle the choices for questions with the same target word. Model Accuracy P Agreement MUSE 28.0 0.33 0.68 MSSG-30K 44.5 0.37 0.73 GASI (no β) 33.8 0.33 0.68 GASI-β 50.0 0.48 0.75 Table 5: Human-model consistency on contextual word sense selection; P is the average probability assigned by the model to the human choices. GASIβ is most consistent with human. MUSE MSSG GASI-β word overlaps correct 4.78 0.39 1.52 error 5.43 0.98 6.36 cosine sim by Glove correct 0.86 0.33 0.36 error 0.88 0.57 0.81 Table 6: Similarities of human and model choices when they disagree (error) vs. 
similarities between the senses that both the human and the model select with other senses of the same word (correct). Humans agree with the model when the senses are distinct. Error Analysis Next, we attempt to answer why crowdworkers disagree with the model although they can interpret most senses (as measured by the word intrusion task, Table 4). Is it that the model has learned duplicate senses that neither the users nor the model can distinguish, or is it that crowdworkers agree with each other but disagree with the model? The former relates to the model’s ability to learn human-distinguishable senses, while the latter relates to the model’s ability to select senses in context. Two trends reveal that duplicated senses that are not distinguishable to humans are one of the main causes of human-model disagreement. First, users agree with the model when the senses are distinct (Table 6, correct), while disagreement rises with more similar senses (Table 6, error); second, more distinct senses allow higher inter-rater agreement (Figure 7). We measure distinctness both by counting the number of shared nearest neighbors and by the average cosine similarities of GloVe embeddings.13 13 Different models learn different representations; we use GloVe for a uniform basis of comparison. Specifically, MUSE learns duplicate senses for most words, preventing users from choosing appropriate senses and resulting in random human-model agreement. GASI-β learns some duplicated senses and some distinguishable senses. MSSG appears to learn the least similar senses, but they are not distinguishable enough for humans. For MSSG, small neighbor overlaps do not necessarily help humans distinguish between senses: users disagree with each other (agreement 0.33) even when the number of overlaps is very small (Figure 7). An intuitive example is shown in Table 7, which demonstrates the necessity of human evaluation. If we use rater agreement to measure how distinguishable the learned senses are to humans, GASI-β learns the most distinguishable senses (histogram in Figure 8). Figure 8 also shows that the model is more likely to agree with humans when humans agree more with each other (as a result of more distinct senses), i.e., human-model consistency correlates with rater agreement (Figure 8). MSSG disagrees with humans more even when raters agree with each other, indicating worse sense selection ability. 6.3 WORD SIMILARITY VS. SENSE DISAMBIGUATION The evaluation results on the word similarity tasks (Section 5) and the human evaluations (Section 6) are inconsistent for several models. GASI, GASI-β, and MUSE are all competitive in word similarity (Table 1 and Table 3), but only GASI-β also does well in the human evaluations (Table 5). Both GASI without scaling and MUSE fail to learn distinguishable senses and cannot disambiguate senses given local context. High word similarities do not necessarily indicate “good” sense embedding quality; our human evaluation—contextual word sense selection—is complementary. 7 RELATED WORK Schütze (1998) introduces context-group discrimination for senses and uses the centroid of context vectors as a sense representation. Other work induces senses by context clustering (Purandare & Pedersen, 2004) or probabilistic mixture models (Brody & Lapata, 2009). Reisinger & Mooney (2010) first introduce multiple sense-specific vectors for each word, inspiring other multi-prototype sense embedding models. Generally, to address polysemy in word embeddings, some previous work
trained on annotated sense corpora (Iacobacci et al., 2015) or external sense inventories (Labutov & Lipson, 2013; Chen et al., 2014; Jauhar et al., 2015; Chen et al., 2015; Wu & Giles, 2015; Pilehvar & Collier, 2016; Mancini et al., 2017); Rothe & Schütze (2015; 2017) extend word embeddings to lexical resources without training; others induce senses via multilingual parallel corpora (Guo et al., 2014; Šuster et al., 2016; Ettinger et al., 2016).
Figure 7: Average in-word sense similarities (neighbor overlaps and GloVe cosine similarities) for MSSG, MUSE, and GASI-β, grouped by rater agreement (0.33, 0.67, 1.00); more distinct senses within each word lead to higher inter-rater agreement.
We contrast our GASI with unsupervised monolingual multi-prototype models along two dimensions: sense induction methodology and differentiability. 1) Huang et al. (2012) and Neelakantan et al. (2014) induce senses by context clustering; Tian et al. (2014) model a corpus-level sense distribution; Li & Jurafsky (2015) model the sense assignment as a Chinese Restaurant Process; Qiu et al. (2016) induce senses by minimizing an energy function on a context-dependent network; Bartunov et al. (2016) model the sense assignment as a stick-breaking process; Nguyen et al. (2017) model the sense embeddings as a weighted combination of topic vectors with weights pre-computed by topic models; Athiwaratkun & Wilson (2017) and Athiwaratkun et al. (2018) model word representations as Gaussian mixture embeddings where each Gaussian component captures a different sense; Lee & Chen (2017) compute the sense distribution with a separate set of sense induction vectors; our GASI instead marginalizes the likelihood of contexts over senses and induces senses from local context vectors. The most similar sense selection module is a bilingual model (Šuster et al., 2016), except that it does not introduce a lower bound for negative sampling but uses weighted embeddings, which results in more sense mixing. 2) Most sense selection models are non-differentiable and select senses discretely, with two exceptions: Šuster et al. (2016) use weighted vectors over senses, and Lee & Chen (2017) implement hard attention with RL to mitigate the non-differentiability. In contrast, GASI keeps full differentiability by reparameterization and approximates discrete sense sampling with a scaled Gumbel softmax. 8 CONCLUSION The goal of multi-sense word embeddings is not just to win word sense evaluation datasets; rather, they should also describe language: given millions of tokens of a language, what are the patterns in the language that can help a lexicographer or linguist in day-to-day tasks like building dictionaries or understanding semantic drift. Our differentiable Gumbel Attention Sense Induction (GASI) offers the best of both worlds: comparable word similarities while also learning more distinguishable, interpretable senses. A TRAINING DETAILS During training, we fix the window size to five and the dimensionality of the embedding space to 300 for comparison with previous work. We initialize both sense and context embeddings randomly within U(-0.5/dim, 0.5/dim) as in Word2Vec. We set the initial learning rate to 0.01; it is decreased linearly until training concludes after 5 epochs. The batch size is 512, and we use five negative samples per center word-context pair as suggested by Mikolov et al. (2013a). The subsample threshold is 1e-4. We train our model on a GeForce GTX 1080 Ti, and our implementation (using PyTorch 3.0) takes ∼6 hours to train one epoch on the April 2010 Wikipedia snapshot (Shaoul C., 2010) with a 100k vocabulary. For comparison, our implementation of Skip-Gram in the same framework takes ∼2 hours per epoch.
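For quick reference, the training setup in Appendix A (together with the τ and β values reported in Section 4) can be collected into a single configuration block. This is a hedged summary sketch in our own naming, not released training code.

```python
# Hyperparameters reported for training GASI-β (Appendix A, Section 4).
gasi_config = {
    "embedding_dim": 300,         # sense and context embedding dimensionality
    "window_size": 5,             # context window
    "num_senses": 3,              # K, fixed for every word
    "negative_samples": 5,        # per center word-context pair
    "subsample_threshold": 1e-4,  # frequent-word subsampling
    "batch_size": 512,
    "initial_lr": 0.01,           # decayed linearly over training
    "epochs": 5,
    "temperature": 0.5,           # Gumbel softmax temperature tau
    "beta": 0.4,                  # scaling factor on the Gumbel noise
    "init_range": (-0.5 / 300, 0.5 / 300),  # uniform init, as in Word2Vec
}
```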
B NUMBER OF SENSES For simplicity and consistency with most previous work, we present our model with a fixed number of senses K. B.1 POST-TRAINING PRUNING For words that do not have multiple senses or whose senses mostly appear very infrequently in the corpus, our model (like many previous models) learns duplicate senses. We can easily remove such duplicates by pruning the learned sense embeddings with a threshold λ. Specifically, for each word w_i, if the cosine distance between any pair of its sense embeddings (s_m^i, s_n^i) is smaller than λ, we consider them to be duplicates. After discovering all duplicate pairs, we start pruning with the sense s_k^i that has the most duplicates and keep pruning with the same strategy until no more duplicates remain. Model-specific pruning We estimate a model-specific threshold λ from the learned embeddings instead of choosing it arbitrarily. Therefore, this pruning method is also applicable to other sense embedding models. We first sample 100 words from the negative sampling distribution over the vocabulary. Then, we retrieve the five nearest neighbors (from all senses of all words) to each sense of each sampled word. If one of a word’s own senses appears as a nearest neighbor, we append the distance between them to a sense duplication list Ddup. For other nearest neighbors, we append their distances to the word neighbor list Dnn. After populating the two lists, we want to choose a threshold that prunes away all of the sense duplicates while still separating sense duplicates from other, distinct neighbor words. Thus, we compute λ = 1/2 (mean(Ddup) + mean(Dnn)). (17) Table 1 compares the sense embeddings after pruning with the original model on the Stanford Contextual Word Similarities (SCWS) task (Huang et al., 2012). Both AvgSimC and MaxSimC with post-pruning embeddings decrease only slightly compared to those from GASI-0.4. B.2 NUMBER OF SENSES VS. WORD FREQUENCY It is a common assumption that more frequent words have more senses. Figure 1 shows a histogram of the number of senses left for words ranked by their frequency, and the results agree with this assumption. Generally, the model learns more senses for highly frequent words, except for the most frequent ones. The most frequent words are usually considered stopwords, such as “the”, “a”, and “our”, which have only one common meaning. Moreover, we compare our model initialized with three senses (GASI-0.4, K = 3) against one initialized with five (GASI-0.4, K = 5). Initialized with a larger number of senses, the model is able to uncover more senses for most words. B.3 INITIALIZING K BASED ON WORD FREQUENCY Although we present our model with a fixed number of senses, it is easy to implement different numbers of senses per word with a mask matrix, and we can set the number of senses for each word based on its frequency. In Table 1, we show the results from a model in which only the top 30,000 words are initialized with three senses while the others have one. The same choice is applied by Neelakantan et al. (2014).
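The post-training pruning of Appendix B.1 can be sketched in a few lines. The version below assumes sense embeddings stored as a dict keyed by (word, sense_id) and a unigram negative-sampling distribution over the vocabulary; the helper names and data layout are our own assumptions, so treat this as an illustration of Equation 17 and the greedy pruning rule rather than the authors' implementation.

```python
import numpy as np

def cosine_distance(u, v):
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def estimate_threshold(sense_vecs, neg_sampling_probs, rng, n_words=100, k=5):
    """Estimate lambda = (mean(D_dup) + mean(D_nn)) / 2 as in Equation 17."""
    words = list(neg_sampling_probs)
    probs = np.array([neg_sampling_probs[w] for w in words], dtype=float)
    sampled = rng.choice(words, size=n_words, p=probs / probs.sum())
    keys = list(sense_vecs)
    mat = np.stack([sense_vecs[key] for key in keys])
    d_dup, d_nn = [], []
    for w in sampled:
        for (word, sid) in keys:
            if word != w:
                continue
            dists = np.array([cosine_distance(sense_vecs[(word, sid)], row) for row in mat])
            order = np.argsort(dists)
            for idx in order[1:k + 1]:       # skip the sense itself, take 5 nearest
                nbr_word, _ = keys[idx]
                (d_dup if nbr_word == w else d_nn).append(dists[idx])
    return 0.5 * (np.mean(d_dup) + np.mean(d_nn))

def prune_duplicates(sense_vecs, lam):
    """Greedily drop the sense with the most duplicates until no two senses of
    the same word are closer than lambda in cosine distance."""
    vecs = dict(sense_vecs)
    while True:
        dup_counts = {key: 0 for key in vecs}
        for (w1, s1), v1 in vecs.items():
            for (w2, s2), v2 in vecs.items():
                if w1 == w2 and s1 != s2 and cosine_distance(v1, v2) < lam:
                    dup_counts[(w1, s1)] += 1
        worst = max(dup_counts, key=dup_counts.get)
        if dup_counts[worst] == 0:
            return vecs
        del vecs[worst]

# Usage: lam = estimate_threshold(sense_vecs, unigram_probs, np.random.default_rng(0))
#        pruned = prune_duplicates(sense_vecs, lam)
```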
1. What is the main contribution of the paper in the field of multi-sense word embeddings? 2. What are the strengths of the proposed approach, particularly in its training objective and use of Gumbel-softmax reparametrization trick? 3. What are the suggestions provided by the reviewer regarding the number of senses, evaluation methods, and comparison with other existing methods? 4. How does the reviewer assess the empirical gains of the method, and what are their concerns regarding its applicability in real-world scenarios? 5. What is the significance of the lower bound on the log likelihood objective, and how does it relate to negative sampling?
Review
Review The paper presents a method for deriving multi sense word embeddings. The key idea behind this method is to learn a sense embedding tensor using a skip-gram style training objective. The objective defines the probability of contexts marginalised over latent sense embeddings. The paper uses Gumbel-softmax reparametrization trick to approximate sampling from the discrete sense distributions. The method also uses a separate hyperparameter to help scale the dot product appropriately. Strengths: 1. The technique is a well-motivated solution for a hard problem that builds on the skip-gram model for learning word embeddings. 2. A new manual evaluation approach for comparing sense induction approaches. 3. The empirical advance while relatively modest appears to be significant since the technique seems to yield better results than multiple baselines across a range of tasks. Suggestions: 1. The number of senses is fixed to three. This is a bit arbitrary, even though it is following some precedence. I like the information in the appendix that shows how to handle cases when there are duplicate senses induced for words that dont have many senses. It would be useful to know how to handle the cases where a word can have more than three senses. Given that the authors have a way of pruning duplicate senses, it would have been interesting to try a few basic methods that select the number of senses per word dynamically. 2. The evaluation includes word similarity task and crowdsourcing for sense intrusion and sense selection. These provide a measure of intrinsic quality of the sense based embeddings. However, as Li and Jurafsky (2015) point out, typically applications use more powerful models that use a wide context. It is not clear how these improvements to sense embeddings will translate in these settings. It would have been useful to have at least one or two end applications to illustrate this. 3. Given that the empirical gains are not quite consistent, I would encourage the authors to specifically argue why this particular method should be favoured over other existing methods. The related work discussion merely highlights methodological differences. For example, the contrast with Lee and Chen (2017) seems to be only that of differentiability. Is the claim that differentiability is desirable because this allows for fine tuning in applications? If this is the case then it will be nice to have this verified. 4. The lower bound on the log likelihood objective is good but what are we supposed to take away from it? Is it that there is an interpretation that allows us to get away with negative sampling? Overall I like the paper. It presents an application of the Gumbel-softmax trick for sense embeddings induction and shows some empirical evidence for the usefulness of this idea, including some manual evaluation. I think the evaluation could be strengthened with some end applications and much crisper arguments on why the method is preferable over other methods that achieve comparable performance. References: [Li and Jurafsky., EMNLP 2015] Do Multi-Sense Embeddings Improve Natural Language Understanding?
ICLR
1. What is the main contribution of the paper, and how does it extend the skipgram model? 2. What are the two proposed models for training sense embeddings, and how do they differ? 3. How does the Gumbel softmax variant contribute to the training process, and what are its advantages? 4. Can you explain the crowdsourced evaluation method and its significance in assessing the performance of the proposed method? 5. How does the method perform on the non-contextual word similarity task, and how does it compare to other methods? 6. What is the issue with Equation 6, and how should it be corrected or clarified? 7. Why is the description of the number of senses pruning method considered a non sequitur, and how could it be improved? 8. Are there any minor issues with the writing style or notation usage throughout the paper that could be addressed?
Review
Review * Summary This paper extends the skipgram model using one vector per sense of a word. Based on this, the paper proposes two models for training sense embeddings: One where the word senses are marginalized out with attention over the senses, and the second where only the sense with highest value of attention contributes to the loss. For the latter case, the paper uses a variant of Gumbel softmax for training. The paper shows evaluations on benchmark datasets that shows that the Gumbel softmax based method is competitive or better than other methods. Via a crowdsourced evaluation, the paper shows that the method also produces human interpretable clusters. * Review This paper is generally well written and presents a plausible solution for the problem of discovering senses in an unsupervised fashion. If \beta=0, then we get SASI, right? How well does this perform on the non-contextual word similarity task? Also, on the crowd sourced evaluation? The motivation for the hard attention/Gumbel softmax is to learn sense representations that are distinguishable. But do the experiments test this? There's something strange about Eq 6. If I understand this correctly, \tilde{c_i} is the context and c_j^i is the j^th context word. Then P(c_j^i | w, \tilde{c_i}) should be 1 because the context is given, right? While the motivation for the right hand side makes sense, the notation could use work. The description of how the number of senses is pruned in section 3.1 seems to be a bit of a non sequitur. It is not clear whether this is used in the experiments and if so, how it compares. The appendix gives more details, but it seems a bit out of place even then because the evaluations don't seem to use it. * Minor comments There are some places where the writing could be cleaned up. - Eq 16 changes the notation for the sense embeddings and the context words from earlier, say Eq 12. - Parenthetical citations would be more appropriate in some places Eg: above Eq 3, in footnote 3 - Page 6, above 6.2: Figure-Figure? - Page 9, Agreement paragraph: hight -> highest
ICLR
Title Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds Abstract We present an efficient coresets-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network’s output. Our approach is based on an importance sampling scheme that judiciously defines a sampling distribution over the neural network parameters, and as a result, retains parameters of high importance while discarding redundant ones. We leverage a novel, empirical notion of sensitivity and extend traditional coreset constructions to the application of compressing parameters. Our theoretical analysis establishes guarantees on the size and accuracy of the resulting compressed network and gives rise to generalization bounds that may provide new insights into the generalization properties of neural networks. We demonstrate the practical effectiveness of our algorithm on a variety of neural network configurations and real-world data sets. 1 INTRODUCTION Within the past decade, large-scale neural networks have demonstrated unprecedented empirical success in high-impact applications such as object classification, speech recognition, computer vision, and natural language processing. However, with the ever-increasing size of state-of-the-art neural networks, the resulting storage requirements and performance of these models are becoming increasingly prohibitive in terms of both time and space. Recently proposed architectures for neural networks, such as those in Krizhevsky et al. (2012); Long et al. (2015); Badrinarayanan et al. (2015), contain millions of parameters, rendering them prohibitive to deploy on platforms that are resource-constrained, e.g., embedded devices, mobile phones, or small scale robotic platforms. In this work, we consider the problem of sparsifying the parameters of a trained fully-connected neural network in a principled way so that the output of the compressed neural network is approximately preserved. We introduce a neural network compression approach based on identifying and removing weighted edges with low relative importance via coresets, small weighted subsets of the original set that approximate the pertinent cost function. Our compression algorithm hinges on extensions of the traditional sensitivity-based coresets framework (Langberg & Schulman, 2010; Braverman et al., 2016), and to the best of our knowledge, is the first to apply coresets to parameter downsizing. In this regard, our work aims to simultaneously introduce a practical algorithm for compressing neural network parameters with provable guarantees and close the research gap in prior coresets work, which has predominantly focused on compressing input data points. In particular, this paper contributes the following: 1. A coreset approach to compressing problem-specific parameters based on a novel, empirical notion of sensitivity that extends state-of-the-art coreset constructions. 2. An efficient neural network compression algorithm, CoreNet, based on our extended coreset approach that sparsifies the parameters via importance sampling of weighted edges. 3. Extensions of the CoreNet method, CoreNet+ and CoreNet++, that improve upon the edge sampling approach by additionally performing neuron pruning and amplification. 
†Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, emails: {baykal, lucasl, igilitschenski, rus}@mit.edu ‡Robotics and Big Data Laboratory, University of Haifa, email: dannyf.post@gmail.com *These authors contributed equally to this work 4. Analytical results establishing guarantees on the approximation accuracy, size, and generalization of the compressed neural network. 5. Evaluations on real-world data sets that demonstrate the practical effectiveness of our algorithm in compressing neural network parameters and validate our theoretical results. 2 RELATED WORK Our work builds upon the following prior work in coresets and compression approaches. Coresets Coreset constructions were originally introduced in the context of computational geometry (Agarwal et al., 2005) and subsequently generalized for applications to other problems via an importance sampling-based, sensitivity framework (Langberg & Schulman, 2010; Braverman et al., 2016). Coresets have been used successfully to accelerate various machine learning algorithms such as k-means clustering (Feldman & Langberg, 2011; Braverman et al., 2016), graphical model training (Molina et al., 2018), and logistic regression (Huggins et al., 2016) (see the surveys of Bachem et al. (2017) and Munteanu & Schwiegelshohn (2018) for a complete list). In contrast to prior work, we generate coresets for reducing the number of parameters – rather than data points – via a novel construction scheme based on an efficiently-computable notion of sensitivity. Low-rank Approximations and Weight-sharing Denil et al. (2013) were among the first to empirically demonstrate the existence of significant parameter redundancy in deep neural networks. A predominant class of compression approaches consists of using low-rank matrix decompositions, such as Singular Value Decomposition (SVD) (Denton et al., 2014), to approximate the weight matrices with their low-rank counterparts. Similar works entail the use of low-rank tensor decomposition approaches applicable both during and after training (Jaderberg et al., 2014; Kim et al., 2015; Tai et al., 2015; Ioannou et al., 2015; Alvarez & Salzmann, 2017; Yu et al., 2017). Another class of approaches uses feature hashing and weight sharing (Weinberger et al., 2009; Shi et al., 2009; Chen et al., 2015b;a; Ullrich et al., 2017). Building upon the idea of weight-sharing, quantization (Gong et al., 2014; Wu et al., 2016; Zhou et al., 2017) or regular structure of weight matrices was used to reduce the effective number of parameters (Zhao et al., 2017; Sindhwani et al., 2015; Cheng et al., 2015; Choromanska et al., 2016; Wen et al., 2016). Despite their practical effectiveness in compressing neural networks, these works generally lack performance guarantees on the quality of their approximations and/or the size of the resulting compressed network. Weight Pruning Similar to our proposed method, weight pruning (LeCun et al., 1990) hinges on the idea that only a few dominant weights within a layer are required to approximately preserve the output. Approaches of this flavor have been investigated by Lebedev & Lempitsky (2016); Dong et al. (2017), e.g., by embedding sparsity as a constraint (Iandola et al., 2016; Aghasi et al., 2017; Lin et al., 2017). Another related approach is that of Han et al. (2015), which considers a combination of weight pruning and weight sharing methods. 
Nevertheless, prior work in weight pruning lacks rigorous theoretical analysis of the effect that the discarded weights can have on the compressed network. To the best of our knowledge, our work is the first to introduce a practical, sampling-based weight pruning algorithm with provable guarantees. Generalization The generalization properties of neural networks have been extensively investigated in various contexts (Dziugaite & Roy, 2017; Neyshabur et al., 2017a; Bartlett et al., 2017). However, as was pointed out by Neyshabur et al. (2017b), current approaches to obtaining non-vacuous generalization bounds do not fully or accurately capture the empirical success of state-of-the-art neural network architectures. Recently, Arora et al. (2018) and Zhou et al. (2018) highlighted the close connection between compressibility and generalization of neural networks. Arora et al. (2018) presented a compression method based on the Johnson-Lindenstrauss (JL) Lemma (Johnson & Lindenstrauss, 1984) and proved generalization bounds based on succinct reparameterizations of the original neural network. Building upon the work of Arora et al. (2018), we extend our theoretical compression results to establish novel generalization bounds for fully-connected neural networks. Unlike the method of Arora et al. (2018), which exhibits guarantees of the compressed network's performance only on the set of training points, our method's guarantees hold (probabilistically) for any random point drawn from the distribution. In addition, we establish that our method can ε-approximate the neural network output neuron-wise, which is stronger than the norm-based guarantee of Arora et al. (2018). In contrast to prior work, this paper addresses the problem of compressing a fully-connected neural network while provably preserving the network's output. Unlike previous theoretically-grounded compression approaches – which provide guarantees in terms of the normed difference –, our method provides the stronger entry-wise approximation guarantee, even for points outside of the available data set. As our empirical results show, ensuring that the output of the compressed network entry-wise approximates that of the original network is critical to retaining high classification accuracy. Overall, our compression approach remedies the shortcomings of prior approaches in that it (i) exhibits favorable theoretical properties, (ii) is computationally efficient, e.g., does not require retraining of the neural network, (iii) is easy to implement, and (iv) can be used in conjunction with other compression approaches – such as quantization or Huffman coding – to obtain further improved compression rates. 3 PROBLEM DEFINITION 3.1 FULLY-CONNECTED NEURAL NETWORKS A feedforward fully-connected neural network with $L \in \mathbb{N}_+$ layers and parameters $\theta$ defines a mapping $f_\theta : \mathcal{X} \to \mathcal{Y}$ for a given input $x \in \mathcal{X} \subseteq \mathbb{R}^d$ to an output $y \in \mathcal{Y} \subseteq \mathbb{R}^k$ as follows. Let $\eta^\ell \in \mathbb{N}_+$ denote the number of neurons in layer $\ell \in [L]$, where $[L] = \{1, \ldots, L\}$ denotes the index set, and where $\eta^1 = d$ and $\eta^L = k$. Further, let $\eta = \sum_{\ell=2}^{L} \eta^\ell$ and $\eta^* = \max_{\ell \in \{2, \ldots, L\}} \eta^\ell$. For layers $\ell \in \{2, \ldots, L\}$, let $W^\ell \in \mathbb{R}^{\eta^\ell \times \eta^{\ell-1}}$ be the weight matrix for layer $\ell$ with entries denoted by $w^\ell_{ij}$, rows denoted by $w^\ell_i \in \mathbb{R}^{1 \times \eta^{\ell-1}}$, and $\theta = (W^2, \ldots, W^L)$. For notational simplicity, we assume that the bias is embedded in the weight matrix. Then for an input vector $x \in \mathbb{R}^d$, let $a^1 = x$ and $z^\ell = W^\ell a^{\ell-1} \in \mathbb{R}^{\eta^\ell}$ for all $\ell \in \{2, \ldots, L\}$, where $a^{\ell-1} = \phi(z^{\ell-1}) \in \mathbb{R}^{\eta^{\ell-1}}$ denotes the activation. 
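As a concrete reference for the notation above, the following is a minimal sketch of the forward pass f_theta(x) for a fully-connected network with the ReLU activation the paper adopts (stated just below); it assumes NumPy, and all function and variable names are illustrative rather than taken from the paper.

import numpy as np

def relu(z):
    # phi(z) = max(z, 0), applied entry-wise
    return np.maximum(z, 0.0)

def forward(theta, x):
    # theta is the tuple (W^2, ..., W^L); each W^l has shape (eta_l, eta_{l-1}),
    # with the bias assumed to be embedded in the weight matrix as in Sec. 3.1.
    a = np.asarray(x, dtype=float)          # a^1 = x
    z = a
    L = len(theta) + 1
    for l, W in enumerate(theta, start=2):
        z = W @ a                           # z^l = W^l a^{l-1}
        a = relu(z) if l < L else z         # ReLU between layers; the output is z^L
    return z                                # f_theta(x) = z^L; classification uses argmax_i z^L_i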
We consider the activation function to be the Rectified Linear Unit (ReLU) function, i.e., φ(·) = max{· , 0} (entry-wise, if the input is a vector). The output of the network for an input x is fθ(x) = zL, and in particular, for classification tasks the prediction is argmaxi∈[k] fθ(x)i = argmaxi∈[k] z L i . 3.2 NEURAL NETWORK CORESET PROBLEM Consider the setting where a neural network fθ(·) has been trained on a training set of independent and identically distributed (i.i.d.) samples from a joint distribution on X × Y , yielding parameters θ = (W 2, . . . ,WL). We further denote the input points of a validation data set as P = {xi}ni=1 ⊆ X and the marginal distribution over the input space X as D. We define the size of the parameter tuple θ, nnz(θ), to be the sum of the number of non-zero entries in the weight matrices W 2, . . . ,WL. For any given ε, δ ∈ (0, 1), our overarching goal is to generate a reparameterization θ̂, yielding the neural network fθ̂(·), using a randomized algorithm, such that nnz(θ̂) nnz(θ), and the neural network output fθ(x), x ∼ D can be approximated up to 1 ± ε multiplicative error with probability greater than 1 − δ. We define the 1 ± ε multiplicative error between two k-dimensional vectors a, b ∈ Rk as the following entry-wise bound: a ∈ (1± ε)b ⇔ ai ∈ (1± ε)bi ∀i ∈ [k], and formalize the definition of an (ε, δ)-coreset as follows. Definition 1 ((ε, δ)-coreset). Given user-specified ε, δ ∈ (0, 1), a set of parameters θ̂ = (Ŵ 2, . . . , ŴL) is an (ε, δ)-coreset for the network parameterized by θ if for x ∼ D, it holds that P̂ θ,x (fθ̂(x) ∈ (1± ε)fθ(x)) ≥ 1− δ, where Pθ̂,x denotes a probability measure with respect to a random data point x and the output θ̂ generated by a randomized compression scheme. 4 METHOD In this section, we introduce our neural network compression algorithm as depicted in Alg. 1. Our method is based on an important sampling-scheme that extends traditional sensitivity-based coreset constructions to the application of compressing parameters. 4.1 CORENET Our method (Alg. 1) hinges on the insight that a validation set of data points P i.i.d.∼ Dn can be used to approximate the relative importance, i.e., sensitivity, of each weighted edge with respect to the input data distributionD. For this purpose, we first pick a subsample of the data points S ⊆ P of appropriate size (see Sec. 5 for details) and cache each neuron’s activation and compute a neuron-specific constant to be used to determine the required edge sampling complexity (Lines 2-6). Algorithm 1 CORENET Input: ε, δ ∈ (0, 1): error and failure probability, respectively; P ⊆ X : a set of n points from the input space X such that P i.i.d.∼ Dn; θ = (W 2, . . . ,WL): parameters of the original uncompressed neural network. Output: θ̂ = (Ŵ 2, . . . , ŴL): sparsified parameter set such that fθ̂(·) ∈ (1± ε)fθ(·) (see Sec. 5 for details). 1: ε′ ← ε 2 (L−1) ; η ∗ ← max`∈{2,...,L−1} η`; η ← ∑L `=2 η `; λ∗ ← log(η η∗)/2; 2: S ← Uniform sample (without replacement) of dlog (8 η η∗/δ) log(η η∗)e points from P; 3: a1(x)← x ∀x ∈ S; 4: for x ∈ S do 5: for ` ∈ {2, . . . , L} do 6: a`(x)← φ(W `a`−1(x)); ∆`i(x)← ∑ k∈[η`−1] |w ` ik a `−1 k (x)|∣∣∣∑ k∈[η`−1] w ` ik a`−1 k (x) ∣∣∣ ; 7: for ` ∈ {2, . . . , L} do 8: ∆̂` ← ( 1 |S| maxi∈[η`] ∑ x∈S ∆ ` i(x) ) + κ, where κ = √ 2λ∗ ( 1 + √ 2λ∗ log (8 η η ∗/δ) ) ; 9: Ŵ ` ← (~0, . . . 
,~0) ∈ Rη `×η`−1 ; ∆̂`→ ← ∏L k=` ∆̂ k; ε` ← ε ′ ∆̂`→ ; 10: for all i ∈ [η`] do 11: W+ ← {j ∈ [η`−1] : w`ij > 0}; W− ← {j ∈ [η`−1] : w`ij < 0}; 12: ŵ`+i ← SPARSIFY(W+, w ` i , ε`, δ,S, a`−1); ŵ`−i ← SPARSIFY(W−,−w ` i , ε`, δ,S, a`−1); 13: ŵ`i ← ŵ`+i − ŵ `− i ; Ŵ ` i• ← ŵ`i ; . Consolidate the weights into the ith row of Ŵ `; 14: return θ̂ = (Ŵ 2, . . . , ŴL); Algorithm 2 SPARSIFY(W, w, ε, δ,S, a(·)) Input: W ⊆ [η`−1]: index set; w ∈ R1×η `−1 : row vector corresponding to the weights incoming to node i ∈ [η`] in layer ` ∈ {2, . . . , L}; ε, δ ∈ (0, 1): error and failure probability, respectively; S ⊆ P: subsample of the original point set; a(·): cached activations of previous layer for all x ∈ S. Output: ŵ: sparse weight vector. 1: for j ∈ W do 2: sj ← maxx∈S wjaj(x)∑ k∈W wkak(x) ; . Compute the sensitivity of each edge 3: S ← ∑ j∈W sj ; 4: for j ∈ W do . Generate the importance sampling distribution over the incoming edges 5: qj ← sjS ; 6: m← ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ ; . Compute the number of required samples 7: C ← a multiset of m samples fromW where each j ∈ W is sampled with probability qj ; 8: ŵ ← (0, . . . , 0) ∈ R1×η `−1 ; . Initialize the compressed weight vector 9: for j ∈ C do . Update the entries of the sparsified weight matrix according to the samples C 10: ŵj ← ŵj + wjmqj ; . Entries are reweighted by 1 mqj to ensure unbiasedness of our estimator 11: return ŵ; Subsequently, we apply our core sampling scheme to sparsify the set of incoming weighted edges to each neuron in all layers (Lines 7-13). For technical reasons (see Sec. 5), we perform the sparsification on the positive and negative weighted edges separately and then consolidate the results (Lines 11- 13). By repeating this procedure for all neurons in every layer, we obtain a set θ̂ = (Ŵ 2, . . . , ŴL) of sparse weight matrices such that the output of each layer and the entire network is approximately preserved, i.e., Ŵ `â`−1(x) ≈W `a`−1(x) and fθ̂(x) ≈ fθ(x), respectively 1. 1â`−1(x) denotes the approximation from previous layers for an input x ∼ D; see Sec. 5 for details. 4.2 SPARSIFYING WEIGHTS The crux of our compression scheme lies in Alg. 2 (invoked twice on Line 12, Alg. 1) and in particular, in the importance sampling scheme used to select a small subset of edges of high importance. The cached activations are used to compute the sensitivity, i.e., relative importance, of each considered incoming edge j ∈ W to neuron i ∈ [η`], ` ∈ {2, . . . , L} (Alg. 2, Lines 1-2). The relative importance of each edge j is computed as the maximum (over x ∈ S) ratio of the edge’s contribution to the sum of contributions of all edges. In other words, the sensitivity sj of an edge j captures the highest (relative) impact j had on the output of neuron i ∈ [η`] in layer ` across all x ∈ S . The sensitivities are then used to compute an importance sampling distribution over the incoming weighted edges (Lines 4-5). The intuition behind the importance sampling distribution is that if sj is high, then edge j is more likely to have a high impact on the output of neuron i, therefore we should keep edge j with a higher probability. m edges are then sampled with replacement (Lines 6-7) and the sampled weights are then reweighed to ensure unbiasedness of our estimator (Lines 9-10). 4.3 EXTENSIONS: NEURON PRUNING AND AMPLIFICATION In this subsection we outline two improvements to our algorithm that that do not violate any of our theoretical properties and may improve compression rates in practical settings. 
Neuron pruning (CoreNet+) Similar to removing redundant edges, we can use the empirical activations to gauge the importance of each neuron. In particular, if the maximum activation (over all evaluations x ∈ S) of a neuron is equal to 0, then the neuron – along with all of the incoming and outgoing edges – can be pruned without significantly affecting the output with reasonable probability. This intuition can be made rigorous under the assumptions outlined in Sec. 5. Amplification (CoreNet++) Coresets that provide stronger approximation guarantees can be constructed via amplification – the procedure of constructing multiple approximations (coresets) (ŵ`i )1, . . . , (ŵ ` i )τ over τ trials, and picking the best one. To evaluate the quality of each approximation, a different subset T ⊆ P \ S can be used to infer performance. In practice, amplification would entail constructing multiple approximations by executing Line 12 of Alg. 1 and picking the one that achieves the lowest relative error on T . 5 ANALYSIS In this section, we establish the theoretical guarantees of our neural network compression algorithm (Alg. 1). The full proofs of all the claims presented in this section can be found in the Appendix. 5.1 PRELIMINARIES Let x ∼ D be a randomly drawn input point. We explicitly refer to the pre-activation and activation values at layer ` ∈ {2, . . . , `} with respect to the input x ∈ supp(D) as z`(x) and a`(x), respectively. The values of z`(x) and a`(x) at each layer ` will depend on whether or not we compressed the previous layers `′ ∈ {2, . . . , `}. To formalize this interdependency, we let ẑ`(x) and â`(x) denote the respective quantities of layer ` when we replace the weight matrices W 2, . . . ,W ` in layers 2, . . . , ` by Ŵ 2, . . . , Ŵ `, respectively. For the remainder of this section (Sec. 5) we let ` ∈ {2, . . . , L} be an arbitrary layer and let i ∈ [η`] be an arbitrary neuron in layer `. For purposes of clarity and readability, we will omit the the variable denoting the layer ` ∈ {2, . . . , L}, the neuron i ∈ [η`], and the incoming edge index j ∈ [η`−1], whenever they are clear from the context. For example, when referring to the intermediate value of a neuron i ∈ [η`] in layer ` ∈ {2, . . . , L}, z`i (x) = 〈w`i , â`−1(x)〉 ∈ R with respect to a point x, we will simply write z(x) = 〈w, a(x)〉 ∈ R, where w := w`i ∈ R1×η `−1 and a(x) := a`−1(x) ∈ Rη`−1×1. Under this notation, the weight of an incoming edge j is denoted by wj ∈ R. 5.2 IMPORTANCE SAMPLING BOUNDS FOR POSITIVE WEIGHTS In this subsection, we establish approximation guarantees under the assumption that the weights are positive. Moreover, we will also assume that the input, i.e., the activation from the previous layer, is non-negative (entry-wise). The subsequent subsection will then relax these assumptions to conclude that a neuron’s value can be approximated well even when the weights and activations are not all positive and non-negative, respectively. Let W = {j ∈ [η`−1] : wj > 0} ⊆ [η`−1] be the set of indices of incoming edges with strictly positive weights. To sample the incoming edges to a neuron, we quantify the relative importance of each edge as follows. Definition 2 (Relative Importance). The importance of an incoming edge j ∈ W with respect to an input x ∈ supp(D) is given by the function gj(x), where gj(x) = wj aj(x)∑ k∈W wk ak(x) ∀j ∈ W. Note that gj(x) is a function of the random variable x ∼ D. 
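To make Definition 2 and the sampling step of Alg. 2 concrete, here is a minimal sketch for a single neuron with strictly positive incoming weights, assuming NumPy; the empirical sensitivity s_j = max_{x in S} g_j(x) used here is formalized as Definition 3 below, the shortened sample-size constant and all names are illustrative, and A_S plays the role of the cached activations a^{l-1}(x) for x in S.

import numpy as np

def sparsify_positive(w, A_S, eps, m=None, seed=0):
    # w   : (d,) strictly positive weights incoming to one neuron (the index set W).
    # A_S : (|S|, d) cached nonnegative activations a^{l-1}(x) for the points x in S;
    #       each row is assumed to give a nonzero weighted sum.
    rng = np.random.default_rng(seed)
    contrib = A_S * w                                   # w_j * a_j(x) for every x in S
    g = contrib / contrib.sum(axis=1, keepdims=True)    # relative importance g_j(x) (Definition 2)
    s = g.max(axis=0)                                   # empirical sensitivity s_j (Definition 3)
    S_sum = s.sum()
    q = s / S_sum                                       # importance-sampling distribution (Alg. 2, Lines 4-5)
    if m is None:
        m = int(np.ceil(8.0 * S_sum / eps**2))          # placeholder for the full sample-size formula of Alg. 2, Line 6
    idx = rng.choice(w.size, size=m, p=q)               # sample m edges with replacement (Line 7)
    w_hat = np.zeros(w.size, dtype=float)
    np.add.at(w_hat, idx, w[idx] / (m * q[idx]))        # reweight by 1/(m q_j) for unbiasedness (Line 10)
    return w_hat

In Alg. 1 this routine would be invoked once on the positive weights and once on the negated negative weights of each row, with the two sparse vectors then subtracted (Lines 11-13).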
We now present our first assumption that pertains to the Cumulative Distribution Function (CDF) of the relative importance random variable. Assumption 1. For all j ∈ W , the CDF of the random variable gj(x), denoted by Fj (·), satisfies Fj (M/K) ≤ exp(−1/K), where M = min{x ∈ [0, 1] : Fj (x) = 1}, and K ∈ [2, log(η η∗)] is a universal constant.2 Assumption 1 is a technical assumption on the ratio of the weighted activations that will enable us to rule out pathological problem instances where the relative importance of each edge cannot be well-approximated using a small number of data points S ⊆ P . Henceforth, we consider a uniformly drawn (without replacement) subsample S ⊆ P as in Line 2 of Alg. 1, where |S| = dlog (8 η η∗/δ) log(η η∗)e, and define the sensitivity of an edge as follows. Definition 3 (Empirical Sensitivity). Let S ⊆ P be a subset of distinct points from P i.i.d.∼ Dn.Then, the sensitivity over positive edges j ∈ W directed to a neuron is defined as sj = maxx∈S gj(x). Our first lemma establishes a core result that relates the weighted sum with respect to the sparse row vector ŵ, ∑ k∈W ŵk âk(x), to the value of the of the weighted sum with respect to the ground-truth row vector w, ∑ k∈W wk âk(x). We remark that there is randomness with respect to the randomly generated row vector ŵ`i , a randomly drawn input x ∼ D, and the function â(·) = â`−1(·) defined by the randomly generated matrices Ŵ 2, . . . , Ŵ `−1 in the previous layers. Unless otherwise stated, we will henceforth use the shorthand notation P(·) to denote Pŵ`, x, â`−1(·). Moreover, for ease of presentation, we will first condition on the event E1/2 that â(x) ∈ (1± 1/2)a(x) holds. This conditioning will simplify the preliminary analysis and will be removed in our subsequent results. Lemma 1 (Positive-Weights Sparsification). Let ε, δ ∈ (0, 1), and x ∼ D. SPARSIFY(W, w, ε, δ,S, a(·)) generates a row vector ŵ such that P (∑ k∈W ŵk âk(x) /∈ (1± ε) ∑ k∈W wk âk(x) | E1/2 ) ≤ 3δ 8η where nnz(ŵ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ , and S = ∑ j∈W sj . 5.3 IMPORTANCE SAMPLING BOUNDS We now relax the requirement that the weights are strictly positive and instead consider the following index sets that partition the weighted edges: W+ = {j ∈ [η`−1] : wj > 0} andW− = {j ∈ [η`−1] : wj < 0}. We still assume that the incoming activations from the previous layers are positive (this assumption can be relaxed as discussed in Appendix A.2.4). We define ∆`i(x) for a point x ∼ D and neuron i ∈ [η`] as ∆`i(x) = ∑ k∈[η`−1] |w ` ik a `−1 k (x)| |∑k∈[η`−1] w`ik a`−1k (x)| . The following assumption serves a similar purpose as does Assumption 1 in that it enables us to approximate the random variable ∆`i(x) via an empirical estimate over a small-sized sample of data points S ⊆ P . Assumption 2 (Subexponentiality of ∆`i(x)). For any layer ` ∈ {2, . . . , L} and neuron i ∈ [η`], the centered random variable ∆ = ∆`i(x) − E x∼D[∆`i(x)] is subexponential (Vershynin, 2016) with parameter λ ≤ log(η η∗)/2, i.e., E [exp (s∆)] ≤ exp(s2λ2) ∀|s| ≤ 1λ . 2 2The upper bound of log(ηη∗) for K and λ can be considered somewhat arbitrary in the sense that, more generally, we only require that K,λ ∈ O(polylog(ηη∗|P|). Defining the upper bound in this way simplifies the presentation of the core ideas without having to deal with the constants involved in the asymptotic notation. For ε ∈ (0, 1) and ` ∈ {2, . . . , L}, we let ε′ = ε2 (L−1) and define ε` = ε′ ∆̂`→ = ε 2 (L−1) ∏L k=` ∆̂ k , where ∆̂` = ( 1 |S| maxi∈[η`] ∑ x′∈S ∆ ` i(x ′) ) + κ. 
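Since the inline notation above is hard to parse after extraction, the quantities it defines can be restated as follows; this is a reconstruction from the surrounding text, not a new result.

\[
\Delta_i^\ell(x) = \frac{\sum_{k\in[\eta^{\ell-1}]} \bigl|w_{ik}^{\ell}\, a_k^{\ell-1}(x)\bigr|}{\bigl|\sum_{k\in[\eta^{\ell-1}]} w_{ik}^{\ell}\, a_k^{\ell-1}(x)\bigr|},
\qquad
\hat{\Delta}^\ell = \frac{1}{|S|}\max_{i\in[\eta^\ell]}\sum_{x'\in S}\Delta_i^\ell(x') + \kappa,
\qquad
\kappa = \sqrt{2\lambda_*}\Bigl(1+\sqrt{2\lambda_*}\,\log\bigl(8\,\eta\,\eta^*/\delta\bigr)\Bigr),
\]
\[
\varepsilon' = \frac{\varepsilon}{2(L-1)},
\qquad
\hat{\Delta}^{\ell\to} = \prod_{k=\ell}^{L}\hat{\Delta}^{k},
\qquad
\varepsilon_\ell = \frac{\varepsilon'}{\hat{\Delta}^{\ell\to}} = \frac{\varepsilon}{2(L-1)\prod_{k=\ell}^{L}\hat{\Delta}^{k}}.
\]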
To formalize the interlayer dependencies, for each i ∈ [η`] we let E`i denote the (desirable) event that ẑ`i (x) ∈ (1± 2 (`− 1) ε`+1) z`i (x) holds, and let E` = ∩i∈[η`] E`i be the intersection over the events corresponding to each neuron in layer `. Lemma 2 (Conditional Neuron Value Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, i ∈ [η`], and x ∼ D. CORENET generates a row vector ŵ`i = ŵ `+ i − ŵ `− i ∈ R1×η `−1 such that P ( E`i | E`−1 ) = P ( ẑ`i (x) ∈ (1± 2 (`− 1) ε`+1) z`i (x) | E`−1 ) ≥ 1− δ/η, (1) where ε` = ε ′ ∆̂`→ and nnz(ŵ`i ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε`2 ⌉ + 1, where S = ∑ j∈W+ sj + ∑ j∈W− sj . The following core result establishes unconditional layer-wise approximation guarantees and culminates in our main compression theorem. Lemma 3 (Layer-wise Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, and x ∼ D. CORENET generates a sparse weight matrix Ŵ ` ∈ Rη`×η`−1 such that, for ẑ`(x) = Ŵ `â`(x), P (Ŵ 2,...,Ŵ `), x (E`) = P (Ŵ 2,...,Ŵ `), x ( ẑ`(x) ∈ (1± 2 (`− 1) ε`+1) z`(x) ) ≥ 1− δ ∑` `′=2 η `′ η . Theorem 4 (Network Compression). For ε, δ ∈ (0, 1), Algorithm 1 generates a set of parameters θ̂ = (Ŵ 2, . . . , ŴL) of size nnz(θ̂) ≤ L∑ `=2 η`∑ i=1 (⌈ 32 (L− 1)2 (∆̂`→)2 S`i log(η η∗) log(8 η/δ) ε2 ⌉ + 1 ) in O ( η η∗ log ( η η∗/δ )) time such that Pθ̂, x∼D ( fθ̂(x) ∈ (1± ε)fθ(x) ) ≥ 1− δ. We note that we can obtain a guarantee for a set of n randomly drawn points by invoking Theorem 4 with δ′ = δ/n and union-bounding over the failure probabilities, while only increasing the sampling complexity logarithmically, as formalized in Corollary 12, Appendix A.2. 5.4 GENERALIZATION BOUNDS As a corollary to our main results, we obtain novel generalization bounds for neural networks in terms of empirical sensitivity. Following the terminology of Arora et al. (2018), the expected margin loss of a classifier fθ : Rd → Rk parameterized by θ with respect to a desired margin γ > 0 and distribution D is defined by Lγ(fθ) = P(x,y)∼DX ,Y (fθ(x)y ≤ γ + maxi 6=y fθ(x)i). We let L̂γ denote the empirical estimate of the margin loss. The following corollary follows directly from the argument presented in Arora et al. (2018) and Theorem 4. Corollary 5 (Generalization Bounds). For any δ ∈ (0, 1) and margin γ > 0, Alg. 1 generates weights θ̂ such that with probability at least 1 − δ, the expected error L0(fθ̂) with respect to the points in P ⊆ X , |P| = n, is bounded by L0(fθ̂) ≤ L̂γ(fθ) + Õ √maxx∈P ‖fθ(x)‖22 L2 ∑L`=2(∆̂`→)2 ∑η`i=1 S`i γ2 n . 6 RESULTS In this section, we evaluate the practical effectiveness of our compression algorithm on popular benchmark data sets (MNIST (LeCun et al., 1998), FashionMNIST (Xiao et al., 2017), and CIFAR10 (Krizhevsky & Hinton, 2009)) and varying fully-connected trained neural network configurations: 2 to 5 hidden layers, 100 to 1000 hidden units, either fixed hidden sizes or decreasing hidden size denoted by pyramid in the figures. We further compare the effectiveness of our sampling scheme in reducing the number of non-zero parameters of a network, i.e., in sparsifying the weight matrices, to that of uniform sampling, Singular Value Decomposition (SVD), and current state-of-the-art sampling schemes for matrix sparsification (Drineas & Zouzias, 2011; Achlioptas et al., 2013; Kundu & Drineas, 2014), which are based on matrix norms – `1 and `2 (Frobenius). The details of the experimental setup and results of additional evaluations may be found in Appendix B. 
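As a reference point for the experiments that follow, the sketch below evaluates the sparsity budget promised by Lemma 2 and Theorem 4; it is a direct transcription of the stated formulas with illustrative names, assuming the per-neuron sensitivity sums and the products Delta_hat^{l->} have already been computed.

import numpy as np

def per_neuron_sample_size(S_sum, eta, eta_star, delta, eps_l):
    # Lemma 2: nnz(w_hat_i^l) <= ceil(8 * S * log(eta*eta_star) * log(8*eta/delta) / eps_l^2) + 1
    return int(np.ceil(8.0 * S_sum * np.log(eta * eta_star) * np.log(8.0 * eta / delta) / eps_l**2)) + 1

def theorem4_nnz_bound(sens_sums, delta_hat_fwd, L, eta, eta_star, delta, eps):
    # sens_sums[l]     : list of S_i^l, the sensitivity sum for each neuron i of layer l (l = 2..L)
    # delta_hat_fwd[l] : Delta_hat^{l->} = prod_{k=l}^{L} Delta_hat^k
    total = 0
    for l in range(2, L + 1):
        for S_i in sens_sums[l]:
            total += int(np.ceil(32.0 * (L - 1) ** 2 * delta_hat_fwd[l] ** 2 * S_i
                                 * np.log(eta * eta_star) * np.log(8.0 * eta / delta) / eps**2)) + 1
    return total

Summing the (L-1)^2 per-neuron factor over the L-1 compressed layers means the stated bound grows roughly cubically in depth, up to the Delta_hat products and the sensitivity sums.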
Experiment Setup We compare against three variations of our compression algorithm: (i) sole edge sampling (CoreNet), (ii) edge sampling with neuron pruning (CoreNet+), and (iii) edge sampling with neuron pruning and amplification (CoreNet++). For comparison, we evaluated the average relative error in output (`1-norm) and average drop in classification accuracy relative to the accuracy of the uncompressed network. Both metrics were evaluated on a previously unseen test set. Results Results for varying architectures and datasets are depicted in Figures 1 and 2 for the average drop in classification accuracy and relative error (`1-norm), respectively. As apparent from Figure 1, we are able to compress networks to about 15% of their original size without significant loss of accuracy for networks trained on MNIST and FashionMNIST, and to about 50% of their original size for CIFAR. Discussion The simulation results presented in this section validate our theoretical results established in Sec. 5. In particular, our empirical results indicate that we are able to outperform networks compressed via competing methods in matrix sparsification across all considered experiments and trials. The results presented in this section further suggest that empirical sensitivity can effectively capture the relative importance of neural network parameters, leading to a more informed importance sampling scheme. Moreover, the relative performance of our algorithm tends to increase as we consider deeper architectures. These findings suggest that our algorithm may also be effective in compressing modern convolutional architectures, which tend to be very deep. 7 CONCLUSION We presented a coresets-based neural network compression algorithm for compressing the parameters of a trained fully-connected neural network in a manner that approximately preserves the network’s output. Our method and analysis extend traditional coreset constructions to the application of compressing parameters, which may be of independent interest. Our work distinguishes itself from prior approaches in that it establishes theoretical guarantees on the approximation accuracy and size of the generated compressed network. As a corollary to our analysis, we obtain generalization bounds for neural networks, which may provide novel insights on the generalization properties of neural networks. We empirically demonstrated the practical effectiveness of our compression algorithm on a variety of neural network configurations and real-world data sets. In future work, we plan to extend our algorithm and analysis to compress Convolutional Neural Networks (CNNs) and other network architectures. We conjecture that our compression algorithm can be used to reduce storage requirements of neural network models and enable fast inference in practical settings. ACKNOWLEDGMENTS This research was supported in part by the National Science Foundation award IIS-1723943. We thank Brandon Araki and Kiran Vodrahalli for valuable discussions and helpful suggestions. We would also like to thank Kasper Green Larsen, Alexander Mathiasen, and Allan Gronlund for pointing out an error in an earlier formulation of Lemma 6. A PROOFS OF THE ANALYTICAL RESULTS IN SECTION 5 This section includes the full proofs of the technical results given in Sec. 5. 
A.1 ANALYTICAL RESULTS FOR SECTION 5.2 (IMPORTANCE SAMPLING BOUNDS FOR POSITIVE WEIGHTS) A.1.1 ORDER STATISTIC SAMPLING We now establish a couple of technical results that will quantify the accuracy of our approximations of edge importance (i.e., sensitivity). Lemma 6. Let K > 0 be a universal constant and let D be a distribution with CDF F (·) satisfying F (M/K) ≤ exp(−1/K), where M = min{x ∈ [0, 1] : F (x) = 1}. Let P = {X1, . . . , Xn} be a set of n = |P| i.i.d. samples each drawn from the distribution D. Let Xn+1 ∼ D be an i.i.d. sample. Then, P ( K max X∈P X < Xn+1 ) ≤ exp(−n/K) Proof. Let Xmax = maxX∈P ; then, P(KXmax < Xn+1) = ∫ M 0 P(Xmax < x/K|Xn+1 = x) dP(x) = ∫ M 0 P (X < x/K)n dP(x) since X1, . . . , Xn are i.i.d. ≤ ∫ M 0 F (x/K)n dP(x) where F (·) is the CDF of X ∼ D ≤ F (M/K)n ∫ M 0 dP(x) by monotonicity of F = F (M/K)n ≤ exp(−n/K) CDF Assumption, and this completes the proof. We now proceed to establish that the notion of empirical sensitivity is a good approximation for the relative importance. For this purpose, let the relative importance ĝj(x) of an edge j after the previous layers have already been compressed be ĝj(x) = wj âj(x)∑ k∈W wk âk(x) . Lemma 7 (Empirical Sensitivity Approximation). Let ε ∈ (0, 1/2), δ ∈ (0, 1), ` ∈ {2, . . . , L}, Consider a set S = {x1, . . . , xn} ⊆ P of size |S| ≥ dlog (8 η η∗/δ) log(η η∗)e. Then, conditioned on the event E1/2 occurring, i.e., â(x) ∈ (1± 1/2)a(x), P x∼D ( ∃j ∈ W : C sj < ĝj(x) | E1/2 ) ≤ δ 8 η , where C = 3 log(η η∗) andW ⊆ [η`−1]. Proof. Consider an arbitrary j ∈ W and x′ ∈ S corresponding to gj(x′) with CDF Fj (·) and recall that M = min{x ∈ [0, 1] : Fj (x) = 1} as in Assumption 1. Note that by Assumption 1, we have F (M/K) ≤ exp(−1/K), and so the random variables gj(x′) for x′ ∈ S satisfy the CDF condition required by Lemma 6. Now let E be the event that K sj < gj(x) holds. Applying Lemma 6, we obtain P(E) = P(K sj < gj(x)) = P ( K max x′∈S gj(x ′) < gj(x) ) ≤ exp(−|S|/K). Now let Ê denote the event that the inequality Csj < ĝj(x) = wj âj(x)∑ k∈W wk âk(x) holds and note that the right side of the inequality is defined with respect to ĝj(x) and not gj(x). Observe that since we conditioned on the event E1/2, we have that â(x) ∈ (1± 1/2)a(x). Now assume that event Ê holds and note that by the implication above, we have C sj < ĝj(x) = wj âj(x)∑ k∈W wk âk(x) ≤ (1 + 1/2)wj aj(x) (1− 1/2) ∑ k∈W wk ak(x) ≤ 3 · wj aj(x)∑ k∈W wk ak(x) = 3 gj(x). where the second inequality follows from the fact that 1+1/2/1−1/2 ≤ 3. Moreover, since we know that C ≥ 3K, we conclude that if event Ê occurs, we obtain the inequality 3K sj ≤ 3 gj(x)⇔ K sj ≤ gj(x), which is precisely the definition of event E . Thus, we have shown the conditional implication ( Ê | E1/2 ) ⇒ E , which implies that P(Ê | E1/2) = P(C sj < ĝj(x) | E1/2) ≤ P(E) ≤ exp(−|S|/K). Since our choice of j ∈ W was arbitrary, the bound applies for any j ∈ W . Thus, we have by the union bound P(∃j ∈ W : C sj < ĝj(x) | E1/2) ≤ ∑ j∈W P(C sj < ĝj(x) | E1/2) ≤ |W| exp(−|S|/K) = ( |W| η∗ ) δ 8η ≤ δ 8η . In practice, the set S referenced above is chosen to be a subset of the original data points, i.e., S ⊆ P (see Alg. 1, Line 2). Thus, we henceforth assume that the size of the input points |P| is large enough (or the specified parameter δ ∈ (0, 1) is sufficiently large) so that |P| ≥ |S|. A.1.2 PROOF OF LEMMA 1 We now state the proof of Lemma 1. In this subsection, we establish approximation guarantees under the assumption that the weights are strictly positive. 
The next subsection will then relax this assumption to conclude that a neuron’s value can be approximated well even when the weights are not all positive. Lemma 1 (Positive-Weights Sparsification). Let ε, δ ∈ (0, 1), and x ∼ D. SPARSIFY(W, w, ε, δ,S, a(·)) generates a row vector ŵ such that P (∑ k∈W ŵk âk(x) /∈ (1± ε) ∑ k∈W wk âk(x) | E1/2 ) ≤ 3δ 8η where nnz(ŵ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ , and S = ∑ j∈W sj . Proof. Let ε, δ ∈ (0, 1) be arbitrary. Moreover, let C be the coreset with respect to the weight indices W ⊆ [η`−1] used to construct ŵ. Note that as in SPARSIFY, C is a multiset sampled fromW of size m = ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ , where S = ∑ j∈W sj and C is sampled according to the probability distribution q defined by qj = sj S ∀j ∈ W. Let â(·) be an arbitrary realization of the random variable â`−1(·), let x be a realization of x ∼ D, and let ẑ = ∑ k∈W ŵk âk(x) be the approximate intermediate value corresponding to the sparsified matrix ŵ and let z̃ = ∑ k∈W wk âk(x). Now define E to be the (favorable) event that ẑ ε-approximates z̃, i.e., ẑ ∈ (1±ε)z̃, We will now show that the complement of this event, Ec, occurs with sufficiently small probability. Let Z ⊆ supp(D) be the set of well-behaved points (defined implicitly with respect to neuron i ∈ [η`] and realization â) and defined as follows: Z = {x′ ∈ supp(D) : ĝj(x′) ≤ Csj ∀j ∈ W} , where C = 3 log(η η∗). Let EZ denote the event that x ∈ Z where x is a realization of x ∼ D. Conditioned on EZ , event Ec occurs with probability ≤ δ4η : Let x be a realization of x ∼ D such that x ∈ Z and let C = {c1, . . . , cm} be m samples fromW with respect to distribution q as before. Define m random variables Tc1 , . . . , Tcm such that for all j ∈ C Tj = wj âj(x) mqj = S wj âj(x) msj . (2) For any j ∈ C, we have for the conditional expectation of Tj : E [Tj | â(·),x, EZ , E1/2] = ∑ k∈W wk âk(x) mqk · qk = ∑ k∈W wk âk(x) m = z̃ m , where we use the expectation notation E [·] with the understanding that it denotes the conditional expectation E C | âl−1(·), x [·]. Moreover, we also note that conditioning on the event EZ (i.e., the event that x ∈ Z) does not affect the expectation of Tj . Let T = ∑ j∈C Tj = ẑ denote our approximation and note that by linearity of expectation, E [T | â(·),x, EZ , E1/2] = ∑ j∈C E [Tj | â(·),x, EZ , E1/2] = z̃ Thus, ẑ = T is an unbiased estimator of z̃ for any realization â(·) and x; thus, we will henceforth refer to E [T | â(·), x] as simply z̃ for brevity. For the remainder of the proof we will assume that z̃ > 0, since otherwise, z̃ = 0 if and only if Tj = 0 for all j ∈ C almost surely, which follows by the fact that Tj ≥ 0 for all j ∈ C by definition ofW and the non-negativity of the ReLU activation. Therefore, in the case that z̃ = 0, it follows that P(|ẑ − z̃| > εz̃ | â(·),x) = P(ẑ > 0 | â(·),x) = P(0 > 0) = 0, which trivially yields the statement of the lemma, where in the above expression, P(·) is short-hand for the conditional probability Pŵ | âl−1(·), x(·). We now proceed with the case where z̃ > 0 and leverage the fact that x ∈ Z3 to establish that for all j ∈ W : Csj ≥ ĝj(x) = wj âj(x)∑ k∈W wk âk(x) = wj âj(x) z̃ 3Since we conditioned on the event EZ . ⇔ wj âj(x) sj ≤ C z̃. 
(3) Utilizing the inequality established above, we bound the conditional variance of each Tj , j ∈ C as follows Var(Tj | â(·),x, EZ , E1/2) ≤ E [(Tj)2 | â(·),x, EZ , E1/2] = ∑ k∈W (wk âk(x)) 2 (mqk)2 · qk = S m2 ∑ k∈W (wk âk(x)) 2 sk ≤ S m2 (∑ k∈W wk âk(x) ) C z̃ = S C z̃2 m2 , where Var(·) is short-hand for VarC | âl−1(·), x (·). Since T is a sum of (conditionally) independent random variables, we obtain Var(T | â(·),x, EZ , E1/2) = mVar(Tj | â(·),x, EZ , E1/2) (4) ≤ S C z̃ 2 m . Now, for each j ∈ C let T̃j = Tj − E [Tj | â(·),x, EZ , E1/2] = Tj − z̃, and let T̃ = ∑ j∈C T̃j . Note that by the fact that we conditioned on the realization x of x such that x ∈ Z (event EZ ), we obtain by definition of Tj in (2) and the inequality (3): Tj = S wj âj(x) msj ≤ S C z̃ m . (5) We also have that S ≥ 1 by definition. More specifically, using the fact that the maximum over a set is greater than the average and rearranging sums, we obtain S = ∑ j∈W sj = ∑ j∈W max x′∈S gj(x ′) ≥ 1 |S| ∑ j∈W ∑ x′∈S gj(x ′) = 1 |S| ∑ x′∈S ∑ j∈W gj(x ′) = 1 |S| ∑ x′∈S 1 = 1. Thus, the inequality established in (5) with the fact that S ≥ 1 we obtain an upper bound on the absolute value of the centered random variables: |T̃j | = ∣∣∣∣Tj − z̃m ∣∣∣∣ ≤ S C z̃m = M, (6) which follows from the fact that: if Tj ≥ z̃m : Then, by our bound in (5) and the fact that z̃ m ≥ 0, it follows that ∣∣∣T̃j∣∣∣ = Tj − z̃ m ≤ S C z̃ m − z̃ m ≤ S C z̃ m . if Tj < z̃m : Then, using the fact that Tj ≥ 0 and S ≥ 1, we obtain∣∣∣T̃j∣∣∣ = z̃ m − Tj ≤ z̃ m ≤ S C z̃ m . Applying Bernstein’s inequality to both T̃ and −T̃ we have by symmetry and the union bound, P(Ec | â(·),x, EZ , E1/2) = P ( |T − z̃| ≥ εz̃ | â(·),x, EZ , E1/2 ) ≤ 2 exp ( − ε 2z̃2 2 Var(T | â(·),x) + 2 ε z̃M3 ) ≤ 2 exp ( − ε 2z̃2 2SC z̃2 m + 2S C z̃2 3m ) = 2 exp ( −3 ε 2m 8S C ) ≤ δ 4η , where the second inequality follows by our upper bounds on Var(T | â(·),x) and ∣∣∣T̃j∣∣∣ and the fact that ε ∈ (0, 1), and the last inequality follows by our choice of m = ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ . This establishes that for any realization â(·) of âl−1(·) and a realization x of x satisfying x ∈ Z , the event Ec occurs with probability at most δ4η . Removing the conditioning on EZ : We have by law of total probability P(E | â(·), E1/2) ≥ ∫ x∈Z P(E | â(·),x, EZ , E1/2) P x∼D (x = x | â(·), E1/2) dx ≥ ( 1− δ 4η )∫ x∈Z P x∼D (x = x | â(·), E1/2) dx = ( 1− δ 4η ) P x∼D (EZ | â(·), E1/2) ≥ ( 1− δ 4η )( 1− δ 8η ) ≥ 1− 3δ 8η where the second-to-last inequality follows from the fact that P(Ec | â(·),x, EZ , E1/2) ≤ δ4η as was established above and the last inequality follows by Lemma 7. Putting it all together Finally, we marginalize out the random variable â`−1(·) to establish P(E | E1/2) = ∫ â(·) P(E | â(·), E1/2)P(â(·) | E1/2) dâ(·) ≥ ( 1− 3δ 8η )∫ â(·) P(â(·) | E1/2) dâ(·) = 1− 3δ 8η . Consequently, P(Ec | E1/2) ≤ 1− ( 1− 3δ 8η ) = 3δ 8η , and this concludes the proof. A.2 ANALYTICAL RESULTS FOR SECTION 5.3 (IMPORTANCE SAMPLING BOUNDS) We begin by establishing an auxiliary result that we will need for the subsequent lemmas. A.2.1 EMPIRICAL ∆`i APPROXIMATION Lemma 8 (Empirical ∆`i Approximation). Let δ ∈ (0, 1), λ∗ = log(η η∗)/2, and define ∆̂` = ( 1 |S| max i∈[η`] ∑ x′∈S ∆`i(x ′) ) + κ, where κ = √ 2λ∗ ( 1 + √ 2λ∗ log (8 η η ∗/δ) ) and S ⊆ P is as in Alg. 1. Then, P x∼D ( max i∈[η`] ∆`i(x) ≤ ∆̂` ) ≥ 1− δ 4η . Proof. Define the random variables Yx′ = E [∆`i(x′)]−∆`i(x′) for each x′ ∈ S and consider the sum Y = ∑ x′∈S Yx′ = ∑ x′∈S ( E [∆`i(x)]−∆`i(x′) ) . 
We know that each random variable Yx′ satisfies E [Yx′ ] = 0 and by Assumption 2, is subexponential with parameter λ ≤ λ∗. Thus, Y is a sum of |S| independent, zero-mean λ∗-subexponential random variables, which implies that E [Y] = 0 and that we can readily apply Bernstein’s inequality for subexponential random variables (Vershynin, 2016) to obtain for t ≥ 0 P ( 1 |S| Y ≥ t ) ≤ exp ( −|S| min { t2 4λ2∗ , t 2λ∗ }) . Since S = dlog (8 η η∗/δ) log(η η∗)e ≥ log (8 η η∗/δ) 2λ∗, we have for t = √ 2λ∗, P ( E [∆`i(x)]− 1 |S| ∑ x′∈S ∆`i(x ′) ≥ t ) = P ( 1 |S| Y ≥ t ) ≤ exp ( −|S| t 2 4λ2∗ ) ≤ exp (− log (8 η η∗/δ)) = δ 8 η η∗ . Moreover, for a single Yx, we have by the equivalent definition of a subexponential random variable (Vershynin, 2016) that for u ≥ 0 P(∆`i(x)− E [∆`i(x)] ≥ u) ≤ exp ( −min { − u 2 4λ2∗ , u 2λ∗ }) . Thus, for u = 2λ∗ log (8 η η∗/δ) we obtain P(∆`i(x)− E [∆`i(x)] ≥ u) ≤ exp (− log (8 η η∗/δ)) = δ 8 η η∗ . Therefore, by the union bound, we have with probability at least 1− δ4η η∗ : ∆`i(x) ≤ E [∆`i(x)] + u ≤ ( 1 |S| ∑ x′∈S ∆`i(x ′) + t ) + u = 1 |S| ∑ x′∈S ∆`i(x ′) + (√ 2λ∗ + 2λ∗ log (8 η η ∗/δ) ) = 1 |S| ∑ x′∈S ∆`i(x ′) + κ ≤ ∆̂`, where the last inequality follows by definition of ∆̂`. Thus, by the union bound, we have P x∼D ( max i∈[η`] ∆`i(x) > ∆̂ ` ) = P ( ∃i ∈ [η`] : ∆`i(x) > ∆̂` ) ≤ ∑ i∈[η`] P ( ∆`i(x) > ∆̂ ` ) ≤ η` ( δ 4η η∗ ) ≤ δ 4 η , where the last line follows by definition of η∗ ≥ η`. A.2.2 NOTATION FOR THE SUBSEQUENT ANALYSIS Let ŵ`+i and ŵ `− i denote the sparsified row vectors generated when SPARSIFY is invoked with first two arguments corresponding to (W+, w`i ) and (W−,−w`i ), respectively (Alg. 1, Line 12). We will at times omit including the variables for the neuron i and layer ` in the proofs for clarity of exposition, and for example, refer to ŵ`+i and ŵ `− i as simply ŵ + and ŵ−, respectively. Let x ∼ D and define ẑ+(x) = ∑ k∈W+ ŵ+k âk(x) ≥ 0 and ẑ −(x) = ∑ k∈W− (−ŵ−k ) âk(x) ≥ 0 be the approximate intermediate values corresponding to the sparsified matrices ŵ+ and ŵ−; let z̃+(x) = ∑ k∈W+ wk âk(x) ≥ 0 and z̃−(x) = ∑ k∈W− (−wk) âk(x) ≥ 0 be the corresponding intermediate values with respect to the the original row vector w; and finally, let z+(x) = ∑ k∈W+ wk ak(x) ≥ 0 and z−(x) = ∑ k∈W− (−wk) ak(x) ≥ 0 be the true intermediate values corresponding to the positive and negative valued weights. Note that in this context, we have by definition ẑ`i (x) = 〈ŵ, â(x)〉 = ẑ+(x)− ẑ−(x), z̃`i (x) = 〈w, â(x)〉 = z̃+(x)− z̃−(x), and z`i (x) = 〈w, a(x)〉 = z+(x)− z−(x), where we used the fact that ŵ = ŵ+ − ŵ− ∈ R1×η`−1 . A.2.3 PROOF OF LEMMA 2 Lemma 2 (Conditional Neuron Value Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, i ∈ [η`], and x ∼ D. CORENET generates a row vector ŵ`i = ŵ `+ i − ŵ `− i ∈ R1×η `−1 such that P ( E`i | E`−1 ) = P ( ẑ`i (x) ∈ (1± 2 (`− 1) ε`+1) z`i (x) | E`−1 ) ≥ 1− δ/η, (1) where ε` = ε ′ ∆̂`→ and nnz(ŵ`i ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε`2 ⌉ + 1, where S = ∑ j∈W+ sj + ∑ j∈W− sj . Proof. Let ε, δ ∈ (0, 1) be arbitrary and let W+ = {j ∈ [η`−1] : wj > 0} and W− = {j ∈ [η`−1] : wj < 0} as in Alg. 1. Let ε` be defined as before, ε` = ε ′ ∆̂`→ , where ∆̂`→ = ∏L k=` ∆̂ k and ∆̂` = ( 1 |S| maxi∈[η`] ∑ x′∈S ∆ ` i(x ′) ) + κ. Observe that wj > 0 ∀j ∈ W+ and similarly, for all (−wj) > 0 ∀j ∈ W−. That is, each of index setsW+ andW− corresponds to strictly positive entries in the arguments w`i and −w`i , respectively passed into SPARSIFY. 
Observe that since we conditioned on the event E`−1, we have 2 (`− 2) ε` ≤ 2 (`− 2) ε 2 (L− 1) ∏L k=` ∆̂ k ≤ ε∏L k=` ∆̂ k ≤ ε 2L−`+1 Since ∆̂k ≥ 2 ∀k ∈ {`, . . . , L} ≤ ε 2 , where the inequality ∆̂k ≥ 2 follows from the fact that ∆̂k = ( 1 |S| max i∈[η`] ∑ x′∈S ∆`i(x ′) ) + κ ≥ 1 + κ Since ∆`i(x′) ≥ 1 ∀x′ ∈ supp(D) by definition ≥ 2. we obtain that â(x) ∈ (1 ± ε/2)a(x), where, as before, â and a are shorthand notations for â`−1 ∈ Rη`−1×1 and a`−1 ∈ Rη`−1×1, respectively. This implies that E`−1 ⇒ E1/2 and since m = ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ in Alg. 2 we can invoke Lemma 1 with ε = ε` on each of the SPARSIFY invocations to conclude that P ( ẑ+(x) /∈ (1± ε`)z̃+(x) | E`−1 ) ≤ P ( ẑ+(x) /∈ (1± ε`)z̃+(x) | E1/2 ) ≤ 3δ 8η , and P ( ẑ−(x) /∈ (1± ε`)z̃−(x) | E`−1 ) ≤ 3δ 8η . Therefore, by the union bound, we have P ( ẑ+(x) /∈ (1± ε`)z̃+(x) or ẑ−(x) /∈ (1± ε`)z̃−(x) | E`−1 ) ≤ 3δ 8η + 3δ 8η = 3δ 4η . Moreover, by Lemma 8, we have with probability at most δ4η that ∆`i(x) > ∆̂ `. Thus, by the union bound over the failure events, we have that with probability at least 1 − (3δ/4η + δ/4η) = 1− δ/η that both of the following events occur 1. ẑ+(x) ∈ (1± ε`)z̃+(x) and ẑ−(x) ∈ (1± ε`)z̃−(x) (7) 2. ∆`i(x) ≤ ∆̂` (8) Recall that ε′ = ε2 (L−1) , ε` = ε′ ∆̂`→ , and that event E`i denotes the (desirable) event that ẑ`i (x) (1± 2 (`− 1) ε`+1) z`i (x) holds, and similarly, E` = ∩i∈[η`] E`i denotes the vector-wise analogue where ẑ`(x) (1± 2 (`− 1) ε`+1) z`(x). Let k = 2 (`− 1) and note that by conditioning on the event E`−1, i.e., we have by definition â`−1(x) ∈ (1± 2 (`− 2)ε`)a`−1(x) = (1± k ε`)a`−1(x), which follows by definition of the ReLU function. Recall that our overarching goal is to establish that ẑ`i (x) ∈ (1± 2 (`− 1)ε`+1) z`i (x), which would immediately imply by definition of the ReLU function that â`i(x) ∈ (1± 2 (`− 1)ε`+1) a`i(x). Having clarified the conditioning and our objective, we will once again drop the index i from the expressions moving forward. Proceeding from above, we have with probability at least 1− δ/η ẑ(x) = ẑ+(x)− ẑ−(x) ≤ (1 + ε`) z̃+(x)− (1− ε`) z̃−(x) By Event (7) above ≤ (1 + ε`)(1 + k ε`) z+(x)− (1− ε`)(1− k ε`) z−(x) Conditioning on event E`−1 = ( 1 + ε`(k + 1) + kε 2 ` ) z+(x) + ( −1 + (k + 1)ε` − kε2` ) z−(x) = ( 1 + k ε2` ) z(x) + (k + 1) ε` ( z+(x) + z−(x) ) = ( 1 + k ε2` ) z(x) + (k + 1) ε′∏L k=` ∆̂ k ( z+(x) + z−(x) ) ≤ ( 1 + k ε2` ) z(x) + (k + 1) ε′ ∆`i(x) ∏L k=`+1 ∆̂ k ( z+(x) + z−(x) ) By Event (8) above = ( 1 + k ε2` ) z(x) + (k + 1) ε′∏L k=`+1 ∆̂ k |z(x)| By ∆`i(x) = z+(x) + z−(x) |z(x)| = ( 1 + k ε2` ) z(x) + (k + 1) ε`+1 |z(x)|. To upper bound the last expression above, we begin by observing that kε2` ≤ ε`, which follows from the fact that ε` ≤ 12 (L−1) ≤ 1 k by definition. Moreover, we also note that ε` ≤ ε`+1 by definition of ∆̂` ≥ 1. Now, we consider two cases. Case of z(x) ≥ 0: In this case, we have ẑ(x) ≤ ( 1 + k ε2` ) z(x) + (k + 1) ε`+1 |z(x)| ≤ (1 + ε`)z(x) + (k + 1)ε`+1z(x) ≤ (1 + ε`+1)z(x) + (k + 1)ε`+1z(x) = (1 + (k + 2) ε`+1) z(x) = (1 + 2 (`− 1)ε`+1) z(x), where the last line follows by definition of k = 2 (`− 2), which implies that k + 2 = 2(`− 1). Thus, this establishes the desired upper bound in the case that z(x) ≥ 0. Case of z(x) < 0: Since z(x) is negative, we have that ( 1 + k ε2` ) z(x) ≤ z(x) and |z(x)| = −z(x) and thus ẑ(x) ≤ ( 1 + k ε2` ) z(x) + (k + 1) ε`+1 |z(x)| ≤ z(x)− (k + 1)ε`+1z(x) ≤ (1− (k + 1)ε`+1) z(x) ≤ (1− (k + 2)ε`+1) z(x) = (1− 2 (`− 1)ε`+1) z(x), and this establishes the upper bound for the case of z(x) being negative. 
Putting the results of the case by case analysis together, we have the upper bound of ẑ(x) ≤ z(x) + 2 (` − 1)ε`+1|z(x)|. The proof for establishing the lower bound for z(x) is analogous to that given above, and yields ẑ(x) ≥ z(x)−2 (`−1)ε`+1|z(x)|. Putting both the upper and lower bound together, we have that with probability at least 1− δη : ẑ(x) ∈ (1± 2 (`− 1)ε`+1) z(x), and this completes the proof. A.2.4 REMARKS ON NEGATIVE ACTIVATIONS We note that up to now we assumed that the input a(x), i.e., the activations from the previous layer, are strictly nonnegative. For layers ` ∈ {3, . . . , L}, this is indeed true due to the nonnegativity of the ReLU activation function. For layer 2, the input is a(x) = x, which can be decomposed into a(x) = apos(x) − aneg(x), where apos(x) ≥ 0 ∈ Rη `−1 and aneg(x) ≥ 0 ∈ Rη `−1 . Furthermore, we can define the sensitivity over the set of points {apos(x), aneg(x) | x ∈ S} (instead of {a(x) | x ∈ S}), and thus maintain the required nonnegativity of the sensitivities. Then, in the terminology of Lemma 2, we let z+pos(x) = ∑ k∈W+ wk apos,k(x) ≥ 0 and z−neg(x) = ∑ k∈W− (−wk) aneg,k(x) ≥ 0 be the corresponding positive parts, and z+neg(x) = ∑ k∈W+ wk aneg,k(x) ≥ 0 and z−pos(x) = ∑ k∈W− (−wk) apos,k(x) ≥ 0 be the corresponding negative parts of the preactivation of the considered layer, such that z+(x) = z+pos(x) + z − neg(x) and z −(x) = z+neg(x) + z − pos(x). We also let ∆`i(x) = z+(x) + z−(x) |z(x)| be as before, with z+(x) and z−(x) defined as above. Equipped with above definitions, we can rederive Lemma 2 analogously in the more general setting, i.e., with potentially negative activations. We also note that we require a slightly larger sample size now since we have to take a union bound over the failure probabilities of all four approximations (i.e. ẑ+pos(x), ẑ − neg(x), ẑ + neg(x), and ẑ − pos(x)) to obtain the desired overall failure probability of δ/η. A.2.5 PROOF OF THEOREM 4 The following corollary immediately follows from Lemma 2 and establishes a layer-wise approximation guarantee. Corollary 9 (Conditional Layer-wise Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, and x ∼ D. CORENET generates a sparse weight matrix Ŵ ` = ( ŵ`1, . . . , ŵ ` η` )> ∈ Rη`×η`−1 such that P(E` | E`−1) = P ( ẑ`(x) ∈ (1± 2 (`− 1) ε`+1) z`(x) | E`−1 ) ≥ 1− δ η ` η , (9) where ε` = ε ′ ∆̂`→ , ẑ`(x) = Ŵ `â`(x), and z`(x) = W `a`(x). Proof. Since (1) established by Lemma 2 holds for any neuron i ∈ [η`] in layer ` and since (E`)c = ∪i∈[η`](E`i )c, it follows by the union bound over the failure events (E`i )c for all i ∈ [η`] that with probability at least 1− η `δ η ẑ`(x) = Ŵ `â`−1(x) ∈ (1± 2 (`− 1) ε`+1)W `a`−1(x) = (1± 2 (`− 1) ε`+1) z`(x). The following lemma removes the conditioning on E`−1 and explicitly considers the (compounding) error incurred by generating coresets Ŵ 2, . . . , Ŵ ` for multiple layers. Lemma 3 (Layer-wise Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, and x ∼ D. CORENET generates a sparse weight matrix Ŵ ` ∈ Rη`×η`−1 such that, for ẑ`(x) = Ŵ `â`(x), P (Ŵ 2,...,Ŵ `), x (E`) = P (Ŵ 2,...,Ŵ `), x ( ẑ`(x) ∈ (1± 2 (`− 1) ε`+1) z`(x) ) ≥ 1− δ ∑` `′=2 η `′ η . Proof. Invoking Corollary 9, we know that for any layer `′ ∈ {2, . . . , L}, P Ŵ `′ , x, â`′−1(·) (E` ′ | E` ′−1) ≥ 1− δ η `′ η . 
(10) We also have by the law of total probability that P(E` ′ ) = P(E` ′ | E` ′−1)P(E` ′−1) + P(E` ′ | (E` ′−1)c)P((E` ′−1)c) ≥ P(E` ′ | E` ′−1)P(E` ′−1) (11) Repeated applications of (10) and (11) in conjunction with the observation that P(E1) = 14 yield P(E`) ≥ P(E` ′ | E` ′−1)P(E` ′−1) ... Repeated applications of (11) ≥ ∏̀ `′=2 P(E` ′ | E` ′−1) ≥ ∏̀ `′=2 ( 1− δ η `′ η ) By (10) ≥ 1− δ η ∑̀ `′=2 η` ′ By the Weierstrass Product Inequality, where the last inequality follows by the Weierstrass Product Inequality5 and this establishes the lemma. Appropriately invoking Lemma 3, we can now establish the approximation guarantee for the entire neural network. This is stated in Theorem 4 and the proof can be found below. Theorem 4 (Network Compression). For ε, δ ∈ (0, 1), Algorithm 1 generates a set of parameters θ̂ = (Ŵ 2, . . . , ŴL) of size nnz(θ̂) ≤ L∑ `=2 η`∑ i=1 (⌈ 32 (L− 1)2 (∆̂`→)2 S`i log(η η∗) log(8 η/δ) ε2 ⌉ + 1 ) in O ( η η∗ log ( η η∗/δ )) time such that Pθ̂, x∼D ( fθ̂(x) ∈ (1± ε)fθ(x) ) ≥ 1− δ. 4Since we do not compress the input layer. 5The Weierstrass Product Inequality (Doerr, 2018) states that for p1, . . . , pn ∈ [0, 1], n∏ i=1 (1− pi) ≥ 1− n∑ i=1 pi. Proof. Invoking Lemma 3 with ` = L, we have that for θ̂ = (Ŵ 2, . . . , ŴL), P̂ θ, x ( fθ̂(x) ∈ 2 (L− 1) εL+1fθ(x) ) = P̂ θ, x (ẑL(x) ∈ 2 (L− 1) εL+1zL(x)) = P(EL) ≥ 1− δ ∑L `′=2 η `′ η = 1− δ, where the last equality follows by definition of η = ∑L `=2 η `. Note that by definition, εL+1 = ε 2 (L− 1) ∏L k=L+1 ∆̂ k = ε 2 (L− 1) , where the last equality follows by the fact that the empty product ∏L k=L+1
1. What is the focus of the paper regarding additively decomposable functions in neural networks? 2. What are the strengths of the proposed approach, particularly in its theoretical analysis and soundness? 3. What are the weaknesses of the paper, especially regarding its comparison to previous works and lacking empirical results? 4. How does the reviewer assess the implications of the result when applied to arbitrary input from P at inference time? 5. Can the authors provide an intuitive discussion of the presented bounds, specifically concerning the number of non-zero entries being approximately cubic in L?
Review
Review Given an additively decomposable function F(X, Q) = sum_over_x_in_X cost(x, Q), one can approximate it using either random sampling of x in X (unbiased, possibly high variance), or using importance sampling and replace the sum_over_x with a sum_over_coreset importance_of_a_point * cost(x, Q) which if properly defined can be both unbiased and have low variance [1]. In this work the authors consider the weighted sum of activations as F and suggest that for each neuron we can subsample the incoming edges. To construct the importance sampling strategy the authors adapt the classic notion of sensitivity from the coreset literature. Then, one has to carefully balance the approximation quality from one layer to the next and essentially union bound the results over all layers and all sampled points. The performed analysis is sound (up to my knowledge). Pro: - I commend the authors for a clean and polished writeup. - The analysis seems to be sound (apart from the issues discussed below) - The experimental results look promising, at least in the limited setup. Con: - There exists competing work with rigorous guarantees, for example [2]. - The analysis hinges on two assumptions which, in my opinion, make the problem feasible: having (sub) exponential tails allows for strong concentration results, and with proper analysis (as done by the authors), the fact that the additively decomposable function can be approximated given well-behaving summands is not surprising. The analysis is definitely non-trivial and I commend the authors for a clean writeup. - While rigorous guarantees are lacking for some previous work, previously introduced techniques were shown to be extremely effective in practice and across a spectrum of tasks. As the guarantees arguably stem from the assumptions 1 and 2, I feel that it’s unfair to not compare to those results empirically. Hence, failing to compare to results of at least [2, 3] is a major drawback of this work. - The result holds for n points drawn from P. However, in practice the network might receive essentially arbitrary input from P at inference time. Given that we need to decide on the number of edges to preserve apriori, what are the implications? - The presented bounds should be discussed on an intuitive level (i.e. the number of non zero entries is approximately cubic in L). I consider this to be a well-executed paper which brings together the main ideas from the coreset literature and shows one avenue of establishing provable results. However, given that no comparison to the state-of-the-art techniques is given I'm not confident that the community will apply these techniques in practice. On the other hand, the main strength -- the theoretical guarantees -- hinge on the introduced assumptions. As such, without additional empirical results demonstrating the utility with respect to the state-of-the-art methods (for the same capacity in terms of NNZ) I cannot recommend acceptance. [1] https://arxiv.org/abs/1601.00617 [2] papers.nips.cc/paper/6910-net-trim-convex-pruning-of-deep-neural-networks-with-performance-guarantee [3] https://arxiv.org/abs/1510.00149 ======== Thank you for the detailed responses. Given the additional experimental results and connections to existing work, I have updated my score from 5 to 6.
We consider the activation function to be the Rectified Linear Unit (ReLU) function, i.e., φ(·) = max{· , 0} (entry-wise, if the input is a vector). The output of the network for an input x is fθ(x) = zL, and in particular, for classification tasks the prediction is argmaxi∈[k] fθ(x)i = argmaxi∈[k] z L i . 3.2 NEURAL NETWORK CORESET PROBLEM Consider the setting where a neural network fθ(·) has been trained on a training set of independent and identically distributed (i.i.d.) samples from a joint distribution on X × Y , yielding parameters θ = (W 2, . . . ,WL). We further denote the input points of a validation data set as P = {xi}ni=1 ⊆ X and the marginal distribution over the input space X as D. We define the size of the parameter tuple θ, nnz(θ), to be the sum of the number of non-zero entries in the weight matrices W 2, . . . ,WL. For any given ε, δ ∈ (0, 1), our overarching goal is to generate a reparameterization θ̂, yielding the neural network fθ̂(·), using a randomized algorithm, such that nnz(θ̂) nnz(θ), and the neural network output fθ(x), x ∼ D can be approximated up to 1 ± ε multiplicative error with probability greater than 1 − δ. We define the 1 ± ε multiplicative error between two k-dimensional vectors a, b ∈ Rk as the following entry-wise bound: a ∈ (1± ε)b ⇔ ai ∈ (1± ε)bi ∀i ∈ [k], and formalize the definition of an (ε, δ)-coreset as follows. Definition 1 ((ε, δ)-coreset). Given user-specified ε, δ ∈ (0, 1), a set of parameters θ̂ = (Ŵ 2, . . . , ŴL) is an (ε, δ)-coreset for the network parameterized by θ if for x ∼ D, it holds that P̂ θ,x (fθ̂(x) ∈ (1± ε)fθ(x)) ≥ 1− δ, where Pθ̂,x denotes a probability measure with respect to a random data point x and the output θ̂ generated by a randomized compression scheme. 4 METHOD In this section, we introduce our neural network compression algorithm as depicted in Alg. 1. Our method is based on an important sampling-scheme that extends traditional sensitivity-based coreset constructions to the application of compressing parameters. 4.1 CORENET Our method (Alg. 1) hinges on the insight that a validation set of data points P i.i.d.∼ Dn can be used to approximate the relative importance, i.e., sensitivity, of each weighted edge with respect to the input data distributionD. For this purpose, we first pick a subsample of the data points S ⊆ P of appropriate size (see Sec. 5 for details) and cache each neuron’s activation and compute a neuron-specific constant to be used to determine the required edge sampling complexity (Lines 2-6). Algorithm 1 CORENET Input: ε, δ ∈ (0, 1): error and failure probability, respectively; P ⊆ X : a set of n points from the input space X such that P i.i.d.∼ Dn; θ = (W 2, . . . ,WL): parameters of the original uncompressed neural network. Output: θ̂ = (Ŵ 2, . . . , ŴL): sparsified parameter set such that fθ̂(·) ∈ (1± ε)fθ(·) (see Sec. 5 for details). 1: ε′ ← ε 2 (L−1) ; η ∗ ← max`∈{2,...,L−1} η`; η ← ∑L `=2 η `; λ∗ ← log(η η∗)/2; 2: S ← Uniform sample (without replacement) of dlog (8 η η∗/δ) log(η η∗)e points from P; 3: a1(x)← x ∀x ∈ S; 4: for x ∈ S do 5: for ` ∈ {2, . . . , L} do 6: a`(x)← φ(W `a`−1(x)); ∆`i(x)← ∑ k∈[η`−1] |w ` ik a `−1 k (x)|∣∣∣∑ k∈[η`−1] w ` ik a`−1 k (x) ∣∣∣ ; 7: for ` ∈ {2, . . . , L} do 8: ∆̂` ← ( 1 |S| maxi∈[η`] ∑ x∈S ∆ ` i(x) ) + κ, where κ = √ 2λ∗ ( 1 + √ 2λ∗ log (8 η η ∗/δ) ) ; 9: Ŵ ` ← (~0, . . . 
,~0) ∈ Rη `×η`−1 ; ∆̂`→ ← ∏L k=` ∆̂ k; ε` ← ε ′ ∆̂`→ ; 10: for all i ∈ [η`] do 11: W+ ← {j ∈ [η`−1] : w`ij > 0}; W− ← {j ∈ [η`−1] : w`ij < 0}; 12: ŵ`+i ← SPARSIFY(W+, w ` i , ε`, δ,S, a`−1); ŵ`−i ← SPARSIFY(W−,−w ` i , ε`, δ,S, a`−1); 13: ŵ`i ← ŵ`+i − ŵ `− i ; Ŵ ` i• ← ŵ`i ; . Consolidate the weights into the ith row of Ŵ `; 14: return θ̂ = (Ŵ 2, . . . , ŴL); Algorithm 2 SPARSIFY(W, w, ε, δ,S, a(·)) Input: W ⊆ [η`−1]: index set; w ∈ R1×η `−1 : row vector corresponding to the weights incoming to node i ∈ [η`] in layer ` ∈ {2, . . . , L}; ε, δ ∈ (0, 1): error and failure probability, respectively; S ⊆ P: subsample of the original point set; a(·): cached activations of previous layer for all x ∈ S. Output: ŵ: sparse weight vector. 1: for j ∈ W do 2: sj ← maxx∈S wjaj(x)∑ k∈W wkak(x) ; . Compute the sensitivity of each edge 3: S ← ∑ j∈W sj ; 4: for j ∈ W do . Generate the importance sampling distribution over the incoming edges 5: qj ← sjS ; 6: m← ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ ; . Compute the number of required samples 7: C ← a multiset of m samples fromW where each j ∈ W is sampled with probability qj ; 8: ŵ ← (0, . . . , 0) ∈ R1×η `−1 ; . Initialize the compressed weight vector 9: for j ∈ C do . Update the entries of the sparsified weight matrix according to the samples C 10: ŵj ← ŵj + wjmqj ; . Entries are reweighted by 1 mqj to ensure unbiasedness of our estimator 11: return ŵ; Subsequently, we apply our core sampling scheme to sparsify the set of incoming weighted edges to each neuron in all layers (Lines 7-13). For technical reasons (see Sec. 5), we perform the sparsification on the positive and negative weighted edges separately and then consolidate the results (Lines 11- 13). By repeating this procedure for all neurons in every layer, we obtain a set θ̂ = (Ŵ 2, . . . , ŴL) of sparse weight matrices such that the output of each layer and the entire network is approximately preserved, i.e., Ŵ `â`−1(x) ≈W `a`−1(x) and fθ̂(x) ≈ fθ(x), respectively 1. 1â`−1(x) denotes the approximation from previous layers for an input x ∼ D; see Sec. 5 for details. 4.2 SPARSIFYING WEIGHTS The crux of our compression scheme lies in Alg. 2 (invoked twice on Line 12, Alg. 1) and in particular, in the importance sampling scheme used to select a small subset of edges of high importance. The cached activations are used to compute the sensitivity, i.e., relative importance, of each considered incoming edge j ∈ W to neuron i ∈ [η`], ` ∈ {2, . . . , L} (Alg. 2, Lines 1-2). The relative importance of each edge j is computed as the maximum (over x ∈ S) ratio of the edge’s contribution to the sum of contributions of all edges. In other words, the sensitivity sj of an edge j captures the highest (relative) impact j had on the output of neuron i ∈ [η`] in layer ` across all x ∈ S . The sensitivities are then used to compute an importance sampling distribution over the incoming weighted edges (Lines 4-5). The intuition behind the importance sampling distribution is that if sj is high, then edge j is more likely to have a high impact on the output of neuron i, therefore we should keep edge j with a higher probability. m edges are then sampled with replacement (Lines 6-7) and the sampled weights are then reweighed to ensure unbiasedness of our estimator (Lines 9-10). 4.3 EXTENSIONS: NEURON PRUNING AND AMPLIFICATION In this subsection we outline two improvements to our algorithm that that do not violate any of our theoretical properties and may improve compression rates in practical settings. 
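Before turning to these extensions, the core edge-sampling step of Sec. 4.2 can be illustrated with a short sketch. The following is a minimal NumPy rendering of the SPARSIFY routine for a single neuron with positive incoming weights, assuming the activations of the previous layer have already been cached for the subsample S; the sample size m is taken as a free parameter rather than computed from the bound in Alg. 2, Line 6, and all function and variable names are illustrative rather than part of any released implementation.

```python
import numpy as np

def sparsify_row(w, A, m, rng=None):
    """Importance-sample the incoming edges of one neuron (positive weights only).

    w : incoming positive weights, shape (d,). Negative weights are handled by a
        separate call on -w, as in Alg. 1, Line 12.
    A : cached activations of the previous layer over the subsample S, shape (|S|, d),
        assumed non-negative (ReLU outputs).
    m : number of edge samples to draw (in the paper, m is derived from the
        sensitivity sum S, eps, and delta; here it is passed in directly).
    """
    rng = rng or np.random.default_rng(0)
    d = w.shape[0]

    # Relative importance g_j(x) = w_j a_j(x) / sum_k w_k a_k(x) for each x in S.
    contrib = A * w
    totals = contrib.sum(axis=1, keepdims=True)
    g = contrib / np.maximum(totals, 1e-12)

    # Empirical sensitivity s_j = max over the subsample S, and its sum.
    s = g.max(axis=0)
    S_sum = s.sum()

    # Importance sampling distribution q_j = s_j / S; draw m edges with replacement.
    q = s / S_sum
    idx = rng.choice(d, size=m, replace=True, p=q)

    # Reweight sampled edges by 1/(m q_j) to keep the estimator unbiased.
    w_hat = np.zeros_like(w)
    np.add.at(w_hat, idx, w[idx] / (m * q[idx]))
    return w_hat
```

In Alg. 1, this routine would be invoked once on the positive weights and once on the negated negative weights of each neuron, and the two sparse rows subtracted to form the compressed row ŵ.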
Neuron pruning (CoreNet+) Similar to removing redundant edges, we can use the empirical activations to gauge the importance of each neuron. In particular, if the maximum activation (over all evaluations x ∈ S) of a neuron is equal to 0, then the neuron – along with all of the incoming and outgoing edges – can be pruned without significantly affecting the output with reasonable probability. This intuition can be made rigorous under the assumptions outlined in Sec. 5. Amplification (CoreNet++) Coresets that provide stronger approximation guarantees can be constructed via amplification – the procedure of constructing multiple approximations (coresets) (ŵ`i )1, . . . , (ŵ ` i )τ over τ trials, and picking the best one. To evaluate the quality of each approximation, a different subset T ⊆ P \ S can be used to infer performance. In practice, amplification would entail constructing multiple approximations by executing Line 12 of Alg. 1 and picking the one that achieves the lowest relative error on T . 5 ANALYSIS In this section, we establish the theoretical guarantees of our neural network compression algorithm (Alg. 1). The full proofs of all the claims presented in this section can be found in the Appendix. 5.1 PRELIMINARIES Let x ∼ D be a randomly drawn input point. We explicitly refer to the pre-activation and activation values at layer ` ∈ {2, . . . , `} with respect to the input x ∈ supp(D) as z`(x) and a`(x), respectively. The values of z`(x) and a`(x) at each layer ` will depend on whether or not we compressed the previous layers `′ ∈ {2, . . . , `}. To formalize this interdependency, we let ẑ`(x) and â`(x) denote the respective quantities of layer ` when we replace the weight matrices W 2, . . . ,W ` in layers 2, . . . , ` by Ŵ 2, . . . , Ŵ `, respectively. For the remainder of this section (Sec. 5) we let ` ∈ {2, . . . , L} be an arbitrary layer and let i ∈ [η`] be an arbitrary neuron in layer `. For purposes of clarity and readability, we will omit the the variable denoting the layer ` ∈ {2, . . . , L}, the neuron i ∈ [η`], and the incoming edge index j ∈ [η`−1], whenever they are clear from the context. For example, when referring to the intermediate value of a neuron i ∈ [η`] in layer ` ∈ {2, . . . , L}, z`i (x) = 〈w`i , â`−1(x)〉 ∈ R with respect to a point x, we will simply write z(x) = 〈w, a(x)〉 ∈ R, where w := w`i ∈ R1×η `−1 and a(x) := a`−1(x) ∈ Rη`−1×1. Under this notation, the weight of an incoming edge j is denoted by wj ∈ R. 5.2 IMPORTANCE SAMPLING BOUNDS FOR POSITIVE WEIGHTS In this subsection, we establish approximation guarantees under the assumption that the weights are positive. Moreover, we will also assume that the input, i.e., the activation from the previous layer, is non-negative (entry-wise). The subsequent subsection will then relax these assumptions to conclude that a neuron’s value can be approximated well even when the weights and activations are not all positive and non-negative, respectively. Let W = {j ∈ [η`−1] : wj > 0} ⊆ [η`−1] be the set of indices of incoming edges with strictly positive weights. To sample the incoming edges to a neuron, we quantify the relative importance of each edge as follows. Definition 2 (Relative Importance). The importance of an incoming edge j ∈ W with respect to an input x ∈ supp(D) is given by the function gj(x), where gj(x) = wj aj(x)∑ k∈W wk ak(x) ∀j ∈ W. Note that gj(x) is a function of the random variable x ∼ D. 
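As a small worked example (with made-up numbers, not taken from the paper), the snippet below evaluates the relative importance gj(x) of three incoming edges on two hypothetical inputs; the dominant edge carries most of the importance mass on both inputs, which is exactly the structure the sensitivity-based sampling distribution exploits.

```python
import numpy as np

w = np.array([5.0, 0.1, 0.2])             # toy incoming positive weights
X = np.array([[1.0, 2.0, 0.5],             # two hypothetical activation vectors a(x)
              [0.8, 0.1, 3.0]])

contrib = X * w                            # w_j * a_j(x) for each edge and input
g = contrib / contrib.sum(axis=1, keepdims=True)
print(g)    # first input: edge 1 carries ~94% of the weighted sum; second: ~87%

s = g.max(axis=0)                          # empirical sensitivities over this tiny "S"
print(s)    # approx. [0.94, 0.04, 0.13]: edge 1 will be sampled far more often
```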
We now present our first assumption that pertains to the Cumulative Distribution Function (CDF) of the relative importance random variable. Assumption 1. For all j ∈ W , the CDF of the random variable gj(x), denoted by Fj (·), satisfies Fj (M/K) ≤ exp(−1/K), where M = min{x ∈ [0, 1] : Fj (x) = 1}, and K ∈ [2, log(η η∗)] is a universal constant.2 Assumption 1 is a technical assumption on the ratio of the weighted activations that will enable us to rule out pathological problem instances where the relative importance of each edge cannot be well-approximated using a small number of data points S ⊆ P . Henceforth, we consider a uniformly drawn (without replacement) subsample S ⊆ P as in Line 2 of Alg. 1, where |S| = dlog (8 η η∗/δ) log(η η∗)e, and define the sensitivity of an edge as follows. Definition 3 (Empirical Sensitivity). Let S ⊆ P be a subset of distinct points from P i.i.d.∼ Dn.Then, the sensitivity over positive edges j ∈ W directed to a neuron is defined as sj = maxx∈S gj(x). Our first lemma establishes a core result that relates the weighted sum with respect to the sparse row vector ŵ, ∑ k∈W ŵk âk(x), to the value of the of the weighted sum with respect to the ground-truth row vector w, ∑ k∈W wk âk(x). We remark that there is randomness with respect to the randomly generated row vector ŵ`i , a randomly drawn input x ∼ D, and the function â(·) = â`−1(·) defined by the randomly generated matrices Ŵ 2, . . . , Ŵ `−1 in the previous layers. Unless otherwise stated, we will henceforth use the shorthand notation P(·) to denote Pŵ`, x, â`−1(·). Moreover, for ease of presentation, we will first condition on the event E1/2 that â(x) ∈ (1± 1/2)a(x) holds. This conditioning will simplify the preliminary analysis and will be removed in our subsequent results. Lemma 1 (Positive-Weights Sparsification). Let ε, δ ∈ (0, 1), and x ∼ D. SPARSIFY(W, w, ε, δ,S, a(·)) generates a row vector ŵ such that P (∑ k∈W ŵk âk(x) /∈ (1± ε) ∑ k∈W wk âk(x) | E1/2 ) ≤ 3δ 8η where nnz(ŵ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ , and S = ∑ j∈W sj . 5.3 IMPORTANCE SAMPLING BOUNDS We now relax the requirement that the weights are strictly positive and instead consider the following index sets that partition the weighted edges: W+ = {j ∈ [η`−1] : wj > 0} andW− = {j ∈ [η`−1] : wj < 0}. We still assume that the incoming activations from the previous layers are positive (this assumption can be relaxed as discussed in Appendix A.2.4). We define ∆`i(x) for a point x ∼ D and neuron i ∈ [η`] as ∆`i(x) = ∑ k∈[η`−1] |w ` ik a `−1 k (x)| |∑k∈[η`−1] w`ik a`−1k (x)| . The following assumption serves a similar purpose as does Assumption 1 in that it enables us to approximate the random variable ∆`i(x) via an empirical estimate over a small-sized sample of data points S ⊆ P . Assumption 2 (Subexponentiality of ∆`i(x)). For any layer ` ∈ {2, . . . , L} and neuron i ∈ [η`], the centered random variable ∆ = ∆`i(x) − E x∼D[∆`i(x)] is subexponential (Vershynin, 2016) with parameter λ ≤ log(η η∗)/2, i.e., E [exp (s∆)] ≤ exp(s2λ2) ∀|s| ≤ 1λ . 2 2The upper bound of log(ηη∗) for K and λ can be considered somewhat arbitrary in the sense that, more generally, we only require that K,λ ∈ O(polylog(ηη∗|P|). Defining the upper bound in this way simplifies the presentation of the core ideas without having to deal with the constants involved in the asymptotic notation. For ε ∈ (0, 1) and ` ∈ {2, . . . , L}, we let ε′ = ε2 (L−1) and define ε` = ε′ ∆̂`→ = ε 2 (L−1) ∏L k=` ∆̂ k , where ∆̂` = ( 1 |S| maxi∈[η`] ∑ x′∈S ∆ ` i(x ′) ) + κ. 
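The layer constant ∆̂` can be estimated empirically from the cached activations; the following is a minimal sketch of that computation, assuming the previous layer's activations over the subsample S are available as an array. The slack term κ follows Alg. 1, Line 8, and all names are illustrative.

```python
import numpy as np

def delta_hat(W, A_prev, eta, eta_star, delta):
    """Empirical estimate of the layer constant Delta-hat for one layer.

    W      : weight matrix of the layer, shape (n_out, n_in).
    A_prev : cached activations of the previous layer over S, shape (|S|, n_in).
    eta, eta_star, delta : network-size constants and failure probability,
                           used only for the slack term kappa.
    """
    Z = A_prev @ W.T                                # pre-activations, shape (|S|, n_out)
    abs_sum = np.abs(A_prev) @ np.abs(W).T          # sum_k |w_ik a_k(x)| per neuron
    Delta = abs_sum / np.maximum(np.abs(Z), 1e-12)  # Delta_i^l(x) for each x in S, neuron i

    lam_star = np.log(eta * eta_star) / 2.0
    kappa = np.sqrt(2 * lam_star) * (1 + np.sqrt(2 * lam_star) * np.log(8 * eta * eta_star / delta))
    # max over neurons of the mean over S, plus the slack term (Alg. 1, Line 8).
    return Delta.mean(axis=0).max() + kappa
```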
To formalize the interlayer dependencies, for each i ∈ [η`] we let E`i denote the (desirable) event that ẑ`i (x) ∈ (1± 2 (`− 1) ε`+1) z`i (x) holds, and let E` = ∩i∈[η`] E`i be the intersection over the events corresponding to each neuron in layer `. Lemma 2 (Conditional Neuron Value Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, i ∈ [η`], and x ∼ D. CORENET generates a row vector ŵ`i = ŵ `+ i − ŵ `− i ∈ R1×η `−1 such that P ( E`i | E`−1 ) = P ( ẑ`i (x) ∈ (1± 2 (`− 1) ε`+1) z`i (x) | E`−1 ) ≥ 1− δ/η, (1) where ε` = ε ′ ∆̂`→ and nnz(ŵ`i ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε`2 ⌉ + 1, where S = ∑ j∈W+ sj + ∑ j∈W− sj . The following core result establishes unconditional layer-wise approximation guarantees and culminates in our main compression theorem. Lemma 3 (Layer-wise Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, and x ∼ D. CORENET generates a sparse weight matrix Ŵ ` ∈ Rη`×η`−1 such that, for ẑ`(x) = Ŵ `â`(x), P (Ŵ 2,...,Ŵ `), x (E`) = P (Ŵ 2,...,Ŵ `), x ( ẑ`(x) ∈ (1± 2 (`− 1) ε`+1) z`(x) ) ≥ 1− δ ∑` `′=2 η `′ η . Theorem 4 (Network Compression). For ε, δ ∈ (0, 1), Algorithm 1 generates a set of parameters θ̂ = (Ŵ 2, . . . , ŴL) of size nnz(θ̂) ≤ L∑ `=2 η`∑ i=1 (⌈ 32 (L− 1)2 (∆̂`→)2 S`i log(η η∗) log(8 η/δ) ε2 ⌉ + 1 ) in O ( η η∗ log ( η η∗/δ )) time such that Pθ̂, x∼D ( fθ̂(x) ∈ (1± ε)fθ(x) ) ≥ 1− δ. We note that we can obtain a guarantee for a set of n randomly drawn points by invoking Theorem 4 with δ′ = δ/n and union-bounding over the failure probabilities, while only increasing the sampling complexity logarithmically, as formalized in Corollary 12, Appendix A.2. 5.4 GENERALIZATION BOUNDS As a corollary to our main results, we obtain novel generalization bounds for neural networks in terms of empirical sensitivity. Following the terminology of Arora et al. (2018), the expected margin loss of a classifier fθ : Rd → Rk parameterized by θ with respect to a desired margin γ > 0 and distribution D is defined by Lγ(fθ) = P(x,y)∼DX ,Y (fθ(x)y ≤ γ + maxi 6=y fθ(x)i). We let L̂γ denote the empirical estimate of the margin loss. The following corollary follows directly from the argument presented in Arora et al. (2018) and Theorem 4. Corollary 5 (Generalization Bounds). For any δ ∈ (0, 1) and margin γ > 0, Alg. 1 generates weights θ̂ such that with probability at least 1 − δ, the expected error L0(fθ̂) with respect to the points in P ⊆ X , |P| = n, is bounded by L0(fθ̂) ≤ L̂γ(fθ) + Õ √maxx∈P ‖fθ(x)‖22 L2 ∑L`=2(∆̂`→)2 ∑η`i=1 S`i γ2 n . 6 RESULTS In this section, we evaluate the practical effectiveness of our compression algorithm on popular benchmark data sets (MNIST (LeCun et al., 1998), FashionMNIST (Xiao et al., 2017), and CIFAR10 (Krizhevsky & Hinton, 2009)) and varying fully-connected trained neural network configurations: 2 to 5 hidden layers, 100 to 1000 hidden units, either fixed hidden sizes or decreasing hidden size denoted by pyramid in the figures. We further compare the effectiveness of our sampling scheme in reducing the number of non-zero parameters of a network, i.e., in sparsifying the weight matrices, to that of uniform sampling, Singular Value Decomposition (SVD), and current state-of-the-art sampling schemes for matrix sparsification (Drineas & Zouzias, 2011; Achlioptas et al., 2013; Kundu & Drineas, 2014), which are based on matrix norms – `1 and `2 (Frobenius). The details of the experimental setup and results of additional evaluations may be found in Appendix B. 
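To build some intuition for how the bound of Theorem 4 scales before examining the empirical results, the following back-of-the-envelope sketch evaluates the predicted number of non-zero entries for illustrative values of the sensitivity sums and layer constants; the chosen numbers are placeholders and do not correspond to any trained network.

```python
import numpy as np

def nnz_bound(L, eta_layers, S_sens, Delta_fwd, eps, delta):
    """Evaluate the nnz bound of Theorem 4 for given (illustrative) constants.

    eta_layers : layer widths eta^2, ..., eta^L.
    S_sens     : per-neuron sensitivity sums S_i^l (same nesting as eta_layers).
    Delta_fwd  : per-layer forward products Delta-hat^{l->} = prod_{k >= l} Delta-hat^k.
    """
    eta = sum(eta_layers)
    eta_star = max(eta_layers)
    total = 0
    for l, width in enumerate(eta_layers):
        for i in range(width):
            m = np.ceil(32 * (L - 1) ** 2 * Delta_fwd[l] ** 2 * S_sens[l][i]
                        * np.log(eta * eta_star) * np.log(8 * eta / delta) / eps ** 2)
            total += int(m) + 1
    return total

# Hypothetical 3-hidden-layer network with modest sensitivities and Delta-hat products.
eta_layers = [200, 200, 10]
S_sens = [[2.0] * width for width in eta_layers]
Delta_fwd = [8.0, 4.0, 2.0]
print(nnz_bound(L=4, eta_layers=eta_layers, S_sens=S_sens,
                Delta_fwd=Delta_fwd, eps=0.5, delta=0.1))
```

Since the sample budget grows quadratically in the depth-dependent factor (L − 1) ∆̂`→, the practical tightness of the bound depends heavily on how small the empirical sensitivities and ∆̂ constants turn out to be for the network at hand.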
Experiment Setup We compare against three variations of our compression algorithm: (i) sole edge sampling (CoreNet), (ii) edge sampling with neuron pruning (CoreNet+), and (iii) edge sampling with neuron pruning and amplification (CoreNet++). For comparison, we evaluated the average relative error in output (`1-norm) and average drop in classification accuracy relative to the accuracy of the uncompressed network. Both metrics were evaluated on a previously unseen test set. Results Results for varying architectures and datasets are depicted in Figures 1 and 2 for the average drop in classification accuracy and relative error (`1-norm), respectively. As apparent from Figure 1, we are able to compress networks to about 15% of their original size without significant loss of accuracy for networks trained on MNIST and FashionMNIST, and to about 50% of their original size for CIFAR. Discussion The simulation results presented in this section validate our theoretical results established in Sec. 5. In particular, our empirical results indicate that we are able to outperform networks compressed via competing methods in matrix sparsification across all considered experiments and trials. The results presented in this section further suggest that empirical sensitivity can effectively capture the relative importance of neural network parameters, leading to a more informed importance sampling scheme. Moreover, the relative performance of our algorithm tends to increase as we consider deeper architectures. These findings suggest that our algorithm may also be effective in compressing modern convolutional architectures, which tend to be very deep. 7 CONCLUSION We presented a coresets-based neural network compression algorithm for compressing the parameters of a trained fully-connected neural network in a manner that approximately preserves the network’s output. Our method and analysis extend traditional coreset constructions to the application of compressing parameters, which may be of independent interest. Our work distinguishes itself from prior approaches in that it establishes theoretical guarantees on the approximation accuracy and size of the generated compressed network. As a corollary to our analysis, we obtain generalization bounds for neural networks, which may provide novel insights on the generalization properties of neural networks. We empirically demonstrated the practical effectiveness of our compression algorithm on a variety of neural network configurations and real-world data sets. In future work, we plan to extend our algorithm and analysis to compress Convolutional Neural Networks (CNNs) and other network architectures. We conjecture that our compression algorithm can be used to reduce storage requirements of neural network models and enable fast inference in practical settings. ACKNOWLEDGMENTS This research was supported in part by the National Science Foundation award IIS-1723943. We thank Brandon Araki and Kiran Vodrahalli for valuable discussions and helpful suggestions. We would also like to thank Kasper Green Larsen, Alexander Mathiasen, and Allan Gronlund for pointing out an error in an earlier formulation of Lemma 6. A PROOFS OF THE ANALYTICAL RESULTS IN SECTION 5 This section includes the full proofs of the technical results given in Sec. 5. 
A.1 ANALYTICAL RESULTS FOR SECTION 5.2 (IMPORTANCE SAMPLING BOUNDS FOR POSITIVE WEIGHTS) A.1.1 ORDER STATISTIC SAMPLING We now establish a couple of technical results that will quantify the accuracy of our approximations of edge importance (i.e., sensitivity). Lemma 6. Let K > 0 be a universal constant and let D be a distribution with CDF F (·) satisfying F (M/K) ≤ exp(−1/K), where M = min{x ∈ [0, 1] : F (x) = 1}. Let P = {X1, . . . , Xn} be a set of n = |P| i.i.d. samples each drawn from the distribution D. Let Xn+1 ∼ D be an i.i.d. sample. Then, P ( K max X∈P X < Xn+1 ) ≤ exp(−n/K) Proof. Let Xmax = maxX∈P ; then, P(KXmax < Xn+1) = ∫ M 0 P(Xmax < x/K|Xn+1 = x) dP(x) = ∫ M 0 P (X < x/K)n dP(x) since X1, . . . , Xn are i.i.d. ≤ ∫ M 0 F (x/K)n dP(x) where F (·) is the CDF of X ∼ D ≤ F (M/K)n ∫ M 0 dP(x) by monotonicity of F = F (M/K)n ≤ exp(−n/K) CDF Assumption, and this completes the proof. We now proceed to establish that the notion of empirical sensitivity is a good approximation for the relative importance. For this purpose, let the relative importance ĝj(x) of an edge j after the previous layers have already been compressed be ĝj(x) = wj âj(x)∑ k∈W wk âk(x) . Lemma 7 (Empirical Sensitivity Approximation). Let ε ∈ (0, 1/2), δ ∈ (0, 1), ` ∈ {2, . . . , L}, Consider a set S = {x1, . . . , xn} ⊆ P of size |S| ≥ dlog (8 η η∗/δ) log(η η∗)e. Then, conditioned on the event E1/2 occurring, i.e., â(x) ∈ (1± 1/2)a(x), P x∼D ( ∃j ∈ W : C sj < ĝj(x) | E1/2 ) ≤ δ 8 η , where C = 3 log(η η∗) andW ⊆ [η`−1]. Proof. Consider an arbitrary j ∈ W and x′ ∈ S corresponding to gj(x′) with CDF Fj (·) and recall that M = min{x ∈ [0, 1] : Fj (x) = 1} as in Assumption 1. Note that by Assumption 1, we have F (M/K) ≤ exp(−1/K), and so the random variables gj(x′) for x′ ∈ S satisfy the CDF condition required by Lemma 6. Now let E be the event that K sj < gj(x) holds. Applying Lemma 6, we obtain P(E) = P(K sj < gj(x)) = P ( K max x′∈S gj(x ′) < gj(x) ) ≤ exp(−|S|/K). Now let Ê denote the event that the inequality Csj < ĝj(x) = wj âj(x)∑ k∈W wk âk(x) holds and note that the right side of the inequality is defined with respect to ĝj(x) and not gj(x). Observe that since we conditioned on the event E1/2, we have that â(x) ∈ (1± 1/2)a(x). Now assume that event Ê holds and note that by the implication above, we have C sj < ĝj(x) = wj âj(x)∑ k∈W wk âk(x) ≤ (1 + 1/2)wj aj(x) (1− 1/2) ∑ k∈W wk ak(x) ≤ 3 · wj aj(x)∑ k∈W wk ak(x) = 3 gj(x). where the second inequality follows from the fact that 1+1/2/1−1/2 ≤ 3. Moreover, since we know that C ≥ 3K, we conclude that if event Ê occurs, we obtain the inequality 3K sj ≤ 3 gj(x)⇔ K sj ≤ gj(x), which is precisely the definition of event E . Thus, we have shown the conditional implication ( Ê | E1/2 ) ⇒ E , which implies that P(Ê | E1/2) = P(C sj < ĝj(x) | E1/2) ≤ P(E) ≤ exp(−|S|/K). Since our choice of j ∈ W was arbitrary, the bound applies for any j ∈ W . Thus, we have by the union bound P(∃j ∈ W : C sj < ĝj(x) | E1/2) ≤ ∑ j∈W P(C sj < ĝj(x) | E1/2) ≤ |W| exp(−|S|/K) = ( |W| η∗ ) δ 8η ≤ δ 8η . In practice, the set S referenced above is chosen to be a subset of the original data points, i.e., S ⊆ P (see Alg. 1, Line 2). Thus, we henceforth assume that the size of the input points |P| is large enough (or the specified parameter δ ∈ (0, 1) is sufficiently large) so that |P| ≥ |S|. A.1.2 PROOF OF LEMMA 1 We now state the proof of Lemma 1. In this subsection, we establish approximation guarantees under the assumption that the weights are strictly positive. 
The next subsection will then relax this assumption to conclude that a neuron’s value can be approximated well even when the weights are not all positive. Lemma 1 (Positive-Weights Sparsification). Let ε, δ ∈ (0, 1), and x ∼ D. SPARSIFY(W, w, ε, δ,S, a(·)) generates a row vector ŵ such that P (∑ k∈W ŵk âk(x) /∈ (1± ε) ∑ k∈W wk âk(x) | E1/2 ) ≤ 3δ 8η where nnz(ŵ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ , and S = ∑ j∈W sj . Proof. Let ε, δ ∈ (0, 1) be arbitrary. Moreover, let C be the coreset with respect to the weight indices W ⊆ [η`−1] used to construct ŵ. Note that as in SPARSIFY, C is a multiset sampled fromW of size m = ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ , where S = ∑ j∈W sj and C is sampled according to the probability distribution q defined by qj = sj S ∀j ∈ W. Let â(·) be an arbitrary realization of the random variable â`−1(·), let x be a realization of x ∼ D, and let ẑ = ∑ k∈W ŵk âk(x) be the approximate intermediate value corresponding to the sparsified matrix ŵ and let z̃ = ∑ k∈W wk âk(x). Now define E to be the (favorable) event that ẑ ε-approximates z̃, i.e., ẑ ∈ (1±ε)z̃, We will now show that the complement of this event, Ec, occurs with sufficiently small probability. Let Z ⊆ supp(D) be the set of well-behaved points (defined implicitly with respect to neuron i ∈ [η`] and realization â) and defined as follows: Z = {x′ ∈ supp(D) : ĝj(x′) ≤ Csj ∀j ∈ W} , where C = 3 log(η η∗). Let EZ denote the event that x ∈ Z where x is a realization of x ∼ D. Conditioned on EZ , event Ec occurs with probability ≤ δ4η : Let x be a realization of x ∼ D such that x ∈ Z and let C = {c1, . . . , cm} be m samples fromW with respect to distribution q as before. Define m random variables Tc1 , . . . , Tcm such that for all j ∈ C Tj = wj âj(x) mqj = S wj âj(x) msj . (2) For any j ∈ C, we have for the conditional expectation of Tj : E [Tj | â(·),x, EZ , E1/2] = ∑ k∈W wk âk(x) mqk · qk = ∑ k∈W wk âk(x) m = z̃ m , where we use the expectation notation E [·] with the understanding that it denotes the conditional expectation E C | âl−1(·), x [·]. Moreover, we also note that conditioning on the event EZ (i.e., the event that x ∈ Z) does not affect the expectation of Tj . Let T = ∑ j∈C Tj = ẑ denote our approximation and note that by linearity of expectation, E [T | â(·),x, EZ , E1/2] = ∑ j∈C E [Tj | â(·),x, EZ , E1/2] = z̃ Thus, ẑ = T is an unbiased estimator of z̃ for any realization â(·) and x; thus, we will henceforth refer to E [T | â(·), x] as simply z̃ for brevity. For the remainder of the proof we will assume that z̃ > 0, since otherwise, z̃ = 0 if and only if Tj = 0 for all j ∈ C almost surely, which follows by the fact that Tj ≥ 0 for all j ∈ C by definition ofW and the non-negativity of the ReLU activation. Therefore, in the case that z̃ = 0, it follows that P(|ẑ − z̃| > εz̃ | â(·),x) = P(ẑ > 0 | â(·),x) = P(0 > 0) = 0, which trivially yields the statement of the lemma, where in the above expression, P(·) is short-hand for the conditional probability Pŵ | âl−1(·), x(·). We now proceed with the case where z̃ > 0 and leverage the fact that x ∈ Z3 to establish that for all j ∈ W : Csj ≥ ĝj(x) = wj âj(x)∑ k∈W wk âk(x) = wj âj(x) z̃ 3Since we conditioned on the event EZ . ⇔ wj âj(x) sj ≤ C z̃. 
(3) Utilizing the inequality established above, we bound the conditional variance of each Tj , j ∈ C as follows Var(Tj | â(·),x, EZ , E1/2) ≤ E [(Tj)2 | â(·),x, EZ , E1/2] = ∑ k∈W (wk âk(x)) 2 (mqk)2 · qk = S m2 ∑ k∈W (wk âk(x)) 2 sk ≤ S m2 (∑ k∈W wk âk(x) ) C z̃ = S C z̃2 m2 , where Var(·) is short-hand for VarC | âl−1(·), x (·). Since T is a sum of (conditionally) independent random variables, we obtain Var(T | â(·),x, EZ , E1/2) = mVar(Tj | â(·),x, EZ , E1/2) (4) ≤ S C z̃ 2 m . Now, for each j ∈ C let T̃j = Tj − E [Tj | â(·),x, EZ , E1/2] = Tj − z̃, and let T̃ = ∑ j∈C T̃j . Note that by the fact that we conditioned on the realization x of x such that x ∈ Z (event EZ ), we obtain by definition of Tj in (2) and the inequality (3): Tj = S wj âj(x) msj ≤ S C z̃ m . (5) We also have that S ≥ 1 by definition. More specifically, using the fact that the maximum over a set is greater than the average and rearranging sums, we obtain S = ∑ j∈W sj = ∑ j∈W max x′∈S gj(x ′) ≥ 1 |S| ∑ j∈W ∑ x′∈S gj(x ′) = 1 |S| ∑ x′∈S ∑ j∈W gj(x ′) = 1 |S| ∑ x′∈S 1 = 1. Thus, the inequality established in (5) with the fact that S ≥ 1 we obtain an upper bound on the absolute value of the centered random variables: |T̃j | = ∣∣∣∣Tj − z̃m ∣∣∣∣ ≤ S C z̃m = M, (6) which follows from the fact that: if Tj ≥ z̃m : Then, by our bound in (5) and the fact that z̃ m ≥ 0, it follows that ∣∣∣T̃j∣∣∣ = Tj − z̃ m ≤ S C z̃ m − z̃ m ≤ S C z̃ m . if Tj < z̃m : Then, using the fact that Tj ≥ 0 and S ≥ 1, we obtain∣∣∣T̃j∣∣∣ = z̃ m − Tj ≤ z̃ m ≤ S C z̃ m . Applying Bernstein’s inequality to both T̃ and −T̃ we have by symmetry and the union bound, P(Ec | â(·),x, EZ , E1/2) = P ( |T − z̃| ≥ εz̃ | â(·),x, EZ , E1/2 ) ≤ 2 exp ( − ε 2z̃2 2 Var(T | â(·),x) + 2 ε z̃M3 ) ≤ 2 exp ( − ε 2z̃2 2SC z̃2 m + 2S C z̃2 3m ) = 2 exp ( −3 ε 2m 8S C ) ≤ δ 4η , where the second inequality follows by our upper bounds on Var(T | â(·),x) and ∣∣∣T̃j∣∣∣ and the fact that ε ∈ (0, 1), and the last inequality follows by our choice of m = ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ . This establishes that for any realization â(·) of âl−1(·) and a realization x of x satisfying x ∈ Z , the event Ec occurs with probability at most δ4η . Removing the conditioning on EZ : We have by law of total probability P(E | â(·), E1/2) ≥ ∫ x∈Z P(E | â(·),x, EZ , E1/2) P x∼D (x = x | â(·), E1/2) dx ≥ ( 1− δ 4η )∫ x∈Z P x∼D (x = x | â(·), E1/2) dx = ( 1− δ 4η ) P x∼D (EZ | â(·), E1/2) ≥ ( 1− δ 4η )( 1− δ 8η ) ≥ 1− 3δ 8η where the second-to-last inequality follows from the fact that P(Ec | â(·),x, EZ , E1/2) ≤ δ4η as was established above and the last inequality follows by Lemma 7. Putting it all together Finally, we marginalize out the random variable â`−1(·) to establish P(E | E1/2) = ∫ â(·) P(E | â(·), E1/2)P(â(·) | E1/2) dâ(·) ≥ ( 1− 3δ 8η )∫ â(·) P(â(·) | E1/2) dâ(·) = 1− 3δ 8η . Consequently, P(Ec | E1/2) ≤ 1− ( 1− 3δ 8η ) = 3δ 8η , and this concludes the proof. A.2 ANALYTICAL RESULTS FOR SECTION 5.3 (IMPORTANCE SAMPLING BOUNDS) We begin by establishing an auxiliary result that we will need for the subsequent lemmas. A.2.1 EMPIRICAL ∆`i APPROXIMATION Lemma 8 (Empirical ∆`i Approximation). Let δ ∈ (0, 1), λ∗ = log(η η∗)/2, and define ∆̂` = ( 1 |S| max i∈[η`] ∑ x′∈S ∆`i(x ′) ) + κ, where κ = √ 2λ∗ ( 1 + √ 2λ∗ log (8 η η ∗/δ) ) and S ⊆ P is as in Alg. 1. Then, P x∼D ( max i∈[η`] ∆`i(x) ≤ ∆̂` ) ≥ 1− δ 4η . Proof. Define the random variables Yx′ = E [∆`i(x′)]−∆`i(x′) for each x′ ∈ S and consider the sum Y = ∑ x′∈S Yx′ = ∑ x′∈S ( E [∆`i(x)]−∆`i(x′) ) . 
We know that each random variable Yx′ satisfies E [Yx′ ] = 0 and by Assumption 2, is subexponential with parameter λ ≤ λ∗. Thus, Y is a sum of |S| independent, zero-mean λ∗-subexponential random variables, which implies that E [Y] = 0 and that we can readily apply Bernstein’s inequality for subexponential random variables (Vershynin, 2016) to obtain for t ≥ 0 P ( 1 |S| Y ≥ t ) ≤ exp ( −|S| min { t2 4λ2∗ , t 2λ∗ }) . Since S = dlog (8 η η∗/δ) log(η η∗)e ≥ log (8 η η∗/δ) 2λ∗, we have for t = √ 2λ∗, P ( E [∆`i(x)]− 1 |S| ∑ x′∈S ∆`i(x ′) ≥ t ) = P ( 1 |S| Y ≥ t ) ≤ exp ( −|S| t 2 4λ2∗ ) ≤ exp (− log (8 η η∗/δ)) = δ 8 η η∗ . Moreover, for a single Yx, we have by the equivalent definition of a subexponential random variable (Vershynin, 2016) that for u ≥ 0 P(∆`i(x)− E [∆`i(x)] ≥ u) ≤ exp ( −min { − u 2 4λ2∗ , u 2λ∗ }) . Thus, for u = 2λ∗ log (8 η η∗/δ) we obtain P(∆`i(x)− E [∆`i(x)] ≥ u) ≤ exp (− log (8 η η∗/δ)) = δ 8 η η∗ . Therefore, by the union bound, we have with probability at least 1− δ4η η∗ : ∆`i(x) ≤ E [∆`i(x)] + u ≤ ( 1 |S| ∑ x′∈S ∆`i(x ′) + t ) + u = 1 |S| ∑ x′∈S ∆`i(x ′) + (√ 2λ∗ + 2λ∗ log (8 η η ∗/δ) ) = 1 |S| ∑ x′∈S ∆`i(x ′) + κ ≤ ∆̂`, where the last inequality follows by definition of ∆̂`. Thus, by the union bound, we have P x∼D ( max i∈[η`] ∆`i(x) > ∆̂ ` ) = P ( ∃i ∈ [η`] : ∆`i(x) > ∆̂` ) ≤ ∑ i∈[η`] P ( ∆`i(x) > ∆̂ ` ) ≤ η` ( δ 4η η∗ ) ≤ δ 4 η , where the last line follows by definition of η∗ ≥ η`. A.2.2 NOTATION FOR THE SUBSEQUENT ANALYSIS Let ŵ`+i and ŵ `− i denote the sparsified row vectors generated when SPARSIFY is invoked with first two arguments corresponding to (W+, w`i ) and (W−,−w`i ), respectively (Alg. 1, Line 12). We will at times omit including the variables for the neuron i and layer ` in the proofs for clarity of exposition, and for example, refer to ŵ`+i and ŵ `− i as simply ŵ + and ŵ−, respectively. Let x ∼ D and define ẑ+(x) = ∑ k∈W+ ŵ+k âk(x) ≥ 0 and ẑ −(x) = ∑ k∈W− (−ŵ−k ) âk(x) ≥ 0 be the approximate intermediate values corresponding to the sparsified matrices ŵ+ and ŵ−; let z̃+(x) = ∑ k∈W+ wk âk(x) ≥ 0 and z̃−(x) = ∑ k∈W− (−wk) âk(x) ≥ 0 be the corresponding intermediate values with respect to the the original row vector w; and finally, let z+(x) = ∑ k∈W+ wk ak(x) ≥ 0 and z−(x) = ∑ k∈W− (−wk) ak(x) ≥ 0 be the true intermediate values corresponding to the positive and negative valued weights. Note that in this context, we have by definition ẑ`i (x) = 〈ŵ, â(x)〉 = ẑ+(x)− ẑ−(x), z̃`i (x) = 〈w, â(x)〉 = z̃+(x)− z̃−(x), and z`i (x) = 〈w, a(x)〉 = z+(x)− z−(x), where we used the fact that ŵ = ŵ+ − ŵ− ∈ R1×η`−1 . A.2.3 PROOF OF LEMMA 2 Lemma 2 (Conditional Neuron Value Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, i ∈ [η`], and x ∼ D. CORENET generates a row vector ŵ`i = ŵ `+ i − ŵ `− i ∈ R1×η `−1 such that P ( E`i | E`−1 ) = P ( ẑ`i (x) ∈ (1± 2 (`− 1) ε`+1) z`i (x) | E`−1 ) ≥ 1− δ/η, (1) where ε` = ε ′ ∆̂`→ and nnz(ŵ`i ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε`2 ⌉ + 1, where S = ∑ j∈W+ sj + ∑ j∈W− sj . Proof. Let ε, δ ∈ (0, 1) be arbitrary and let W+ = {j ∈ [η`−1] : wj > 0} and W− = {j ∈ [η`−1] : wj < 0} as in Alg. 1. Let ε` be defined as before, ε` = ε ′ ∆̂`→ , where ∆̂`→ = ∏L k=` ∆̂ k and ∆̂` = ( 1 |S| maxi∈[η`] ∑ x′∈S ∆ ` i(x ′) ) + κ. Observe that wj > 0 ∀j ∈ W+ and similarly, for all (−wj) > 0 ∀j ∈ W−. That is, each of index setsW+ andW− corresponds to strictly positive entries in the arguments w`i and −w`i , respectively passed into SPARSIFY. 
Observe that since we conditioned on the event E`−1, we have 2 (`− 2) ε` ≤ 2 (`− 2) ε 2 (L− 1) ∏L k=` ∆̂ k ≤ ε∏L k=` ∆̂ k ≤ ε 2L−`+1 Since ∆̂k ≥ 2 ∀k ∈ {`, . . . , L} ≤ ε 2 , where the inequality ∆̂k ≥ 2 follows from the fact that ∆̂k = ( 1 |S| max i∈[η`] ∑ x′∈S ∆`i(x ′) ) + κ ≥ 1 + κ Since ∆`i(x′) ≥ 1 ∀x′ ∈ supp(D) by definition ≥ 2. we obtain that â(x) ∈ (1 ± ε/2)a(x), where, as before, â and a are shorthand notations for â`−1 ∈ Rη`−1×1 and a`−1 ∈ Rη`−1×1, respectively. This implies that E`−1 ⇒ E1/2 and since m = ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ in Alg. 2 we can invoke Lemma 1 with ε = ε` on each of the SPARSIFY invocations to conclude that P ( ẑ+(x) /∈ (1± ε`)z̃+(x) | E`−1 ) ≤ P ( ẑ+(x) /∈ (1± ε`)z̃+(x) | E1/2 ) ≤ 3δ 8η , and P ( ẑ−(x) /∈ (1± ε`)z̃−(x) | E`−1 ) ≤ 3δ 8η . Therefore, by the union bound, we have P ( ẑ+(x) /∈ (1± ε`)z̃+(x) or ẑ−(x) /∈ (1± ε`)z̃−(x) | E`−1 ) ≤ 3δ 8η + 3δ 8η = 3δ 4η . Moreover, by Lemma 8, we have with probability at most δ4η that ∆`i(x) > ∆̂ `. Thus, by the union bound over the failure events, we have that with probability at least 1 − (3δ/4η + δ/4η) = 1− δ/η that both of the following events occur 1. ẑ+(x) ∈ (1± ε`)z̃+(x) and ẑ−(x) ∈ (1± ε`)z̃−(x) (7) 2. ∆`i(x) ≤ ∆̂` (8) Recall that ε′ = ε2 (L−1) , ε` = ε′ ∆̂`→ , and that event E`i denotes the (desirable) event that ẑ`i (x) (1± 2 (`− 1) ε`+1) z`i (x) holds, and similarly, E` = ∩i∈[η`] E`i denotes the vector-wise analogue where ẑ`(x) (1± 2 (`− 1) ε`+1) z`(x). Let k = 2 (`− 1) and note that by conditioning on the event E`−1, i.e., we have by definition â`−1(x) ∈ (1± 2 (`− 2)ε`)a`−1(x) = (1± k ε`)a`−1(x), which follows by definition of the ReLU function. Recall that our overarching goal is to establish that ẑ`i (x) ∈ (1± 2 (`− 1)ε`+1) z`i (x), which would immediately imply by definition of the ReLU function that â`i(x) ∈ (1± 2 (`− 1)ε`+1) a`i(x). Having clarified the conditioning and our objective, we will once again drop the index i from the expressions moving forward. Proceeding from above, we have with probability at least 1− δ/η ẑ(x) = ẑ+(x)− ẑ−(x) ≤ (1 + ε`) z̃+(x)− (1− ε`) z̃−(x) By Event (7) above ≤ (1 + ε`)(1 + k ε`) z+(x)− (1− ε`)(1− k ε`) z−(x) Conditioning on event E`−1 = ( 1 + ε`(k + 1) + kε 2 ` ) z+(x) + ( −1 + (k + 1)ε` − kε2` ) z−(x) = ( 1 + k ε2` ) z(x) + (k + 1) ε` ( z+(x) + z−(x) ) = ( 1 + k ε2` ) z(x) + (k + 1) ε′∏L k=` ∆̂ k ( z+(x) + z−(x) ) ≤ ( 1 + k ε2` ) z(x) + (k + 1) ε′ ∆`i(x) ∏L k=`+1 ∆̂ k ( z+(x) + z−(x) ) By Event (8) above = ( 1 + k ε2` ) z(x) + (k + 1) ε′∏L k=`+1 ∆̂ k |z(x)| By ∆`i(x) = z+(x) + z−(x) |z(x)| = ( 1 + k ε2` ) z(x) + (k + 1) ε`+1 |z(x)|. To upper bound the last expression above, we begin by observing that kε2` ≤ ε`, which follows from the fact that ε` ≤ 12 (L−1) ≤ 1 k by definition. Moreover, we also note that ε` ≤ ε`+1 by definition of ∆̂` ≥ 1. Now, we consider two cases. Case of z(x) ≥ 0: In this case, we have ẑ(x) ≤ ( 1 + k ε2` ) z(x) + (k + 1) ε`+1 |z(x)| ≤ (1 + ε`)z(x) + (k + 1)ε`+1z(x) ≤ (1 + ε`+1)z(x) + (k + 1)ε`+1z(x) = (1 + (k + 2) ε`+1) z(x) = (1 + 2 (`− 1)ε`+1) z(x), where the last line follows by definition of k = 2 (`− 2), which implies that k + 2 = 2(`− 1). Thus, this establishes the desired upper bound in the case that z(x) ≥ 0. Case of z(x) < 0: Since z(x) is negative, we have that ( 1 + k ε2` ) z(x) ≤ z(x) and |z(x)| = −z(x) and thus ẑ(x) ≤ ( 1 + k ε2` ) z(x) + (k + 1) ε`+1 |z(x)| ≤ z(x)− (k + 1)ε`+1z(x) ≤ (1− (k + 1)ε`+1) z(x) ≤ (1− (k + 2)ε`+1) z(x) = (1− 2 (`− 1)ε`+1) z(x), and this establishes the upper bound for the case of z(x) being negative. 
Putting the results of the case by case analysis together, we have the upper bound of ẑ(x) ≤ z(x) + 2 (` − 1)ε`+1|z(x)|. The proof for establishing the lower bound for z(x) is analogous to that given above, and yields ẑ(x) ≥ z(x)−2 (`−1)ε`+1|z(x)|. Putting both the upper and lower bound together, we have that with probability at least 1− δη : ẑ(x) ∈ (1± 2 (`− 1)ε`+1) z(x), and this completes the proof. A.2.4 REMARKS ON NEGATIVE ACTIVATIONS We note that up to now we assumed that the input a(x), i.e., the activations from the previous layer, are strictly nonnegative. For layers ` ∈ {3, . . . , L}, this is indeed true due to the nonnegativity of the ReLU activation function. For layer 2, the input is a(x) = x, which can be decomposed into a(x) = apos(x) − aneg(x), where apos(x) ≥ 0 ∈ Rη `−1 and aneg(x) ≥ 0 ∈ Rη `−1 . Furthermore, we can define the sensitivity over the set of points {apos(x), aneg(x) | x ∈ S} (instead of {a(x) | x ∈ S}), and thus maintain the required nonnegativity of the sensitivities. Then, in the terminology of Lemma 2, we let z+pos(x) = ∑ k∈W+ wk apos,k(x) ≥ 0 and z−neg(x) = ∑ k∈W− (−wk) aneg,k(x) ≥ 0 be the corresponding positive parts, and z+neg(x) = ∑ k∈W+ wk aneg,k(x) ≥ 0 and z−pos(x) = ∑ k∈W− (−wk) apos,k(x) ≥ 0 be the corresponding negative parts of the preactivation of the considered layer, such that z+(x) = z+pos(x) + z − neg(x) and z −(x) = z+neg(x) + z − pos(x). We also let ∆`i(x) = z+(x) + z−(x) |z(x)| be as before, with z+(x) and z−(x) defined as above. Equipped with above definitions, we can rederive Lemma 2 analogously in the more general setting, i.e., with potentially negative activations. We also note that we require a slightly larger sample size now since we have to take a union bound over the failure probabilities of all four approximations (i.e. ẑ+pos(x), ẑ − neg(x), ẑ + neg(x), and ẑ − pos(x)) to obtain the desired overall failure probability of δ/η. A.2.5 PROOF OF THEOREM 4 The following corollary immediately follows from Lemma 2 and establishes a layer-wise approximation guarantee. Corollary 9 (Conditional Layer-wise Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, and x ∼ D. CORENET generates a sparse weight matrix Ŵ ` = ( ŵ`1, . . . , ŵ ` η` )> ∈ Rη`×η`−1 such that P(E` | E`−1) = P ( ẑ`(x) ∈ (1± 2 (`− 1) ε`+1) z`(x) | E`−1 ) ≥ 1− δ η ` η , (9) where ε` = ε ′ ∆̂`→ , ẑ`(x) = Ŵ `â`(x), and z`(x) = W `a`(x). Proof. Since (1) established by Lemma 2 holds for any neuron i ∈ [η`] in layer ` and since (E`)c = ∪i∈[η`](E`i )c, it follows by the union bound over the failure events (E`i )c for all i ∈ [η`] that with probability at least 1− η `δ η ẑ`(x) = Ŵ `â`−1(x) ∈ (1± 2 (`− 1) ε`+1)W `a`−1(x) = (1± 2 (`− 1) ε`+1) z`(x). The following lemma removes the conditioning on E`−1 and explicitly considers the (compounding) error incurred by generating coresets Ŵ 2, . . . , Ŵ ` for multiple layers. Lemma 3 (Layer-wise Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, and x ∼ D. CORENET generates a sparse weight matrix Ŵ ` ∈ Rη`×η`−1 such that, for ẑ`(x) = Ŵ `â`(x), P (Ŵ 2,...,Ŵ `), x (E`) = P (Ŵ 2,...,Ŵ `), x ( ẑ`(x) ∈ (1± 2 (`− 1) ε`+1) z`(x) ) ≥ 1− δ ∑` `′=2 η `′ η . Proof. Invoking Corollary 9, we know that for any layer `′ ∈ {2, . . . , L}, P Ŵ `′ , x, â`′−1(·) (E` ′ | E` ′−1) ≥ 1− δ η `′ η . 
(10) We also have by the law of total probability that P(E` ′ ) = P(E` ′ | E` ′−1)P(E` ′−1) + P(E` ′ | (E` ′−1)c)P((E` ′−1)c) ≥ P(E` ′ | E` ′−1)P(E` ′−1) (11) Repeated applications of (10) and (11) in conjunction with the observation that P(E1) = 14 yield P(E`) ≥ P(E` ′ | E` ′−1)P(E` ′−1) ... Repeated applications of (11) ≥ ∏̀ `′=2 P(E` ′ | E` ′−1) ≥ ∏̀ `′=2 ( 1− δ η `′ η ) By (10) ≥ 1− δ η ∑̀ `′=2 η` ′ By the Weierstrass Product Inequality, where the last inequality follows by the Weierstrass Product Inequality5 and this establishes the lemma. Appropriately invoking Lemma 3, we can now establish the approximation guarantee for the entire neural network. This is stated in Theorem 4 and the proof can be found below. Theorem 4 (Network Compression). For ε, δ ∈ (0, 1), Algorithm 1 generates a set of parameters θ̂ = (Ŵ 2, . . . , ŴL) of size nnz(θ̂) ≤ L∑ `=2 η`∑ i=1 (⌈ 32 (L− 1)2 (∆̂`→)2 S`i log(η η∗) log(8 η/δ) ε2 ⌉ + 1 ) in O ( η η∗ log ( η η∗/δ )) time such that Pθ̂, x∼D ( fθ̂(x) ∈ (1± ε)fθ(x) ) ≥ 1− δ. 4Since we do not compress the input layer. 5The Weierstrass Product Inequality (Doerr, 2018) states that for p1, . . . , pn ∈ [0, 1], n∏ i=1 (1− pi) ≥ 1− n∑ i=1 pi. Proof. Invoking Lemma 3 with ` = L, we have that for θ̂ = (Ŵ 2, . . . , ŴL), P̂ θ, x ( fθ̂(x) ∈ 2 (L− 1) εL+1fθ(x) ) = P̂ θ, x (ẑL(x) ∈ 2 (L− 1) εL+1zL(x)) = P(EL) ≥ 1− δ ∑L `′=2 η `′ η = 1− δ, where the last equality follows by definition of η = ∑L `=2 η `. Note that by definition, εL+1 = ε 2 (L− 1) ∏L k=L+1 ∆̂ k = ε 2 (L− 1) , where the last equality follows by the fact that the empty product ∏L k=L+1
1. What is the focus of the paper regarding neural network reduction? 2. What are the pros and cons of the proposed method according to the reviewer? 3. What are some concerns regarding the theory presented in the paper? 4. How does the reviewer assess the performance of the method compared to other approaches? 5. Are there any suggestions for improving the method or its presentation in the review?
Review
Review The authors propose to reduce the size of fully connected neural networks, defined as the total number of nonzeros in the weight matrices, by calculating sensitivity scores for each incoming connection to a neuron and randomly keeping only some of the incoming connections, with probability proportional to their share of the total sensitivity. They provide a specific definition for the sensitivity scores and establish that the sparsified neural network, with constant probability for any sample from the training population, provides an output that is a small multiplicative factor away from the output of the unsparsified neural network. The cost of the sparsification is essentially the application of the trained neural network to a small number of data points in order to compute the sensitivity scores.
Pros:
- The method works empirically, in that the evaluations on MNIST, CIFAR, and FashionMNIST classification problems show that the drop in accuracy is lower when the neural net is sparsified using the CoreNet algorithm and its variations than when it is randomly sparsified or the network size is reduced using SVD.
- Theory is provided to argue the consistency of the sparsified neural network.
Cons:
- No comparison is made to the baseline of applying matrix sparsification algorithms to the weight matrices themselves. I do not see why CoreNet should be expected to perform empirically better than simply using, e.g., the entry-wise sampling scheme from "Near-optimal entrywise sampling for data matrices" by Achlioptas and co-authors, or earlier works addressing the same problem of sparsifying matrices.
- The theory makes very strong assumptions (Assumptions 1 and 2) that are not explained or justified well. Both depend on the specific weight matrices being sparsified, and it is not clear a priori when the weight matrices obtained from whatever optimization procedure was used to train the network will be such that these assumptions hold.
- Despite the suggestions of the theory, the accuracy drop can be quite large in practice, as in the CIFAR panel of Figure 1.
I think the ICLR audience will appreciate the attempt to provide a principled approach to decreasing the size of neural networks, but I do not think this approach is widely compelling, as: (1) no true guaranteed control on the trade-off between accuracy loss and network size is available; (2) empirically, the method does not perform well consistently; and (3) comparisons with reasonable and informative baselines are missing.
Updated in response to author response: the inclusion of experimental comparisons with linear-algebraic sparsification baselines, showing that the proposed method can be significantly more accurate, strengthens the appeal of the method.
ICLR
Title Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds Abstract We present an efficient coresets-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network’s output. Our approach is based on an importance sampling scheme that judiciously defines a sampling distribution over the neural network parameters, and as a result, retains parameters of high importance while discarding redundant ones. We leverage a novel, empirical notion of sensitivity and extend traditional coreset constructions to the application of compressing parameters. Our theoretical analysis establishes guarantees on the size and accuracy of the resulting compressed network and gives rise to generalization bounds that may provide new insights into the generalization properties of neural networks. We demonstrate the practical effectiveness of our algorithm on a variety of neural network configurations and real-world data sets. 1 INTRODUCTION Within the past decade, large-scale neural networks have demonstrated unprecedented empirical success in high-impact applications such as object classification, speech recognition, computer vision, and natural language processing. However, with the ever-increasing size of state-of-the-art neural networks, the resulting storage requirements and performance of these models are becoming increasingly prohibitive in terms of both time and space. Recently proposed architectures for neural networks, such as those in Krizhevsky et al. (2012); Long et al. (2015); Badrinarayanan et al. (2015), contain millions of parameters, rendering them prohibitive to deploy on platforms that are resource-constrained, e.g., embedded devices, mobile phones, or small scale robotic platforms. In this work, we consider the problem of sparsifying the parameters of a trained fully-connected neural network in a principled way so that the output of the compressed neural network is approximately preserved. We introduce a neural network compression approach based on identifying and removing weighted edges with low relative importance via coresets, small weighted subsets of the original set that approximate the pertinent cost function. Our compression algorithm hinges on extensions of the traditional sensitivity-based coresets framework (Langberg & Schulman, 2010; Braverman et al., 2016), and to the best of our knowledge, is the first to apply coresets to parameter downsizing. In this regard, our work aims to simultaneously introduce a practical algorithm for compressing neural network parameters with provable guarantees and close the research gap in prior coresets work, which has predominantly focused on compressing input data points. In particular, this paper contributes the following: 1. A coreset approach to compressing problem-specific parameters based on a novel, empirical notion of sensitivity that extends state-of-the-art coreset constructions. 2. An efficient neural network compression algorithm, CoreNet, based on our extended coreset approach that sparsifies the parameters via importance sampling of weighted edges. 3. Extensions of the CoreNet method, CoreNet+ and CoreNet++, that improve upon the edge sampling approach by additionally performing neuron pruning and amplification. 
†Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, emails: {baykal, lucasl, igilitschenski, rus}@mit.edu ‡Robotics and Big Data Laboratory, University of Haifa, email: dannyf.post@gmail.com *These authors contributed equally to this work 4. Analytical results establishing guarantees on the approximation accuracy, size, and generalization of the compressed neural network. 5. Evaluations on real-world data sets that demonstrate the practical effectiveness of our algorithm in compressing neural network parameters and validate our theoretical results. 2 RELATED WORK Our work builds upon the following prior work in coresets and compression approaches. Coresets Coreset constructions were originally introduced in the context of computational geometry (Agarwal et al., 2005) and subsequently generalized for applications to other problems via an importance sampling-based, sensitivity framework (Langberg & Schulman, 2010; Braverman et al., 2016). Coresets have been used successfully to accelerate various machine learning algorithms such as k-means clustering (Feldman & Langberg, 2011; Braverman et al., 2016), graphical model training (Molina et al., 2018), and logistic regression (Huggins et al., 2016) (see the surveys of Bachem et al. (2017) and Munteanu & Schwiegelshohn (2018) for a complete list). In contrast to prior work, we generate coresets for reducing the number of parameters – rather than data points – via a novel construction scheme based on an efficiently-computable notion of sensitivity. Low-rank Approximations and Weight-sharing Denil et al. (2013) were among the first to empirically demonstrate the existence of significant parameter redundancy in deep neural networks. A predominant class of compression approaches consists of using low-rank matrix decompositions, such as Singular Value Decomposition (SVD) (Denton et al., 2014), to approximate the weight matrices with their low-rank counterparts. Similar works entail the use of low-rank tensor decomposition approaches applicable both during and after training (Jaderberg et al., 2014; Kim et al., 2015; Tai et al., 2015; Ioannou et al., 2015; Alvarez & Salzmann, 2017; Yu et al., 2017). Another class of approaches uses feature hashing and weight sharing (Weinberger et al., 2009; Shi et al., 2009; Chen et al., 2015b;a; Ullrich et al., 2017). Building upon the idea of weight-sharing, quantization (Gong et al., 2014; Wu et al., 2016; Zhou et al., 2017) or regular structure of weight matrices was used to reduce the effective number of parameters (Zhao et al., 2017; Sindhwani et al., 2015; Cheng et al., 2015; Choromanska et al., 2016; Wen et al., 2016). Despite their practical effectiveness in compressing neural networks, these works generally lack performance guarantees on the quality of their approximations and/or the size of the resulting compressed network. Weight Pruning Similar to our proposed method, weight pruning (LeCun et al., 1990) hinges on the idea that only a few dominant weights within a layer are required to approximately preserve the output. Approaches of this flavor have been investigated by Lebedev & Lempitsky (2016); Dong et al. (2017), e.g., by embedding sparsity as a constraint (Iandola et al., 2016; Aghasi et al., 2017; Lin et al., 2017). Another related approach is that of Han et al. (2015), which considers a combination of weight pruning and weight sharing methods. 
Nevertheless, prior work in weight pruning lacks rigorous theoretical analysis of the effect that the discarded weights can have on the compressed network. To the best of our knowledge, our work is the first to introduce a practical, sampling-based weight pruning algorithm with provable guarantees. Generalization The generalization properties of neural networks have been extensively investigated in various contexts (Dziugaite & Roy, 2017; Neyshabur et al., 2017a; Bartlett et al., 2017). However, as was pointed out by Neyshabur et al. (2017b), current approaches to obtaining non-vacuous generalization bounds do not fully or accurately capture the empirical success of state-of-the-art neural network architectures. Recently, Arora et al. (2018) and Zhou et al. (2018) highlighted the close connection between compressibility and generalization of neural networks. Arora et al. (2018) presented a compression method based on the Johnson-Lindenstrauss (JL) Lemma (Johnson & Lindenstrauss, 1984) and proved generalization bounds based on succinct reparameterizations of the original neural network. Building upon the work of Arora et al. (2018), we extend our theoretical compression results to establish novel generalization bounds for fully-connected neural networks. Unlike the method of Arora et al. (2018), which exhibits guarantees of the compressed network’s performance only on the set of training points, our method’s guarantees hold (probabilistically) for any random point drawn from the distribution. In addition, we establish that our method can ε-approximate the neural network output neuron-wise, which is stronger than the norm-based guarantee of Arora et al. (2018). In contrast to prior work, this paper addresses the problem of compressing a fully-connected neural network while provably preserving the network’s output. Unlike previous theoretically-grounded compression approaches – which provide guarantees in terms of the normed difference –, our method provides the stronger entry-wise approximation guarantee, even for points outside of the available data set. As our empirical results show, ensuring that the output of the compressed network entry-wise approximates that of the original network is critical to retaining high classification accuracy. Overall, our compression approach remedies the shortcomings of prior approaches in that it (i) exhibits favorable theoretical properties, (ii) is computationally efficient, e.g., does not require retraining of the neural network, (iii) is easy to implement, and (iv) can be used in conjunction with other compression approaches – such as quantization or Huffman coding – to obtain further improved compression rates. 3 PROBLEM DEFINITION 3.1 FULLY-CONNECTED NEURAL NETWORKS A feedforward fully-connected neural network withL ∈ N+ layers and parameters θ defines a mapping fθ : X → Y for a given input x ∈ X ⊆ Rd to an output y ∈ Y ⊆ Rk as follows. Let η` ∈ N+ denote the number of neurons in layer ` ∈ [L], where [L] = {1, . . . , L} denotes the index set, and where η1 = d and ηL = k. Further, let η = ∑L `=2 η ` and η∗ = max`∈{2,...,L} η`. For layers ` ∈ {2, . . . , L}, let W ` ∈ Rη`×η`−1 be the weight matrix for layer ` with entries denoted by w`ij , rows denoted by w`i ∈ R1×η `−1 , and θ = (W 2, . . . ,WL). For notational simplicity, we assume that the bias is embedded in the weight matrix. Then for an input vector x ∈ Rd, let a1 = x and z` = W `a`−1 ∈ Rη` , ∀` ∈ {2, . . . , L}, where a`−1 = φ(z`−1) ∈ Rη`−1 denotes the activation. 
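For concreteness, the following is a minimal Python sketch of the forward pass fθ(x) defined above for a ReLU network; the layer sizes and variable names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def relu(z):
    # ReLU activation, applied entry-wise: phi(z) = max(z, 0)
    return np.maximum(z, 0.0)

def forward(theta, x):
    """Compute f_theta(x) = z^L for weight matrices theta = (W^2, ..., W^L).

    Each W^l has shape (eta^l, eta^{l-1}); as in Sec. 3.1, the bias is assumed
    to be embedded in the weight matrix and is omitted here for brevity.
    """
    a = x                                           # a^1 = x
    for l, W in enumerate(theta, start=2):          # l runs over {2, ..., L}
        z = W @ a                                   # z^l = W^l a^{l-1}
        a = z if l == len(theta) + 1 else relu(z)   # the output layer z^L is not rectified
    return z                                        # f_theta(x) = z^L

# Tiny usage example with random weights (d = 4 inputs, one hidden layer, k = 3 outputs).
rng = np.random.default_rng(0)
theta = [rng.standard_normal((5, 4)), rng.standard_normal((3, 5))]
x = rng.standard_normal(4)
print(forward(theta, x))
```

For classification, the predicted label is the argmax over the k entries of the returned vector, as stated above.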
We consider the activation function to be the Rectified Linear Unit (ReLU) function, i.e., φ(·) = max{· , 0} (entry-wise, if the input is a vector). The output of the network for an input x is fθ(x) = zL, and in particular, for classification tasks the prediction is argmaxi∈[k] fθ(x)i = argmaxi∈[k] z L i . 3.2 NEURAL NETWORK CORESET PROBLEM Consider the setting where a neural network fθ(·) has been trained on a training set of independent and identically distributed (i.i.d.) samples from a joint distribution on X × Y , yielding parameters θ = (W 2, . . . ,WL). We further denote the input points of a validation data set as P = {xi}ni=1 ⊆ X and the marginal distribution over the input space X as D. We define the size of the parameter tuple θ, nnz(θ), to be the sum of the number of non-zero entries in the weight matrices W 2, . . . ,WL. For any given ε, δ ∈ (0, 1), our overarching goal is to generate a reparameterization θ̂, yielding the neural network fθ̂(·), using a randomized algorithm, such that nnz(θ̂) nnz(θ), and the neural network output fθ(x), x ∼ D can be approximated up to 1 ± ε multiplicative error with probability greater than 1 − δ. We define the 1 ± ε multiplicative error between two k-dimensional vectors a, b ∈ Rk as the following entry-wise bound: a ∈ (1± ε)b ⇔ ai ∈ (1± ε)bi ∀i ∈ [k], and formalize the definition of an (ε, δ)-coreset as follows. Definition 1 ((ε, δ)-coreset). Given user-specified ε, δ ∈ (0, 1), a set of parameters θ̂ = (Ŵ 2, . . . , ŴL) is an (ε, δ)-coreset for the network parameterized by θ if for x ∼ D, it holds that P̂ θ,x (fθ̂(x) ∈ (1± ε)fθ(x)) ≥ 1− δ, where Pθ̂,x denotes a probability measure with respect to a random data point x and the output θ̂ generated by a randomized compression scheme. 4 METHOD In this section, we introduce our neural network compression algorithm as depicted in Alg. 1. Our method is based on an important sampling-scheme that extends traditional sensitivity-based coreset constructions to the application of compressing parameters. 4.1 CORENET Our method (Alg. 1) hinges on the insight that a validation set of data points P i.i.d.∼ Dn can be used to approximate the relative importance, i.e., sensitivity, of each weighted edge with respect to the input data distributionD. For this purpose, we first pick a subsample of the data points S ⊆ P of appropriate size (see Sec. 5 for details) and cache each neuron’s activation and compute a neuron-specific constant to be used to determine the required edge sampling complexity (Lines 2-6). Algorithm 1 CORENET Input: ε, δ ∈ (0, 1): error and failure probability, respectively; P ⊆ X : a set of n points from the input space X such that P i.i.d.∼ Dn; θ = (W 2, . . . ,WL): parameters of the original uncompressed neural network. Output: θ̂ = (Ŵ 2, . . . , ŴL): sparsified parameter set such that fθ̂(·) ∈ (1± ε)fθ(·) (see Sec. 5 for details). 1: ε′ ← ε 2 (L−1) ; η ∗ ← max`∈{2,...,L−1} η`; η ← ∑L `=2 η `; λ∗ ← log(η η∗)/2; 2: S ← Uniform sample (without replacement) of dlog (8 η η∗/δ) log(η η∗)e points from P; 3: a1(x)← x ∀x ∈ S; 4: for x ∈ S do 5: for ` ∈ {2, . . . , L} do 6: a`(x)← φ(W `a`−1(x)); ∆`i(x)← ∑ k∈[η`−1] |w ` ik a `−1 k (x)|∣∣∣∑ k∈[η`−1] w ` ik a`−1 k (x) ∣∣∣ ; 7: for ` ∈ {2, . . . , L} do 8: ∆̂` ← ( 1 |S| maxi∈[η`] ∑ x∈S ∆ ` i(x) ) + κ, where κ = √ 2λ∗ ( 1 + √ 2λ∗ log (8 η η ∗/δ) ) ; 9: Ŵ ` ← (~0, . . . 
,~0) ∈ Rη `×η`−1 ; ∆̂`→ ← ∏L k=` ∆̂ k; ε` ← ε ′ ∆̂`→ ; 10: for all i ∈ [η`] do 11: W+ ← {j ∈ [η`−1] : w`ij > 0}; W− ← {j ∈ [η`−1] : w`ij < 0}; 12: ŵ`+i ← SPARSIFY(W+, w ` i , ε`, δ,S, a`−1); ŵ`−i ← SPARSIFY(W−,−w ` i , ε`, δ,S, a`−1); 13: ŵ`i ← ŵ`+i − ŵ `− i ; Ŵ ` i• ← ŵ`i ; . Consolidate the weights into the ith row of Ŵ `; 14: return θ̂ = (Ŵ 2, . . . , ŴL); Algorithm 2 SPARSIFY(W, w, ε, δ,S, a(·)) Input: W ⊆ [η`−1]: index set; w ∈ R1×η `−1 : row vector corresponding to the weights incoming to node i ∈ [η`] in layer ` ∈ {2, . . . , L}; ε, δ ∈ (0, 1): error and failure probability, respectively; S ⊆ P: subsample of the original point set; a(·): cached activations of previous layer for all x ∈ S. Output: ŵ: sparse weight vector. 1: for j ∈ W do 2: sj ← maxx∈S wjaj(x)∑ k∈W wkak(x) ; . Compute the sensitivity of each edge 3: S ← ∑ j∈W sj ; 4: for j ∈ W do . Generate the importance sampling distribution over the incoming edges 5: qj ← sjS ; 6: m← ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ ; . Compute the number of required samples 7: C ← a multiset of m samples fromW where each j ∈ W is sampled with probability qj ; 8: ŵ ← (0, . . . , 0) ∈ R1×η `−1 ; . Initialize the compressed weight vector 9: for j ∈ C do . Update the entries of the sparsified weight matrix according to the samples C 10: ŵj ← ŵj + wjmqj ; . Entries are reweighted by 1 mqj to ensure unbiasedness of our estimator 11: return ŵ; Subsequently, we apply our core sampling scheme to sparsify the set of incoming weighted edges to each neuron in all layers (Lines 7-13). For technical reasons (see Sec. 5), we perform the sparsification on the positive and negative weighted edges separately and then consolidate the results (Lines 11- 13). By repeating this procedure for all neurons in every layer, we obtain a set θ̂ = (Ŵ 2, . . . , ŴL) of sparse weight matrices such that the output of each layer and the entire network is approximately preserved, i.e., Ŵ `â`−1(x) ≈W `a`−1(x) and fθ̂(x) ≈ fθ(x), respectively 1. 1â`−1(x) denotes the approximation from previous layers for an input x ∼ D; see Sec. 5 for details. 4.2 SPARSIFYING WEIGHTS The crux of our compression scheme lies in Alg. 2 (invoked twice on Line 12, Alg. 1) and in particular, in the importance sampling scheme used to select a small subset of edges of high importance. The cached activations are used to compute the sensitivity, i.e., relative importance, of each considered incoming edge j ∈ W to neuron i ∈ [η`], ` ∈ {2, . . . , L} (Alg. 2, Lines 1-2). The relative importance of each edge j is computed as the maximum (over x ∈ S) ratio of the edge’s contribution to the sum of contributions of all edges. In other words, the sensitivity sj of an edge j captures the highest (relative) impact j had on the output of neuron i ∈ [η`] in layer ` across all x ∈ S . The sensitivities are then used to compute an importance sampling distribution over the incoming weighted edges (Lines 4-5). The intuition behind the importance sampling distribution is that if sj is high, then edge j is more likely to have a high impact on the output of neuron i, therefore we should keep edge j with a higher probability. m edges are then sampled with replacement (Lines 6-7) and the sampled weights are then reweighed to ensure unbiasedness of our estimator (Lines 9-10). 4.3 EXTENSIONS: NEURON PRUNING AND AMPLIFICATION In this subsection we outline two improvements to our algorithm that that do not violate any of our theoretical properties and may improve compression rates in practical settings. 
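Before turning to these extensions, the importance sampling scheme of Sec. 4.2 can be summarized in a rough Python sketch of SPARSIFY (Alg. 2); the data structures and variable names are our own illustrative assumptions, not the authors' implementation.

```python
import math
import numpy as np

def sparsify(W_idx, w, eps, delta, S_activations, eta, eta_star):
    """Sketch of Alg. 2: importance sampling over the incoming edges in W_idx.

    W_idx          -- indices of the (strictly positive) weights under consideration
    w              -- numpy weight row vector of neuron i (length eta^{l-1})
    S_activations  -- list of cached activation vectors a^{l-1}(x) for x in S
    eta, eta_star  -- total number of neurons and max layer width (for the sample size)
    """
    # Lines 1-2: empirical sensitivity s_j = max_{x in S} w_j a_j(x) / sum_k w_k a_k(x)
    s = {}
    for j in W_idx:
        s[j] = max(w[j] * a[j] / sum(w[k] * a[k] for k in W_idx) for a in S_activations)
    S_total = sum(s.values())                      # Line 3
    q = {j: s[j] / S_total for j in W_idx}         # Lines 4-5: importance sampling distribution
    # Line 6: number of samples required by the analysis in Sec. 5
    m = math.ceil(8 * S_total * math.log(eta * eta_star) * math.log(8 * eta / delta) / eps**2)
    # Line 7: sample m edges with replacement according to q
    idx = list(W_idx)
    C = np.random.choice(idx, size=m, replace=True, p=[q[j] for j in idx])
    # Lines 8-10: build the sparse row, reweighting by 1/(m q_j) for unbiasedness
    w_hat = np.zeros_like(w, dtype=float)
    for j in C:
        w_hat[j] += w[j] / (m * q[j])
    return w_hat
```

As in Alg. 1 (Lines 11-13), this routine would be invoked once on the positive weights and once on the negated negative weights of each neuron, and the two resulting sparse rows subtracted.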
Neuron pruning (CoreNet+) Similar to removing redundant edges, we can use the empirical activations to gauge the importance of each neuron. In particular, if the maximum activation (over all evaluations x ∈ S) of a neuron is equal to 0, then the neuron – along with all of its incoming and outgoing edges – can be pruned without significantly affecting the output with reasonable probability. This intuition can be made rigorous under the assumptions outlined in Sec. 5. Amplification (CoreNet++) Coresets that provide stronger approximation guarantees can be constructed via amplification – the procedure of constructing multiple approximations (coresets) (ŵ`i)1, . . . , (ŵ`i)τ over τ trials, and picking the best one. To evaluate the quality of each approximation, a different subset T ⊆ P \ S can be used to gauge performance. In practice, amplification entails constructing multiple approximations by executing Line 12 of Alg. 1 and picking the one that achieves the lowest relative error on T . 5 ANALYSIS In this section, we establish the theoretical guarantees of our neural network compression algorithm (Alg. 1). The full proofs of all the claims presented in this section can be found in the Appendix. 5.1 PRELIMINARIES Let x ∼ D be a randomly drawn input point. We explicitly refer to the pre-activation and activation values at layer ` ∈ {2, . . . , L} with respect to the input x ∈ supp(D) as z`(x) and a`(x), respectively. The values of z`(x) and a`(x) at each layer ` will depend on whether or not we compressed the previous layers `′ ∈ {2, . . . , `}. To formalize this interdependency, we let ẑ`(x) and â`(x) denote the respective quantities of layer ` when we replace the weight matrices W 2, . . . ,W ` in layers 2, . . . , ` by Ŵ 2, . . . , Ŵ `, respectively. For the remainder of this section (Sec. 5) we let ` ∈ {2, . . . , L} be an arbitrary layer and let i ∈ [η`] be an arbitrary neuron in layer `. For purposes of clarity and readability, we will omit the variable denoting the layer ` ∈ {2, . . . , L}, the neuron i ∈ [η`], and the incoming edge index j ∈ [η`−1], whenever they are clear from the context. For example, when referring to the intermediate value z`i (x) = 〈w`i , â`−1(x)〉 ∈ R of a neuron i ∈ [η`] in layer ` ∈ {2, . . . , L} with respect to a point x, we will simply write z(x) = 〈w, a(x)〉 ∈ R, where w := w`i ∈ R1×η`−1 and a(x) := a`−1(x) ∈ Rη`−1×1. Under this notation, the weight of an incoming edge j is denoted by wj ∈ R. 5.2 IMPORTANCE SAMPLING BOUNDS FOR POSITIVE WEIGHTS In this subsection, we establish approximation guarantees under the assumption that the weights are positive. Moreover, we also assume that the input, i.e., the activation from the previous layer, is non-negative (entry-wise). The subsequent subsection then relaxes these assumptions to conclude that a neuron’s value can be approximated well even when the weights and activations are not all positive and non-negative, respectively. Let W = {j ∈ [η`−1] : wj > 0} ⊆ [η`−1] be the set of indices of incoming edges with strictly positive weights. To sample the incoming edges to a neuron, we quantify the relative importance of each edge as follows. Definition 2 (Relative Importance). The importance of an incoming edge j ∈ W with respect to an input x ∈ supp(D) is given by the function gj(x), where gj(x) = wj aj(x) / ∑k∈W wk ak(x) for all j ∈ W. Note that gj(x) is a function of the random variable x ∼ D. 
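As a small worked illustration of Definition 2 (toy numbers chosen for exposition, not taken from the paper), consider a neuron with three positive incoming edges and two cached sample points:

```python
import numpy as np

# Toy example: three positive incoming edges, two cached points x_1, x_2.
w = np.array([2.0, 1.0, 0.5])                 # weights w_j, j in W
A = np.array([[1.0, 0.2, 0.1],                # a(x_1): cached activations for x_1
              [0.1, 1.0, 0.8]])               # a(x_2): cached activations for x_2

# Relative importance g_j(x) = w_j a_j(x) / sum_k w_k a_k(x), computed per sample point.
g = (w * A) / (w * A).sum(axis=1, keepdims=True)
# g ~= [[0.889, 0.089, 0.022],
#       [0.125, 0.625, 0.250]]

# Taking the maximum over the cached points gives the empirical sensitivities
# used by SPARSIFY (Definition 3, introduced next): s_j = max_{x in S} g_j(x).
s = g.max(axis=0)
# s ~= [0.889, 0.625, 0.250]; note that sum(s) >= 1, a fact used in the analysis.
```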
We now present our first assumption that pertains to the Cumulative Distribution Function (CDF) of the relative importance random variable. Assumption 1. For all j ∈ W , the CDF of the random variable gj(x), denoted by Fj (·), satisfies Fj (M/K) ≤ exp(−1/K), where M = min{x ∈ [0, 1] : Fj (x) = 1}, and K ∈ [2, log(η η∗)] is a universal constant.2 Assumption 1 is a technical assumption on the ratio of the weighted activations that will enable us to rule out pathological problem instances where the relative importance of each edge cannot be well-approximated using a small number of data points S ⊆ P . Henceforth, we consider a uniformly drawn (without replacement) subsample S ⊆ P as in Line 2 of Alg. 1, where |S| = dlog (8 η η∗/δ) log(η η∗)e, and define the sensitivity of an edge as follows. Definition 3 (Empirical Sensitivity). Let S ⊆ P be a subset of distinct points from P i.i.d.∼ Dn.Then, the sensitivity over positive edges j ∈ W directed to a neuron is defined as sj = maxx∈S gj(x). Our first lemma establishes a core result that relates the weighted sum with respect to the sparse row vector ŵ, ∑ k∈W ŵk âk(x), to the value of the of the weighted sum with respect to the ground-truth row vector w, ∑ k∈W wk âk(x). We remark that there is randomness with respect to the randomly generated row vector ŵ`i , a randomly drawn input x ∼ D, and the function â(·) = â`−1(·) defined by the randomly generated matrices Ŵ 2, . . . , Ŵ `−1 in the previous layers. Unless otherwise stated, we will henceforth use the shorthand notation P(·) to denote Pŵ`, x, â`−1(·). Moreover, for ease of presentation, we will first condition on the event E1/2 that â(x) ∈ (1± 1/2)a(x) holds. This conditioning will simplify the preliminary analysis and will be removed in our subsequent results. Lemma 1 (Positive-Weights Sparsification). Let ε, δ ∈ (0, 1), and x ∼ D. SPARSIFY(W, w, ε, δ,S, a(·)) generates a row vector ŵ such that P (∑ k∈W ŵk âk(x) /∈ (1± ε) ∑ k∈W wk âk(x) | E1/2 ) ≤ 3δ 8η where nnz(ŵ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ , and S = ∑ j∈W sj . 5.3 IMPORTANCE SAMPLING BOUNDS We now relax the requirement that the weights are strictly positive and instead consider the following index sets that partition the weighted edges: W+ = {j ∈ [η`−1] : wj > 0} andW− = {j ∈ [η`−1] : wj < 0}. We still assume that the incoming activations from the previous layers are positive (this assumption can be relaxed as discussed in Appendix A.2.4). We define ∆`i(x) for a point x ∼ D and neuron i ∈ [η`] as ∆`i(x) = ∑ k∈[η`−1] |w ` ik a `−1 k (x)| |∑k∈[η`−1] w`ik a`−1k (x)| . The following assumption serves a similar purpose as does Assumption 1 in that it enables us to approximate the random variable ∆`i(x) via an empirical estimate over a small-sized sample of data points S ⊆ P . Assumption 2 (Subexponentiality of ∆`i(x)). For any layer ` ∈ {2, . . . , L} and neuron i ∈ [η`], the centered random variable ∆ = ∆`i(x) − E x∼D[∆`i(x)] is subexponential (Vershynin, 2016) with parameter λ ≤ log(η η∗)/2, i.e., E [exp (s∆)] ≤ exp(s2λ2) ∀|s| ≤ 1λ . 2 2The upper bound of log(ηη∗) for K and λ can be considered somewhat arbitrary in the sense that, more generally, we only require that K,λ ∈ O(polylog(ηη∗|P|). Defining the upper bound in this way simplifies the presentation of the core ideas without having to deal with the constants involved in the asymptotic notation. For ε ∈ (0, 1) and ` ∈ {2, . . . , L}, we let ε′ = ε2 (L−1) and define ε` = ε′ ∆̂`→ = ε 2 (L−1) ∏L k=` ∆̂ k , where ∆̂` = ( 1 |S| maxi∈[η`] ∑ x′∈S ∆ ` i(x ′) ) + κ. 
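A short sketch of how the constants ∆̂` and ε` could be computed from cached ∆ values (cf. Lines 1 and 8 of Alg. 1) is given below; the storage layout and function name are our own illustrative assumptions, not the authors' implementation.

```python
import math
import numpy as np

def layer_constants(Delta, eps, L, eta, eta_star, delta):
    """Compute Delta_hat^l and eps_l for l = 2..L+1 (cf. Lines 1 and 8 of Alg. 1).

    Delta[l] is assumed to be an (|S| x eta^l) array whose (x', i) entry is
    Delta_i^l(x') = sum_k |w_ik a_k(x')| / |sum_k w_ik a_k(x')|.
    """
    lam_star = math.log(eta * eta_star) / 2
    kappa = math.sqrt(2 * lam_star) * (1 + math.sqrt(2 * lam_star) * math.log(8 * eta * eta_star / delta))
    # Delta_hat^l = (1/|S|) max_i sum_{x' in S} Delta_i^l(x') + kappa
    Delta_hat = {l: Delta[l].mean(axis=0).max() + kappa for l in range(2, L + 1)}

    eps_prime = eps / (2 * (L - 1))
    eps_l = {}
    for l in range(2, L + 2):
        # eps_l = eps' / (Delta_hat^l * ... * Delta_hat^L); the empty product (l = L+1) is 1.
        eps_l[l] = eps_prime / np.prod([Delta_hat[k] for k in range(l, L + 1)])
    return Delta_hat, eps_l
```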
To formalize the interlayer dependencies, for each i ∈ [η`] we let E`i denote the (desirable) event that ẑ`i (x) ∈ (1± 2 (`− 1) ε`+1) z`i (x) holds, and let E` = ∩i∈[η`] E`i be the intersection over the events corresponding to each neuron in layer `. Lemma 2 (Conditional Neuron Value Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, i ∈ [η`], and x ∼ D. CORENET generates a row vector ŵ`i = ŵ `+ i − ŵ `− i ∈ R1×η `−1 such that P ( E`i | E`−1 ) = P ( ẑ`i (x) ∈ (1± 2 (`− 1) ε`+1) z`i (x) | E`−1 ) ≥ 1− δ/η, (1) where ε` = ε ′ ∆̂`→ and nnz(ŵ`i ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε`2 ⌉ + 1, where S = ∑ j∈W+ sj + ∑ j∈W− sj . The following core result establishes unconditional layer-wise approximation guarantees and culminates in our main compression theorem. Lemma 3 (Layer-wise Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, and x ∼ D. CORENET generates a sparse weight matrix Ŵ ` ∈ Rη`×η`−1 such that, for ẑ`(x) = Ŵ `â`(x), P (Ŵ 2,...,Ŵ `), x (E`) = P (Ŵ 2,...,Ŵ `), x ( ẑ`(x) ∈ (1± 2 (`− 1) ε`+1) z`(x) ) ≥ 1− δ ∑` `′=2 η `′ η . Theorem 4 (Network Compression). For ε, δ ∈ (0, 1), Algorithm 1 generates a set of parameters θ̂ = (Ŵ 2, . . . , ŴL) of size nnz(θ̂) ≤ L∑ `=2 η`∑ i=1 (⌈ 32 (L− 1)2 (∆̂`→)2 S`i log(η η∗) log(8 η/δ) ε2 ⌉ + 1 ) in O ( η η∗ log ( η η∗/δ )) time such that Pθ̂, x∼D ( fθ̂(x) ∈ (1± ε)fθ(x) ) ≥ 1− δ. We note that we can obtain a guarantee for a set of n randomly drawn points by invoking Theorem 4 with δ′ = δ/n and union-bounding over the failure probabilities, while only increasing the sampling complexity logarithmically, as formalized in Corollary 12, Appendix A.2. 5.4 GENERALIZATION BOUNDS As a corollary to our main results, we obtain novel generalization bounds for neural networks in terms of empirical sensitivity. Following the terminology of Arora et al. (2018), the expected margin loss of a classifier fθ : Rd → Rk parameterized by θ with respect to a desired margin γ > 0 and distribution D is defined by Lγ(fθ) = P(x,y)∼DX ,Y (fθ(x)y ≤ γ + maxi 6=y fθ(x)i). We let L̂γ denote the empirical estimate of the margin loss. The following corollary follows directly from the argument presented in Arora et al. (2018) and Theorem 4. Corollary 5 (Generalization Bounds). For any δ ∈ (0, 1) and margin γ > 0, Alg. 1 generates weights θ̂ such that with probability at least 1 − δ, the expected error L0(fθ̂) with respect to the points in P ⊆ X , |P| = n, is bounded by L0(fθ̂) ≤ L̂γ(fθ) + Õ √maxx∈P ‖fθ(x)‖22 L2 ∑L`=2(∆̂`→)2 ∑η`i=1 S`i γ2 n . 6 RESULTS In this section, we evaluate the practical effectiveness of our compression algorithm on popular benchmark data sets (MNIST (LeCun et al., 1998), FashionMNIST (Xiao et al., 2017), and CIFAR10 (Krizhevsky & Hinton, 2009)) and varying fully-connected trained neural network configurations: 2 to 5 hidden layers, 100 to 1000 hidden units, either fixed hidden sizes or decreasing hidden size denoted by pyramid in the figures. We further compare the effectiveness of our sampling scheme in reducing the number of non-zero parameters of a network, i.e., in sparsifying the weight matrices, to that of uniform sampling, Singular Value Decomposition (SVD), and current state-of-the-art sampling schemes for matrix sparsification (Drineas & Zouzias, 2011; Achlioptas et al., 2013; Kundu & Drineas, 2014), which are based on matrix norms – `1 and `2 (Frobenius). The details of the experimental setup and results of additional evaluations may be found in Appendix B. 
Experiment Setup We compare against three variations of our compression algorithm: (i) sole edge sampling (CoreNet), (ii) edge sampling with neuron pruning (CoreNet+), and (iii) edge sampling with neuron pruning and amplification (CoreNet++). For comparison, we evaluated the average relative error in output (`1-norm) and average drop in classification accuracy relative to the accuracy of the uncompressed network. Both metrics were evaluated on a previously unseen test set. Results Results for varying architectures and datasets are depicted in Figures 1 and 2 for the average drop in classification accuracy and relative error (`1-norm), respectively. As apparent from Figure 1, we are able to compress networks to about 15% of their original size without significant loss of accuracy for networks trained on MNIST and FashionMNIST, and to about 50% of their original size for CIFAR. Discussion The simulation results presented in this section validate our theoretical results established in Sec. 5. In particular, our empirical results indicate that we are able to outperform networks compressed via competing methods in matrix sparsification across all considered experiments and trials. The results presented in this section further suggest that empirical sensitivity can effectively capture the relative importance of neural network parameters, leading to a more informed importance sampling scheme. Moreover, the relative performance of our algorithm tends to increase as we consider deeper architectures. These findings suggest that our algorithm may also be effective in compressing modern convolutional architectures, which tend to be very deep. 7 CONCLUSION We presented a coresets-based neural network compression algorithm for compressing the parameters of a trained fully-connected neural network in a manner that approximately preserves the network’s output. Our method and analysis extend traditional coreset constructions to the application of compressing parameters, which may be of independent interest. Our work distinguishes itself from prior approaches in that it establishes theoretical guarantees on the approximation accuracy and size of the generated compressed network. As a corollary to our analysis, we obtain generalization bounds for neural networks, which may provide novel insights on the generalization properties of neural networks. We empirically demonstrated the practical effectiveness of our compression algorithm on a variety of neural network configurations and real-world data sets. In future work, we plan to extend our algorithm and analysis to compress Convolutional Neural Networks (CNNs) and other network architectures. We conjecture that our compression algorithm can be used to reduce storage requirements of neural network models and enable fast inference in practical settings. ACKNOWLEDGMENTS This research was supported in part by the National Science Foundation award IIS-1723943. We thank Brandon Araki and Kiran Vodrahalli for valuable discussions and helpful suggestions. We would also like to thank Kasper Green Larsen, Alexander Mathiasen, and Allan Gronlund for pointing out an error in an earlier formulation of Lemma 6. A PROOFS OF THE ANALYTICAL RESULTS IN SECTION 5 This section includes the full proofs of the technical results given in Sec. 5. 
A.1 ANALYTICAL RESULTS FOR SECTION 5.2 (IMPORTANCE SAMPLING BOUNDS FOR POSITIVE WEIGHTS) A.1.1 ORDER STATISTIC SAMPLING We now establish a couple of technical results that will quantify the accuracy of our approximations of edge importance (i.e., sensitivity). Lemma 6. Let K > 0 be a universal constant and let D be a distribution with CDF F (·) satisfying F (M/K) ≤ exp(−1/K), where M = min{x ∈ [0, 1] : F (x) = 1}. Let P = {X1, . . . , Xn} be a set of n = |P| i.i.d. samples each drawn from the distribution D. Let Xn+1 ∼ D be an i.i.d. sample. Then, P ( K max X∈P X < Xn+1 ) ≤ exp(−n/K) Proof. Let Xmax = maxX∈P ; then, P(KXmax < Xn+1) = ∫ M 0 P(Xmax < x/K|Xn+1 = x) dP(x) = ∫ M 0 P (X < x/K)n dP(x) since X1, . . . , Xn are i.i.d. ≤ ∫ M 0 F (x/K)n dP(x) where F (·) is the CDF of X ∼ D ≤ F (M/K)n ∫ M 0 dP(x) by monotonicity of F = F (M/K)n ≤ exp(−n/K) CDF Assumption, and this completes the proof. We now proceed to establish that the notion of empirical sensitivity is a good approximation for the relative importance. For this purpose, let the relative importance ĝj(x) of an edge j after the previous layers have already been compressed be ĝj(x) = wj âj(x)∑ k∈W wk âk(x) . Lemma 7 (Empirical Sensitivity Approximation). Let ε ∈ (0, 1/2), δ ∈ (0, 1), ` ∈ {2, . . . , L}, Consider a set S = {x1, . . . , xn} ⊆ P of size |S| ≥ dlog (8 η η∗/δ) log(η η∗)e. Then, conditioned on the event E1/2 occurring, i.e., â(x) ∈ (1± 1/2)a(x), P x∼D ( ∃j ∈ W : C sj < ĝj(x) | E1/2 ) ≤ δ 8 η , where C = 3 log(η η∗) andW ⊆ [η`−1]. Proof. Consider an arbitrary j ∈ W and x′ ∈ S corresponding to gj(x′) with CDF Fj (·) and recall that M = min{x ∈ [0, 1] : Fj (x) = 1} as in Assumption 1. Note that by Assumption 1, we have F (M/K) ≤ exp(−1/K), and so the random variables gj(x′) for x′ ∈ S satisfy the CDF condition required by Lemma 6. Now let E be the event that K sj < gj(x) holds. Applying Lemma 6, we obtain P(E) = P(K sj < gj(x)) = P ( K max x′∈S gj(x ′) < gj(x) ) ≤ exp(−|S|/K). Now let Ê denote the event that the inequality Csj < ĝj(x) = wj âj(x)∑ k∈W wk âk(x) holds and note that the right side of the inequality is defined with respect to ĝj(x) and not gj(x). Observe that since we conditioned on the event E1/2, we have that â(x) ∈ (1± 1/2)a(x). Now assume that event Ê holds and note that by the implication above, we have C sj < ĝj(x) = wj âj(x)∑ k∈W wk âk(x) ≤ (1 + 1/2)wj aj(x) (1− 1/2) ∑ k∈W wk ak(x) ≤ 3 · wj aj(x)∑ k∈W wk ak(x) = 3 gj(x). where the second inequality follows from the fact that 1+1/2/1−1/2 ≤ 3. Moreover, since we know that C ≥ 3K, we conclude that if event Ê occurs, we obtain the inequality 3K sj ≤ 3 gj(x)⇔ K sj ≤ gj(x), which is precisely the definition of event E . Thus, we have shown the conditional implication ( Ê | E1/2 ) ⇒ E , which implies that P(Ê | E1/2) = P(C sj < ĝj(x) | E1/2) ≤ P(E) ≤ exp(−|S|/K). Since our choice of j ∈ W was arbitrary, the bound applies for any j ∈ W . Thus, we have by the union bound P(∃j ∈ W : C sj < ĝj(x) | E1/2) ≤ ∑ j∈W P(C sj < ĝj(x) | E1/2) ≤ |W| exp(−|S|/K) = ( |W| η∗ ) δ 8η ≤ δ 8η . In practice, the set S referenced above is chosen to be a subset of the original data points, i.e., S ⊆ P (see Alg. 1, Line 2). Thus, we henceforth assume that the size of the input points |P| is large enough (or the specified parameter δ ∈ (0, 1) is sufficiently large) so that |P| ≥ |S|. A.1.2 PROOF OF LEMMA 1 We now state the proof of Lemma 1. In this subsection, we establish approximation guarantees under the assumption that the weights are strictly positive. 
The next subsection will then relax this assumption to conclude that a neuron’s value can be approximated well even when the weights are not all positive. Lemma 1 (Positive-Weights Sparsification). Let ε, δ ∈ (0, 1), and x ∼ D. SPARSIFY(W, w, ε, δ,S, a(·)) generates a row vector ŵ such that P (∑ k∈W ŵk âk(x) /∈ (1± ε) ∑ k∈W wk âk(x) | E1/2 ) ≤ 3δ 8η where nnz(ŵ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ , and S = ∑ j∈W sj . Proof. Let ε, δ ∈ (0, 1) be arbitrary. Moreover, let C be the coreset with respect to the weight indices W ⊆ [η`−1] used to construct ŵ. Note that as in SPARSIFY, C is a multiset sampled fromW of size m = ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ , where S = ∑ j∈W sj and C is sampled according to the probability distribution q defined by qj = sj S ∀j ∈ W. Let â(·) be an arbitrary realization of the random variable â`−1(·), let x be a realization of x ∼ D, and let ẑ = ∑ k∈W ŵk âk(x) be the approximate intermediate value corresponding to the sparsified matrix ŵ and let z̃ = ∑ k∈W wk âk(x). Now define E to be the (favorable) event that ẑ ε-approximates z̃, i.e., ẑ ∈ (1±ε)z̃, We will now show that the complement of this event, Ec, occurs with sufficiently small probability. Let Z ⊆ supp(D) be the set of well-behaved points (defined implicitly with respect to neuron i ∈ [η`] and realization â) and defined as follows: Z = {x′ ∈ supp(D) : ĝj(x′) ≤ Csj ∀j ∈ W} , where C = 3 log(η η∗). Let EZ denote the event that x ∈ Z where x is a realization of x ∼ D. Conditioned on EZ , event Ec occurs with probability ≤ δ4η : Let x be a realization of x ∼ D such that x ∈ Z and let C = {c1, . . . , cm} be m samples fromW with respect to distribution q as before. Define m random variables Tc1 , . . . , Tcm such that for all j ∈ C Tj = wj âj(x) mqj = S wj âj(x) msj . (2) For any j ∈ C, we have for the conditional expectation of Tj : E [Tj | â(·),x, EZ , E1/2] = ∑ k∈W wk âk(x) mqk · qk = ∑ k∈W wk âk(x) m = z̃ m , where we use the expectation notation E [·] with the understanding that it denotes the conditional expectation E C | âl−1(·), x [·]. Moreover, we also note that conditioning on the event EZ (i.e., the event that x ∈ Z) does not affect the expectation of Tj . Let T = ∑ j∈C Tj = ẑ denote our approximation and note that by linearity of expectation, E [T | â(·),x, EZ , E1/2] = ∑ j∈C E [Tj | â(·),x, EZ , E1/2] = z̃ Thus, ẑ = T is an unbiased estimator of z̃ for any realization â(·) and x; thus, we will henceforth refer to E [T | â(·), x] as simply z̃ for brevity. For the remainder of the proof we will assume that z̃ > 0, since otherwise, z̃ = 0 if and only if Tj = 0 for all j ∈ C almost surely, which follows by the fact that Tj ≥ 0 for all j ∈ C by definition ofW and the non-negativity of the ReLU activation. Therefore, in the case that z̃ = 0, it follows that P(|ẑ − z̃| > εz̃ | â(·),x) = P(ẑ > 0 | â(·),x) = P(0 > 0) = 0, which trivially yields the statement of the lemma, where in the above expression, P(·) is short-hand for the conditional probability Pŵ | âl−1(·), x(·). We now proceed with the case where z̃ > 0 and leverage the fact that x ∈ Z3 to establish that for all j ∈ W : Csj ≥ ĝj(x) = wj âj(x)∑ k∈W wk âk(x) = wj âj(x) z̃ 3Since we conditioned on the event EZ . ⇔ wj âj(x) sj ≤ C z̃. 
(3) Utilizing the inequality established above, we bound the conditional variance of each Tj , j ∈ C as follows Var(Tj | â(·),x, EZ , E1/2) ≤ E [(Tj)2 | â(·),x, EZ , E1/2] = ∑ k∈W (wk âk(x)) 2 (mqk)2 · qk = S m2 ∑ k∈W (wk âk(x)) 2 sk ≤ S m2 (∑ k∈W wk âk(x) ) C z̃ = S C z̃2 m2 , where Var(·) is short-hand for VarC | âl−1(·), x (·). Since T is a sum of (conditionally) independent random variables, we obtain Var(T | â(·),x, EZ , E1/2) = mVar(Tj | â(·),x, EZ , E1/2) (4) ≤ S C z̃ 2 m . Now, for each j ∈ C let T̃j = Tj − E [Tj | â(·),x, EZ , E1/2] = Tj − z̃, and let T̃ = ∑ j∈C T̃j . Note that by the fact that we conditioned on the realization x of x such that x ∈ Z (event EZ ), we obtain by definition of Tj in (2) and the inequality (3): Tj = S wj âj(x) msj ≤ S C z̃ m . (5) We also have that S ≥ 1 by definition. More specifically, using the fact that the maximum over a set is greater than the average and rearranging sums, we obtain S = ∑ j∈W sj = ∑ j∈W max x′∈S gj(x ′) ≥ 1 |S| ∑ j∈W ∑ x′∈S gj(x ′) = 1 |S| ∑ x′∈S ∑ j∈W gj(x ′) = 1 |S| ∑ x′∈S 1 = 1. Thus, the inequality established in (5) with the fact that S ≥ 1 we obtain an upper bound on the absolute value of the centered random variables: |T̃j | = ∣∣∣∣Tj − z̃m ∣∣∣∣ ≤ S C z̃m = M, (6) which follows from the fact that: if Tj ≥ z̃m : Then, by our bound in (5) and the fact that z̃ m ≥ 0, it follows that ∣∣∣T̃j∣∣∣ = Tj − z̃ m ≤ S C z̃ m − z̃ m ≤ S C z̃ m . if Tj < z̃m : Then, using the fact that Tj ≥ 0 and S ≥ 1, we obtain∣∣∣T̃j∣∣∣ = z̃ m − Tj ≤ z̃ m ≤ S C z̃ m . Applying Bernstein’s inequality to both T̃ and −T̃ we have by symmetry and the union bound, P(Ec | â(·),x, EZ , E1/2) = P ( |T − z̃| ≥ εz̃ | â(·),x, EZ , E1/2 ) ≤ 2 exp ( − ε 2z̃2 2 Var(T | â(·),x) + 2 ε z̃M3 ) ≤ 2 exp ( − ε 2z̃2 2SC z̃2 m + 2S C z̃2 3m ) = 2 exp ( −3 ε 2m 8S C ) ≤ δ 4η , where the second inequality follows by our upper bounds on Var(T | â(·),x) and ∣∣∣T̃j∣∣∣ and the fact that ε ∈ (0, 1), and the last inequality follows by our choice of m = ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ . This establishes that for any realization â(·) of âl−1(·) and a realization x of x satisfying x ∈ Z , the event Ec occurs with probability at most δ4η . Removing the conditioning on EZ : We have by law of total probability P(E | â(·), E1/2) ≥ ∫ x∈Z P(E | â(·),x, EZ , E1/2) P x∼D (x = x | â(·), E1/2) dx ≥ ( 1− δ 4η )∫ x∈Z P x∼D (x = x | â(·), E1/2) dx = ( 1− δ 4η ) P x∼D (EZ | â(·), E1/2) ≥ ( 1− δ 4η )( 1− δ 8η ) ≥ 1− 3δ 8η where the second-to-last inequality follows from the fact that P(Ec | â(·),x, EZ , E1/2) ≤ δ4η as was established above and the last inequality follows by Lemma 7. Putting it all together Finally, we marginalize out the random variable â`−1(·) to establish P(E | E1/2) = ∫ â(·) P(E | â(·), E1/2)P(â(·) | E1/2) dâ(·) ≥ ( 1− 3δ 8η )∫ â(·) P(â(·) | E1/2) dâ(·) = 1− 3δ 8η . Consequently, P(Ec | E1/2) ≤ 1− ( 1− 3δ 8η ) = 3δ 8η , and this concludes the proof. A.2 ANALYTICAL RESULTS FOR SECTION 5.3 (IMPORTANCE SAMPLING BOUNDS) We begin by establishing an auxiliary result that we will need for the subsequent lemmas. A.2.1 EMPIRICAL ∆`i APPROXIMATION Lemma 8 (Empirical ∆`i Approximation). Let δ ∈ (0, 1), λ∗ = log(η η∗)/2, and define ∆̂` = ( 1 |S| max i∈[η`] ∑ x′∈S ∆`i(x ′) ) + κ, where κ = √ 2λ∗ ( 1 + √ 2λ∗ log (8 η η ∗/δ) ) and S ⊆ P is as in Alg. 1. Then, P x∼D ( max i∈[η`] ∆`i(x) ≤ ∆̂` ) ≥ 1− δ 4η . Proof. Define the random variables Yx′ = E [∆`i(x′)]−∆`i(x′) for each x′ ∈ S and consider the sum Y = ∑ x′∈S Yx′ = ∑ x′∈S ( E [∆`i(x)]−∆`i(x′) ) . 
We know that each random variable Yx′ satisfies E [Yx′ ] = 0 and by Assumption 2, is subexponential with parameter λ ≤ λ∗. Thus, Y is a sum of |S| independent, zero-mean λ∗-subexponential random variables, which implies that E [Y] = 0 and that we can readily apply Bernstein’s inequality for subexponential random variables (Vershynin, 2016) to obtain for t ≥ 0 P ( 1 |S| Y ≥ t ) ≤ exp ( −|S| min { t2 4λ2∗ , t 2λ∗ }) . Since S = dlog (8 η η∗/δ) log(η η∗)e ≥ log (8 η η∗/δ) 2λ∗, we have for t = √ 2λ∗, P ( E [∆`i(x)]− 1 |S| ∑ x′∈S ∆`i(x ′) ≥ t ) = P ( 1 |S| Y ≥ t ) ≤ exp ( −|S| t 2 4λ2∗ ) ≤ exp (− log (8 η η∗/δ)) = δ 8 η η∗ . Moreover, for a single Yx, we have by the equivalent definition of a subexponential random variable (Vershynin, 2016) that for u ≥ 0 P(∆`i(x)− E [∆`i(x)] ≥ u) ≤ exp ( −min { − u 2 4λ2∗ , u 2λ∗ }) . Thus, for u = 2λ∗ log (8 η η∗/δ) we obtain P(∆`i(x)− E [∆`i(x)] ≥ u) ≤ exp (− log (8 η η∗/δ)) = δ 8 η η∗ . Therefore, by the union bound, we have with probability at least 1− δ4η η∗ : ∆`i(x) ≤ E [∆`i(x)] + u ≤ ( 1 |S| ∑ x′∈S ∆`i(x ′) + t ) + u = 1 |S| ∑ x′∈S ∆`i(x ′) + (√ 2λ∗ + 2λ∗ log (8 η η ∗/δ) ) = 1 |S| ∑ x′∈S ∆`i(x ′) + κ ≤ ∆̂`, where the last inequality follows by definition of ∆̂`. Thus, by the union bound, we have P x∼D ( max i∈[η`] ∆`i(x) > ∆̂ ` ) = P ( ∃i ∈ [η`] : ∆`i(x) > ∆̂` ) ≤ ∑ i∈[η`] P ( ∆`i(x) > ∆̂ ` ) ≤ η` ( δ 4η η∗ ) ≤ δ 4 η , where the last line follows by definition of η∗ ≥ η`. A.2.2 NOTATION FOR THE SUBSEQUENT ANALYSIS Let ŵ`+i and ŵ `− i denote the sparsified row vectors generated when SPARSIFY is invoked with first two arguments corresponding to (W+, w`i ) and (W−,−w`i ), respectively (Alg. 1, Line 12). We will at times omit including the variables for the neuron i and layer ` in the proofs for clarity of exposition, and for example, refer to ŵ`+i and ŵ `− i as simply ŵ + and ŵ−, respectively. Let x ∼ D and define ẑ+(x) = ∑ k∈W+ ŵ+k âk(x) ≥ 0 and ẑ −(x) = ∑ k∈W− (−ŵ−k ) âk(x) ≥ 0 be the approximate intermediate values corresponding to the sparsified matrices ŵ+ and ŵ−; let z̃+(x) = ∑ k∈W+ wk âk(x) ≥ 0 and z̃−(x) = ∑ k∈W− (−wk) âk(x) ≥ 0 be the corresponding intermediate values with respect to the the original row vector w; and finally, let z+(x) = ∑ k∈W+ wk ak(x) ≥ 0 and z−(x) = ∑ k∈W− (−wk) ak(x) ≥ 0 be the true intermediate values corresponding to the positive and negative valued weights. Note that in this context, we have by definition ẑ`i (x) = 〈ŵ, â(x)〉 = ẑ+(x)− ẑ−(x), z̃`i (x) = 〈w, â(x)〉 = z̃+(x)− z̃−(x), and z`i (x) = 〈w, a(x)〉 = z+(x)− z−(x), where we used the fact that ŵ = ŵ+ − ŵ− ∈ R1×η`−1 . A.2.3 PROOF OF LEMMA 2 Lemma 2 (Conditional Neuron Value Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, i ∈ [η`], and x ∼ D. CORENET generates a row vector ŵ`i = ŵ `+ i − ŵ `− i ∈ R1×η `−1 such that P ( E`i | E`−1 ) = P ( ẑ`i (x) ∈ (1± 2 (`− 1) ε`+1) z`i (x) | E`−1 ) ≥ 1− δ/η, (1) where ε` = ε ′ ∆̂`→ and nnz(ŵ`i ) ≤ ⌈ 8S log(η η∗) log(8 η/δ) ε`2 ⌉ + 1, where S = ∑ j∈W+ sj + ∑ j∈W− sj . Proof. Let ε, δ ∈ (0, 1) be arbitrary and let W+ = {j ∈ [η`−1] : wj > 0} and W− = {j ∈ [η`−1] : wj < 0} as in Alg. 1. Let ε` be defined as before, ε` = ε ′ ∆̂`→ , where ∆̂`→ = ∏L k=` ∆̂ k and ∆̂` = ( 1 |S| maxi∈[η`] ∑ x′∈S ∆ ` i(x ′) ) + κ. Observe that wj > 0 ∀j ∈ W+ and similarly, for all (−wj) > 0 ∀j ∈ W−. That is, each of index setsW+ andW− corresponds to strictly positive entries in the arguments w`i and −w`i , respectively passed into SPARSIFY. 
Observe that since we conditioned on the event E`−1, we have 2 (`− 2) ε` ≤ 2 (`− 2) ε 2 (L− 1) ∏L k=` ∆̂ k ≤ ε∏L k=` ∆̂ k ≤ ε 2L−`+1 Since ∆̂k ≥ 2 ∀k ∈ {`, . . . , L} ≤ ε 2 , where the inequality ∆̂k ≥ 2 follows from the fact that ∆̂k = ( 1 |S| max i∈[η`] ∑ x′∈S ∆`i(x ′) ) + κ ≥ 1 + κ Since ∆`i(x′) ≥ 1 ∀x′ ∈ supp(D) by definition ≥ 2. we obtain that â(x) ∈ (1 ± ε/2)a(x), where, as before, â and a are shorthand notations for â`−1 ∈ Rη`−1×1 and a`−1 ∈ Rη`−1×1, respectively. This implies that E`−1 ⇒ E1/2 and since m = ⌈ 8S log(η η∗) log(8 η/δ) ε2 ⌉ in Alg. 2 we can invoke Lemma 1 with ε = ε` on each of the SPARSIFY invocations to conclude that P ( ẑ+(x) /∈ (1± ε`)z̃+(x) | E`−1 ) ≤ P ( ẑ+(x) /∈ (1± ε`)z̃+(x) | E1/2 ) ≤ 3δ 8η , and P ( ẑ−(x) /∈ (1± ε`)z̃−(x) | E`−1 ) ≤ 3δ 8η . Therefore, by the union bound, we have P ( ẑ+(x) /∈ (1± ε`)z̃+(x) or ẑ−(x) /∈ (1± ε`)z̃−(x) | E`−1 ) ≤ 3δ 8η + 3δ 8η = 3δ 4η . Moreover, by Lemma 8, we have with probability at most δ4η that ∆`i(x) > ∆̂ `. Thus, by the union bound over the failure events, we have that with probability at least 1 − (3δ/4η + δ/4η) = 1− δ/η that both of the following events occur 1. ẑ+(x) ∈ (1± ε`)z̃+(x) and ẑ−(x) ∈ (1± ε`)z̃−(x) (7) 2. ∆`i(x) ≤ ∆̂` (8) Recall that ε′ = ε2 (L−1) , ε` = ε′ ∆̂`→ , and that event E`i denotes the (desirable) event that ẑ`i (x) (1± 2 (`− 1) ε`+1) z`i (x) holds, and similarly, E` = ∩i∈[η`] E`i denotes the vector-wise analogue where ẑ`(x) (1± 2 (`− 1) ε`+1) z`(x). Let k = 2 (`− 1) and note that by conditioning on the event E`−1, i.e., we have by definition â`−1(x) ∈ (1± 2 (`− 2)ε`)a`−1(x) = (1± k ε`)a`−1(x), which follows by definition of the ReLU function. Recall that our overarching goal is to establish that ẑ`i (x) ∈ (1± 2 (`− 1)ε`+1) z`i (x), which would immediately imply by definition of the ReLU function that â`i(x) ∈ (1± 2 (`− 1)ε`+1) a`i(x). Having clarified the conditioning and our objective, we will once again drop the index i from the expressions moving forward. Proceeding from above, we have with probability at least 1− δ/η ẑ(x) = ẑ+(x)− ẑ−(x) ≤ (1 + ε`) z̃+(x)− (1− ε`) z̃−(x) By Event (7) above ≤ (1 + ε`)(1 + k ε`) z+(x)− (1− ε`)(1− k ε`) z−(x) Conditioning on event E`−1 = ( 1 + ε`(k + 1) + kε 2 ` ) z+(x) + ( −1 + (k + 1)ε` − kε2` ) z−(x) = ( 1 + k ε2` ) z(x) + (k + 1) ε` ( z+(x) + z−(x) ) = ( 1 + k ε2` ) z(x) + (k + 1) ε′∏L k=` ∆̂ k ( z+(x) + z−(x) ) ≤ ( 1 + k ε2` ) z(x) + (k + 1) ε′ ∆`i(x) ∏L k=`+1 ∆̂ k ( z+(x) + z−(x) ) By Event (8) above = ( 1 + k ε2` ) z(x) + (k + 1) ε′∏L k=`+1 ∆̂ k |z(x)| By ∆`i(x) = z+(x) + z−(x) |z(x)| = ( 1 + k ε2` ) z(x) + (k + 1) ε`+1 |z(x)|. To upper bound the last expression above, we begin by observing that kε2` ≤ ε`, which follows from the fact that ε` ≤ 12 (L−1) ≤ 1 k by definition. Moreover, we also note that ε` ≤ ε`+1 by definition of ∆̂` ≥ 1. Now, we consider two cases. Case of z(x) ≥ 0: In this case, we have ẑ(x) ≤ ( 1 + k ε2` ) z(x) + (k + 1) ε`+1 |z(x)| ≤ (1 + ε`)z(x) + (k + 1)ε`+1z(x) ≤ (1 + ε`+1)z(x) + (k + 1)ε`+1z(x) = (1 + (k + 2) ε`+1) z(x) = (1 + 2 (`− 1)ε`+1) z(x), where the last line follows by definition of k = 2 (`− 2), which implies that k + 2 = 2(`− 1). Thus, this establishes the desired upper bound in the case that z(x) ≥ 0. Case of z(x) < 0: Since z(x) is negative, we have that ( 1 + k ε2` ) z(x) ≤ z(x) and |z(x)| = −z(x) and thus ẑ(x) ≤ ( 1 + k ε2` ) z(x) + (k + 1) ε`+1 |z(x)| ≤ z(x)− (k + 1)ε`+1z(x) ≤ (1− (k + 1)ε`+1) z(x) ≤ (1− (k + 2)ε`+1) z(x) = (1− 2 (`− 1)ε`+1) z(x), and this establishes the upper bound for the case of z(x) being negative. 
Putting the results of the case by case analysis together, we have the upper bound of ẑ(x) ≤ z(x) + 2 (` − 1)ε`+1|z(x)|. The proof for establishing the lower bound for z(x) is analogous to that given above, and yields ẑ(x) ≥ z(x)−2 (`−1)ε`+1|z(x)|. Putting both the upper and lower bound together, we have that with probability at least 1− δη : ẑ(x) ∈ (1± 2 (`− 1)ε`+1) z(x), and this completes the proof. A.2.4 REMARKS ON NEGATIVE ACTIVATIONS We note that up to now we assumed that the input a(x), i.e., the activations from the previous layer, are strictly nonnegative. For layers ` ∈ {3, . . . , L}, this is indeed true due to the nonnegativity of the ReLU activation function. For layer 2, the input is a(x) = x, which can be decomposed into a(x) = apos(x) − aneg(x), where apos(x) ≥ 0 ∈ Rη `−1 and aneg(x) ≥ 0 ∈ Rη `−1 . Furthermore, we can define the sensitivity over the set of points {apos(x), aneg(x) | x ∈ S} (instead of {a(x) | x ∈ S}), and thus maintain the required nonnegativity of the sensitivities. Then, in the terminology of Lemma 2, we let z+pos(x) = ∑ k∈W+ wk apos,k(x) ≥ 0 and z−neg(x) = ∑ k∈W− (−wk) aneg,k(x) ≥ 0 be the corresponding positive parts, and z+neg(x) = ∑ k∈W+ wk aneg,k(x) ≥ 0 and z−pos(x) = ∑ k∈W− (−wk) apos,k(x) ≥ 0 be the corresponding negative parts of the preactivation of the considered layer, such that z+(x) = z+pos(x) + z − neg(x) and z −(x) = z+neg(x) + z − pos(x). We also let ∆`i(x) = z+(x) + z−(x) |z(x)| be as before, with z+(x) and z−(x) defined as above. Equipped with above definitions, we can rederive Lemma 2 analogously in the more general setting, i.e., with potentially negative activations. We also note that we require a slightly larger sample size now since we have to take a union bound over the failure probabilities of all four approximations (i.e. ẑ+pos(x), ẑ − neg(x), ẑ + neg(x), and ẑ − pos(x)) to obtain the desired overall failure probability of δ/η. A.2.5 PROOF OF THEOREM 4 The following corollary immediately follows from Lemma 2 and establishes a layer-wise approximation guarantee. Corollary 9 (Conditional Layer-wise Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, and x ∼ D. CORENET generates a sparse weight matrix Ŵ ` = ( ŵ`1, . . . , ŵ ` η` )> ∈ Rη`×η`−1 such that P(E` | E`−1) = P ( ẑ`(x) ∈ (1± 2 (`− 1) ε`+1) z`(x) | E`−1 ) ≥ 1− δ η ` η , (9) where ε` = ε ′ ∆̂`→ , ẑ`(x) = Ŵ `â`(x), and z`(x) = W `a`(x). Proof. Since (1) established by Lemma 2 holds for any neuron i ∈ [η`] in layer ` and since (E`)c = ∪i∈[η`](E`i )c, it follows by the union bound over the failure events (E`i )c for all i ∈ [η`] that with probability at least 1− η `δ η ẑ`(x) = Ŵ `â`−1(x) ∈ (1± 2 (`− 1) ε`+1)W `a`−1(x) = (1± 2 (`− 1) ε`+1) z`(x). The following lemma removes the conditioning on E`−1 and explicitly considers the (compounding) error incurred by generating coresets Ŵ 2, . . . , Ŵ ` for multiple layers. Lemma 3 (Layer-wise Approximation). Let ε, δ ∈ (0, 1), ` ∈ {2, . . . , L}, and x ∼ D. CORENET generates a sparse weight matrix Ŵ ` ∈ Rη`×η`−1 such that, for ẑ`(x) = Ŵ `â`(x), P (Ŵ 2,...,Ŵ `), x (E`) = P (Ŵ 2,...,Ŵ `), x ( ẑ`(x) ∈ (1± 2 (`− 1) ε`+1) z`(x) ) ≥ 1− δ ∑` `′=2 η `′ η . Proof. Invoking Corollary 9, we know that for any layer `′ ∈ {2, . . . , L}, P Ŵ `′ , x, â`′−1(·) (E` ′ | E` ′−1) ≥ 1− δ η `′ η . 
(10) We also have by the law of total probability that P(E` ′ ) = P(E` ′ | E` ′−1)P(E` ′−1) + P(E` ′ | (E` ′−1)c)P((E` ′−1)c) ≥ P(E` ′ | E` ′−1)P(E` ′−1) (11) Repeated applications of (10) and (11) in conjunction with the observation that P(E1) = 14 yield P(E`) ≥ P(E` ′ | E` ′−1)P(E` ′−1) ... Repeated applications of (11) ≥ ∏̀ `′=2 P(E` ′ | E` ′−1) ≥ ∏̀ `′=2 ( 1− δ η `′ η ) By (10) ≥ 1− δ η ∑̀ `′=2 η` ′ By the Weierstrass Product Inequality, where the last inequality follows by the Weierstrass Product Inequality5 and this establishes the lemma. Appropriately invoking Lemma 3, we can now establish the approximation guarantee for the entire neural network. This is stated in Theorem 4 and the proof can be found below. Theorem 4 (Network Compression). For ε, δ ∈ (0, 1), Algorithm 1 generates a set of parameters θ̂ = (Ŵ 2, . . . , ŴL) of size nnz(θ̂) ≤ L∑ `=2 η`∑ i=1 (⌈ 32 (L− 1)2 (∆̂`→)2 S`i log(η η∗) log(8 η/δ) ε2 ⌉ + 1 ) in O ( η η∗ log ( η η∗/δ )) time such that Pθ̂, x∼D ( fθ̂(x) ∈ (1± ε)fθ(x) ) ≥ 1− δ. 4Since we do not compress the input layer. 5The Weierstrass Product Inequality (Doerr, 2018) states that for p1, . . . , pn ∈ [0, 1], n∏ i=1 (1− pi) ≥ 1− n∑ i=1 pi. Proof. Invoking Lemma 3 with ` = L, we have that for θ̂ = (Ŵ 2, . . . , ŴL), P̂ θ, x ( fθ̂(x) ∈ 2 (L− 1) εL+1fθ(x) ) = P̂ θ, x (ẑL(x) ∈ 2 (L− 1) εL+1zL(x)) = P(EL) ≥ 1− δ ∑L `′=2 η `′ η = 1− δ, where the last equality follows by definition of η = ∑L `=2 η `. Note that by definition, εL+1 = ε 2 (L− 1) ∏L k=L+1 ∆̂ k = ε 2 (L− 1) , where the last equality follows by the fact that the empty product ∏L k=L+1
1. What is the focus of the paper regarding neural network compression?
2. What are the strengths of the proposed approach, particularly in terms of reducing the number of effective parameters?
3. What are the weaknesses of the paper, especially in its experimental analysis?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or ideas for future research related to this work?
Review
In this work the authors improve upon the work of Arora et al. mainly with respect to one aspect, i.e., they provide an ε-approximation of the fully-connected neural network's output neuron-wise. The idea of compression is very natural and has been explored by various previous works (key refs are cited). Intuitively, the number of effective parameters is significantly less than the number of parameters in the neural network. The authors introduce the notion of a coreset that is suitable for compressing the weight parameters in Definition 1. Their main result is stated as Theorem 4. Finally, the authors experiment on standard benchmarks and perform a careful experimental analysis (i.e., they ensure fairness of comparison between methods such as SVD and the rest). It would be interesting to see the histogram/distribution of the weights per layer and at an aggregate level for the datasets used. Also, in light of the recent results of Arora et al. showing that the signal out of a layer is correlated with the top singular values, how would coresets developed in the numerical linear algebra community (e.g., "Near-optimal Coresets for Least-Squares Regression" by Boutsidis et al.) perform, even as an experimental heuristic, compared to the proposed method?
ICLR
Title Joint autoencoders: a flexible meta-learning framework Abstract The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples. Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest. Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a datadriven fashion. We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task. Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations. The method deals with meta-learning (such as domain adaptation, transfer and multi-task learning) in a unified fashion, and can easily deal with data arising from different types of sources. Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network. 1 INTRODUCTION A major goal of inductive learning is the selection of a rule that generalizes well based on a finite set of examples. It is well-known ((Hume, 1748)) that inductive learning is impossible unless some regularity assumptions are made about the world. Such assumptions, by their nature, go beyond the data, and are based on prior knowledge achieved through previous interactions with ’similar’ problems. Following its early origins ((Baxter, 2000; Thrun and Pratt, 1998)), the incorporation of prior knowledge into learning has become a major effort recently, and is gaining increasing success by relying on the rich representational flexibility available through current deep learning schemes (Bengio et al., 2013). Various aspects of prior knowledge are captured in different settings of meta-learning, such as learning-to-learn, domain adaptation, transfer learning, multi-task learning, etc. (e.g., (Goodfellow et al., 2016)). In this work, we consider the setup of multi-task learning, first formalized in (Baxter, 2000), where a set of tasks is available for learning, and the objective is to extract knowledge from a subset of tasks in order to facilitate learning of other, related, tasks. Within the framework of representation learning, the core idea is that of shared representations, allowing a given task to benefit from what has been learned from other tasks, since the shared aspects of the representation are based on more information (Zhang et al., 2008). We consider both unsupervised and semi-supervised learning setups. In the former setting we have several related datasets, arising from possibly different domains, and aim to compress each dataset based on features that are shared between the datasets, and on features that are unique to each problem. Neither the shared nor the individual features are given apriori, but are learned using a deep neural network architecture within an autoencoding scheme. 
While such a joint representation could, in principle, serve as a basis for supervised learning, it has become increasingly evident that representations should contain some information about the output (label) identity in order to perform well, and that using pre-training based on unlabeled data is not always advantageous (e.g., chap. 15 in (Goodfellow et al., 2016)). However, since unlabeled data is far more abundant than labeled data, much useful information can be gained from it. We therefore propose a joint encoding-classification scheme where both labeled and unlabeled data are used for the multiple tasks, so that internal representations found reflect both types of data, but are learned simultaneously. The main contributions of this work are: (i) A generic and flexible modular setup for combining unsupervised, supervised and transfer learning. (ii) Efficient end-to-end transfer learning using mostly unsupervised data (i.e., very few labeled examples are required for successful transfer learning). (iii) Explicit extraction of task-specific and shared representations. 2 RELATED WORK Previous related work can be broadly separated into two classes of models: (i) Generative models attempting to learn the input representations. (ii) Non-generative methods that construct separate or shared representations in a bottom-up fashion driven by the inputs. We first discuss several works within the non-generative setting. The Deep Domain Confusion (DDC) algorithm in (Tzeng et al., 2014) studies the problems of unsupervised domain adaptation based on sets of unlabeled samples from the source and target domains, and supervised domain adaptation where a (usually small) subset of the target domain is labeled . By incorporating an adaptation layer and a domain confusion loss they learn a representation that optimizes both classification accuracy and domain invariance, where the latter is achieved by minimizing an appropriate discrepancy measure. By maintaining a small distance between the source and target representations, the classifier makes good use of the relevant prior knowledge. The algorithm suggested in (Ganin and Lempitsky, 2015) augments standard deep learning with a domain classifier that is connected to the feature extractor, and acts to modify the gradient during backpropagation. This adaptation promotes the similarity between the feature distributions in a domain adaptation task. The Deep Reconstruction Classification Network (DRCN) in (Ghifary et al., 2016) tackles the unsupervised domain adaptation task by jointly learning a shared encoding representation of the source and target domains based on minimizing a loss function that balances between the classification loss of the (labeled) source data and the reconstruction cost of the target data. The shared encoding parameters allow the target representation to benefit from the ample source supervised data. In addition to these mostly algorithmic approaches, a number of theoretical papers have attempted to provide a deeper understanding of the benefits available within this setting (Ben-David et al., 2009; Maurer et al., 2016). Next, we mention some recent work within the generative approach, briefly. Recent work has suggested several extensions of the increasingly popular Generative Adversarial Networks (GAN) framework (Goodfellow et al., 2014). The Coupled Generative Adversarial Network (CoGAN) framework in (Liu and Tuzel, 2016) aims to generate pairs of corresponding representations from inputs arising from different domains. 
They propose learning joint distributions over two domains based only on samples from the marginals. This yields good results for small datasets, but is unfortunately challenging to achieve for large adaptation tasks, and is computationally cumbersome. The Adversarial Discriminative Domain Adaptation (ADDA) approach (Tzeng et al., 2017) subsumes some previous results within the GAN framework of domain adaptation. The approach learns a discriminative representation using the data in the labeled source domain, and then learns to adapt the model for use in the (unlabeled) target domain through a domain adversarial loss function. The idea is implemented through a minimax formulation similar to the original GAN setup. The extraction of shared and task-specific representations is the subject of a number of works, such as (Evgeniou and Pontil, 2004) and (Parameswaran and Weinberger, 2010). However, works in this direction typically require inputs of the same dimension and for the sizes of their shared and task-specific features to be the same. A great deal of work has been devoted to multi-modal learning where the inputs arise from different modalities. Exploiting data from multiple sources (or views) to extract meaningful features, is often done by seeking representations that are sensitive only to the common variability in the views and are indifferent to view-specific variations. Many methods in this category attempt to maximize the correlation between the learned representations, as in the linear canonical correlation analysis (CCA) technique and its various nonlinear extensions (Andrew et al., 2013; Michaeli et al., 2016). Other methods use losses based on both correlation and reconstruction error (in an auto-encoding like scheme) (Wang et al., 2015), or employ diffusion processes to reveal the common underlying manifold (Lederman and Talmon, 2015). However, all multi-view representation learning algorithms rely on paired examples from the two views. This setting is thus very different from transfer learning, multi-task learning, or domain adaptation, where one has access only to unpaired samples from each of the domains. While GANs provide a powerful approach to multi-task learning and domain adaptation, they are often hard to train and fine tune ((Goodfellow, 2016)). Our approach offers a complementary nongenerative perspective, and operates in an end-to-end fashion allowing the parallel training of multiple tasks, incorporating both unsupervised, supervised and transfer settings within a single architecture. This simplicity allows the utilization of standard optimization techniques for regular deep feedforward networks, so that any advances in that domain translate directly into improvements in our results. The approach does not require paired inputs and can operate with inputs arising from entirely different domains, such as speech and audio (although this has not been demonstrated empirically here). Our work is closest to (Bousmalis et al., 2016)which shares with us the separation into common and private branches. They base their optimization on several loss functions beyond the reconstruction and classification losses, enforcing constraints on intermediate representations. Specifically, they penalize differences between the common and private branches of the same task, and encourage similarity between the different representations of the source and target in the common branch. 
This multiplicity of loss functions adds several free parameters to the problem that require further fine-tuning. Our framework uses only losses penalizing reconstruction and classification errors, thereby directly focusing on the task without adding internal constraints. Moreover, since DSN does not use a classification error for the target it cannot use labeled targets, and thus can only perform unsupervised transfer learning. Also, due to the internal loss functions, it is not clear how to extend DSN to multi-task learning, which is immediate in our formalism. Practically, the proposed DSN architecture is costly; it is larger by more than one order of magnitude than either the models we have studied or ADDA. Thus it is computationally challenging and struggles to deal with small datasets. 3 JOINT AUTOENCODERS In this section, we introduce joint autoencoders (JAE), a general method for multi-task learning by unsupervised extraction of features shared by the tasks as well as features specific to each task. We begin by presenting a simple case, point out the various possible generalizations, and finally describe two transfer and multi-task learning procedures utilizing joint autoencoders. 3.1 JOINT AUTOENCODERS FOR RECONSTRUCTION Consider a multi-task learning scenario with $T$ tasks $t_1, \ldots, t_T$ defined by domains $\{\mathcal{X}^i\}_{i=1}^{T}$. Each task $t_i$ is equipped with a set of unlabeled samples $\{x_n^i \in \mathcal{X}^i\}_{n=1}^{N^{i,u}}$, where $N^{i,u}$ denotes the size of the unlabeled data set, and with a reconstruction loss function $\ell_r^i(x_n^i, \tilde{x}_n^i)$, where $\tilde{x}_n^i$ is the reconstruction of the sample $x_n^i$. Throughout the paper, we will interpret $\ell_r^i$ as the $L_2$ distance between $x_n^i$ and $\tilde{x}_n^i$, but in principle $\ell_r^i$ can represent any unsupervised learning goal. The tasks are assumed to be related, and we are interested in exploiting this similarity to improve the reconstruction. To do this, we make the following two observations: (i) Certain aspects of the unsupervised tasks we are facing may be similar, but other aspects may be quite different (e.g., when two domains contain color and grayscale images, respectively). (ii) The similarity between the tasks can be rather “deep”. For example, cartoon images and natural images may benefit from different low-level features, but may certainly share high-level structures. To accommodate these two observations, we associate with each task $t_i$ a pair of functions: $f_p^i(x; \theta_p^i)$, the “private branch”, and $f_s^i(x; \theta_s^i, \tilde{\theta}_s)$, the “shared branch”. The functions $f_p^i$ are responsible for the task-specific representations of $t_i$ and are parametrized by $\theta_p^i$. The functions $f_s^i$ are responsible for the shared representations, and are parametrized, in addition to $\theta_s^i$, by $\tilde{\theta}_s$, which is shared by all tasks. The key idea is that the weight sharing forces the common branches to learn to represent the common features of the two sources. Consequently, the private branches are implicitly forced to capture only the features that are not common to the other task. We aim at minimizing the cumulative weighted loss $L_r = \sum_{i=1}^{T} w_r^i \sum_{n=1}^{N^{i,u}} \ell_r^i\big(x_n^i, f_p^i(x_n^i; \theta_p^i), f_s^i(x_n^i; \theta_s^i, \tilde{\theta}_s)\big)$. (1) In practice, we implement all functions as autoencoders and the shared parameters $\tilde{\theta}_s$ as the bottleneck of the shared branch of each task, with identical weights across the tasks. Our framework, however, supports more flexible sharing as well, such as sharing more than a single layer, or even partially shared layers.
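The two-branch construction above maps directly onto weight tying in Keras, the framework the experiments below are implemented in: calling one layer instance in several branches is exactly the shared parameter set $\tilde{\theta}_s$. The following is a minimal sketch for two tasks, assuming flattened 784-dimensional inputs, arbitrary layer sizes, and averaging of the two branch outputs into a single reconstruction; none of these details are specified in the text (the exact architectures appear only in the appendix).

```python
from tensorflow.keras import layers, Model, Input

d = 784  # flattened MNIST digits (assumption)
shared_code = layers.Dense(32, activation="relu", name="shared_code")  # tilde-theta_s: one instance reused by all tasks

def branch(x, code_layer, name):
    """One autoencoder branch: encoder -> bottleneck -> decoder -> reconstruction."""
    h = layers.Dense(256, activation="relu", name=name + "_enc")(x)
    z = code_layer(h)
    h = layers.Dense(256, activation="relu", name=name + "_dec")(z)
    return layers.Dense(d, activation="sigmoid", name=name + "_out")(h)

inputs, recons = [], []
for i in (1, 2):  # e.g. MNIST digits {0-4} and {5-9}
    x = Input(shape=(d,), name=f"x{i}")
    r_private = branch(x, layers.Dense(32, activation="relu", name=f"private_code_{i}"), f"private_{i}")  # f_p^i
    r_shared = branch(x, shared_code, f"shared_{i}")                                                      # f_s^i
    inputs.append(x)
    recons.append(layers.Average(name=f"recon_{i}")([r_private, r_shared]))  # how the two branches are merged is an assumption

jae = Model(inputs, recons)
jae.compile(optimizer="adam", loss="mse", loss_weights=[1.0, 1.0])  # the weights w_r^i of Eq. (1)
# jae.fit([X1, X2], [X1, X2], epochs=20, batch_size=128)  # assumes the two arrays have equal length
```

The only mechanism that matters in this sketch is that shared_code is a single layer object reused by every task, so its weights receive gradients from all reconstruction losses at once.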
The resulting network can be trained with standard backpropagation on all reconstruction losses simultaneously. Figure 1(a) illustrates a typical autoencoder for the MNIST dataset, and Figure 1(b) illustrates the architecture obtained from implementing all branches in the formal description above with such autoencoders (AE). We call this architecture a joint autoencoder (JAE). As mentioned before, in this simple example, both inputs are MNIST digits, all branches have the same architecture, and the bottlenecks are single layers of the same dimension. However, this need not be the case. The inputs can be entirely different (e.g., image and text), all branches may have different architectures, the bottleneck sizes can vary, and more than a single layer can be shared. Furthermore, the shared layers need not be the bottlenecks, in general. Finally, the generalization to more than two tasks is straightforward - we simply add a pair of autoencoders for each task, and share some of the layers of the common-feature autoencoders. Weight sharing can take place between subsets of tasks, and can occur at different levels for the different tasks. 3.2 JOINT AUTOENCODERS FOR MULTI-TASK, SEMI-SUPERVISED AND TRANSFER LEARNING Consider now a situation in which, in addition to the unlabeled samples from all domains $\mathcal{X}^i$, we also have datasets of labeled pairs $\{(x_k^i, y_k^i)\}_{k=1}^{N^{i,l}}$, where $N^{i,l}$ is the size of the labeled set for task $t_i$ and is assumed to be much smaller than $N^{i,u}$. The supervised component of each task $t_i$ is reflected in the supervised loss $\ell_c^i(y_n^i, \tilde{y}_n^i)$, typically multi-class classification. We extend our loss definition in Equation 1 to be $L = L_r + L_c = L_r + \sum_{i=1}^{T} w_c^i \sum_{n=1}^{N^{i,l}} \ell_c^i\big(y_n^i, f_p^i(x_n^i; \theta_p^i), f_s^i(x_n^i; \theta_s^i, \tilde{\theta}_s)\big)$, (2) where we now interpret the functions $f_s^i, f_p^i$ to also output a classification. Figure 1(c) illustrates the schematic structure of a JAE extended to include supervised losses. Note that this framework supports various learning scenarios. Indeed, if a subset of the tasks has $N^{i,l} = 0$, the problem becomes one of unsupervised domain adaptation. The case where the $N^{i,l}$ are all or mostly small describes semi-supervised learning. If some of the labeled sets are large while the others are either small or empty, we find ourselves facing a transfer learning challenge. Finally, when all labeled sets are of comparable sizes, this is multi-task learning, either supervised (when the $N^{i,l}$ are all positive) or unsupervised (when $N^{i,l} = 0$). We describe two strategies to improve supervised learning by exploiting shared features. Common-branch transfer In this approach, we first train joint autoencoders on both source and target tasks simultaneously, using all available unlabeled data. Then, for the source tasks (the ones with more labeled examples), we fine-tune the branches up to the shared layer using the sets of labeled samples, and freeze the learned shared layers. Finally, for the target tasks, we use the available labeled data to train only their private branches while fixing the shared layers fine-tuned on the source data. End-to-end learning The second, end-to-end approach combines supervised and unsupervised training. Here we extend the JAE architecture by adding new layers, with supervised loss functions for each task; see Figure 1(c). We train the new network using all losses from all tasks simultaneously - reconstruction losses using unlabeled data, and supervised losses using labeled data.
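A sketch of the supervised extension of Eq. (2) follows, again with assumed details: where the classification head taps the network (here, the concatenation of the private and shared codes), the layer sizes, and the loss weights $w_r^i$, $w_c^i$ are all choices of this sketch rather than values taken from the paper. The comments at the end indicate how the two training strategies above map onto this model.

```python
from tensorflow.keras import layers, Model, Input

def supervised_jae(d=784, code=32, n_classes=10):
    shared_code = layers.Dense(code, activation="relu", name="shared_code")  # weights tied across tasks
    inputs, outputs = [], []
    for i in (1, 2):
        x = Input(shape=(d,), name=f"x{i}")
        branch_recons, codes = [], []
        for code_layer in (layers.Dense(code, activation="relu"),  # private bottleneck, new per task
                           shared_code):                           # shared bottleneck, same instance for all tasks
            h = layers.Dense(256, activation="relu")(x)            # branch encoder
            z = code_layer(h)                                      # branch bottleneck
            r = layers.Dense(d, activation="sigmoid")(layers.Dense(256, activation="relu")(z))
            branch_recons.append(r)
            codes.append(z)
        recon = layers.Average(name=f"recon_{i}")(branch_recons)   # combined reconstruction (assumption)
        cls = layers.Dense(n_classes, activation="softmax", name=f"cls_{i}")(
            layers.Concatenate()(codes))                           # classification head placement (assumption)
        inputs.append(x)
        outputs += [recon, cls]
    return Model(inputs, outputs)

jae = supervised_jae()
jae.compile(optimizer="adam",
            loss={"recon_1": "mse", "recon_2": "mse",
                  "cls_1": "sparse_categorical_crossentropy",
                  "cls_2": "sparse_categorical_crossentropy"},
            # the w_r^i and w_c^i of Eqs. (1)-(2); down-weighting the scarcely labeled
            # task's classifier is one natural choice, not a value from the paper
            loss_weights={"recon_1": 1.0, "recon_2": 1.0, "cls_1": 1.0, "cls_2": 0.1})
# End-to-end learning trains everything at once:
#   jae.fit({"x1": X1, "x2": X2}, {"recon_1": X1, "cls_1": y1, "recon_2": X2, "cls_2": y2})
# Common-branch transfer instead freezes the shared layer after the unsupervised phase
# (jae.get_layer("shared_code").trainable = False, then recompile) and fine-tunes only
# the target task's private branch and classifier on its few labeled samples.
```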
When the size of the labeled sets is highly non-uniform, the network is naturally suitable for transfer learning. When the labeled sample sizes are roughly of the same order of magnitude, the setup is suitable for semi-supervised learning. 3.3 ON THE DEPTH OF SHARING It is common knowledge that similar low-level features are often helpful for similar tasks. For example, in many vision applications, CNNs exhibit the same Gabor-type filters in their first layer, regardless of the objects they are trained to classify. This observation makes low-level features immediate candidates for sharing in multi-task learning settings. However, unsurprisingly, sharing low-level features is not as beneficial when working with domains of a different nature (e.g., handwritten digits vs. street signs). Our approach allows sharing weights in deeper layers of a neural net, while leaving the shallow layers unlinked. The key idea is that by forcing all shared-branch nets to share deep weights, their preceding shallow layers must learn to transform the data from the different domains into a common form. We support this intuition through several experiments. As our preliminary results in Section 4.2.1 show, for similar domains, sharing deep layers provides the same performance boost as sharing shallow layers. Thus, we pay no price for relying only on “deep similarities”. But for domains of a different nature, sharing deep layers has a clear advantage. 4 EXPERIMENTS All experiments were implemented in Keras over TensorFlow. The code will be made available soon, and the network architectures used are given in detail in the appendix. 4.1 UNSUPERVISED LEARNING We present experimental results demonstrating the improvement in unsupervised learning of multiple tasks on the MNIST and CIFAR-10 datasets. For the MNIST experiment, we have separated the training images into two subsets: X1, containing the digits {0-4}, and X2, containing the digits {5-9}. We compared the $L_2$ reconstruction error achieved by the JAE to a baseline of a pair of AEs trained on each dataset with architecture identical to a single branch of the JAE. The joint autoencoder (MSE = 5.4) outperformed the baseline (MSE = 5.6) by 4%. The autoencoders had the same cumulative bottleneck size as the JAE, to ensure the same hidden representation size. To ensure we did not benefit solely from increased capacity, we also compared the AEs to a JAE with the same total number of parameters as the baseline, obtained by reducing the size of each layer by a factor of $\sqrt{2}$. This model achieved an MSE of 5.52, a 1.4% improvement over the baseline. To further understand the features learned by the shared and private bottlenecks, we visualize the activations of the bottlenecks on 1000 samples from each dataset, using 2D t-SNE embeddings (van der Maaten and Hinton, 2008). Figure 2(a) demonstrates that the common branches containing the shared layer (green and magenta) are much more mixed between themselves than the private branches (red and black), indicating that they indeed extract shared features. Figure 2(b) displays examples of digit reconstructions. The columns show (from left to right) the original digit, the image reconstructed by the full JAE, the output of the private branches, and the output of the shared branches. We see that the common branches capture the general shape of the digit, while the private branches capture the fine details which are specific to each subset. We verify quantitatively the claim about the differences in separation between the private and shared branches.
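The separation score reported next can be computed with a Fisher-style criterion on the t-SNE embeddings of each branch's bottleneck activations; the exact criterion used in the paper is not specified, so the particular ratio below (between-group scatter over within-group scatter) is an assumption.

```python
import numpy as np
from sklearn.manifold import TSNE

def fisher_criterion(a, b):
    """a, b: (n_samples, n_dims) embeddings of the two datasets."""
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    between = np.sum((mu_a - mu_b) ** 2)                 # between-group scatter
    within = a.var(axis=0).sum() + b.var(axis=0).sum()   # within-group scatter
    return between / within

def branch_separation(codes_x1, codes_x2, seed=0):
    """codes_x*: bottleneck activations of one branch on samples from each dataset."""
    emb = TSNE(n_components=2, random_state=seed).fit_transform(
        np.vstack([codes_x1, codes_x2]))
    n = len(codes_x1)
    return fisher_criterion(emb[:n], emb[n:])

# A larger score means the branch maps the two digit subsets to more
# clearly separated regions of the embedding space.
```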
The Fisher criterion for the separation between the t-SNE embeddings of the private branches is $7.22 \cdot 10^{-4}$, whereas its counterpart for the shared branches is $2.77 \cdot 10^{-4}$, 2.6 times smaller. Moreover, the shared branch embedding variance for both datasets is approximately identical, whereas the private branches map the dataset they were trained on to locations with variance greater by a factor of 1.35 than the dataset they had no access to. This illustrates the extent to which the private branches learn to separate the two datasets, while the shared branches represent them more uniformly. For CIFAR-10 we trained the baseline autoencoder on single-class subsets of the database (e.g., all airplane images) and trained the JAE on pairs of such subsets. Table 1 shows a few typical results, demonstrating a consistent advantage for JAEs. Besides the lower reconstruction error, we can see that visually similar image classes enjoy a greater boost in performance. For instance, the pair deer-horses enjoyed a performance boost of 37%, greater than the typical boost of 33-35%. As with MNIST, we also compared the pair of autoencoders to a JAE with the same total number of parameters (obtained by a $\sqrt{2}$ size reduction of each layer), achieving a 22-24% boost. Thus, the observed improvement is clearly not a result of mere increased network capacity. (Table 1: Performance of JAEs and JAEs reduced by a $\sqrt{2}$ factor vs. standard AEs, in terms of reconstruction MSE, on pairs of objects in CIFAR-10: airplanes (A), deer (D), horses (H), ships (S). For each pair of objects, we give the standard AE error, the JAE and JAE-reduced errors, and the improvement percentage.) We remark that we experimented with an extension of unsupervised JAEs to variational autoencoders ((Kingma and Welling, 2014)). Unlike standard VAEs, we trained three hidden code layers, requiring each to have a small Kullback-Leibler divergence from a given normal distribution. One of these layers was used to reconstruct both datasets (analogous to the shared bottleneck in a JAE), while the other two were dedicated each to one of the datasets (analogous to the private branches). The reconstruction results on the halves of the MNIST dataset were promising, yielding an improvement of 12% over a pair of VAEs of the same cumulative size. Unfortunately, we were not able to achieve similar results on the CIFAR-10 dataset, nor to perform efficient multi-task or transfer learning with joint VAEs. This remains an intriguing project for the future. 4.2 TRANSFER LEARNING Next, we compare the performance on MNIST of the two JAE-based transfer learning methods detailed in Section 3.2. For both methods, X1 contains the digits {0-4} and X2 contains the digits {5-9}. The source and target datasets comprise 2000 and 500 samples, respectively. All results are measured on the full MNIST test set. The common-branch transfer method yields 92.3% and 96.1% classification precision for the X1 → X2 and X2 → X1 transfer tasks, respectively. The end-to-end approach results in 96.6% and 98.3% scores on the same tasks, which demonstrates the superiority of the end-to-end approach. 4.2.1 SHARED LAYER DEPTH We investigate the influence of shared layer depth on the transfer performance. We see in Table 2 that for highly similar pairs of tasks, such as the two halves of the MNIST dataset, the depth has little significance, while for dissimilar pairs such as MNIST-USPS, “deeper is better” - the performance improves with the shared layer depth.
Moreover, when the input dimensions differ, early sharing is impossible - the data must first be transformed to have the same dimensions. (Table 2: Influence of the shared layer depth on the transfer learning performance. For the MNIST-USPS pair, only partial data are available for dimensional reasons.) 4.2.2 MNIST, USPS AND SVHN DIGITS DATASETS We have seen that the end-to-end JAE-with-transfer algorithm outperforms the alternative approach. We now compare it to other domain adaptation methods that use little to no target samples for supervised learning, applied to the MNIST, USPS and SVHN digit datasets. The transfer tasks we consider are MNIST→USPS, USPS→MNIST and SVHN→MNIST. Following (Tzeng et al., 2017) and (Long et al., 2013), we use 2000 samples for MNIST and 1800 samples from USPS. For SVHN→MNIST, we use the complete training sets. In all three tasks, both the source and the target samples are used for the unsupervised JAE training. In addition, the source samples are used for the source supervised element of the network. We study the weakly-supervised performance of JAE and ADDA allowing access to a small number of target samples, ranging from 5 to 50 per digit. For the supervised version of ADDA, we fine-tune the classifiers using the small labeled target sets after the domain adaptation. Figure 3(a)-(c) provides the results of our experiments. For recent methods such as CoGAN, gradient reversal, domain confusion and DSN, we display results with zero supervision, as they do not support weakly-supervised training. For DSN, we provide preliminary results on MNIST↔USPS, without the model optimization that would likely prevent over-fitting. On all tasks, we achieve results comparable or superior to existing methods using very limited supervision, despite JAE being both conceptually and computationally simpler than competing approaches. In particular, we do not train a GAN as in CoGAN, and we require a single end-to-end training period, unlike ADDA, which trains three separate networks in three steps. Computationally, the models used for MNIST→USPS and USPS→MNIST have 1.36M parameters, whereas ADDA uses over 1.5M weights. For SVHN→MNIST, we use a model with 3M weights, comparable to the 1.5M parameters in ADDA and smaller by an order of magnitude than DSN. The SVHN→MNIST task is considered the hardest (for instance, GAN-based approaches fail to address it), yet the abundance of unsupervised training data allows us to achieve good results relative to previous methods. We provide further demonstration that knowledge is indeed transferred from the source to the target in the MNIST→USPS transfer task with 50 samples per digit. Source supervised learning, target unsupervised learning and target classifier training are frozen after the source classifier saturates (epoch 4). The subsequent target test improvement by 2% is due solely to the source dataset reconstruction training, passed to the target via the shared bottleneck layer (Figure 3(d)). 4.2.3 THREE-WAY TRANSFER LEARNING We demonstrate the ability to extend our approach to multiple tasks with ease by transferring knowledge from SVHN to MNIST and USPS simultaneously. That is, we train a triple-task JAE reconstructing all three datasets, with additional supervised training on SVHN and weakly-supervised training on the target sets. All labeled samples are used for the source, while the targets use 50 samples per digit.
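In code, the extension from two tasks to three is mechanical: a third input gets its own private branch and reuses the same shared-code layer instance, with task-specific shallow layers absorbing the different input sizes of SVHN, MNIST and USPS before the shared layer. The sketch below makes that concrete under the same assumptions as the earlier sketches (dense layers, assumed sizes, averaged branch reconstructions); the actual architectures are given only in the appendix.

```python
import numpy as np
from tensorflow.keras import layers, Model, Input

shared_code = layers.Dense(64, activation="relu", name="shared_code")  # one instance, tied across all three tasks

def task_branches(input_shape, name, n_classes=10):
    d = int(np.prod(input_shape))
    x = Input(shape=input_shape, name=name + "_in")
    flat = layers.Flatten()(x)
    recons, codes = [], []
    for code_layer in (layers.Dense(64, activation="relu"),  # private bottleneck for this task
                       shared_code):                          # shared bottleneck, common to all tasks
        # task-specific shallow layer brings every domain to a common width before the deep shared layer
        h = layers.Dense(256, activation="relu")(flat)
        z = code_layer(h)
        recons.append(layers.Dense(d, activation="sigmoid")(layers.Dense(256, activation="relu")(z)))
        codes.append(z)
    recon = layers.Average(name=name + "_recon")(recons)
    cls = layers.Dense(n_classes, activation="softmax", name=name + "_cls")(layers.Concatenate()(codes))
    return x, recon, cls

inputs, outputs = [], []
for name, shape in {"svhn": (32, 32, 3), "mnist": (28, 28, 1), "usps": (16, 16, 1)}.items():
    x, recon, cls = task_branches(shape, name)
    inputs.append(x)
    outputs += [recon, cls]

# Reconstruction targets are the flattened images; compile with per-output losses and
# per-task loss weights exactly as in the two-task case.
triple_jae = Model(inputs, outputs)
```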
The results illustrate the benefits of multi-task learning: 94.5% classification accuracy for MNIST, a 0.8% improvement over the SVHN→MNIST task, and 88.9% accuracy on USPS, a 1.2% improvement over SVHN→USPS. This is consistent with unsupervised learning being useful for the classification. USPS is much smaller, thus it has a lower score, but it benefits relatively more from the presence of the other, larger, task. We stress that the extension to multiple tasks was straightforward, and indeed we did not tweak the models for the various tasks, opting instead for the previously used JAEs, with a single shared bottleneck. Most state-of-the-art transfer methods do not allow for an obvious, immediate adaptation to transfer learning between multiple tasks. 5 CONCLUSION We presented a general scheme for incorporating prior knowledge within deep feedforward neural networks for domain adaptation, multi-task and transfer learning problems. The approach is general and flexible, operates in an end-to-end setting, and enables the system to self-organize to solve tasks based on prior or concomitant exposure to similar tasks, requiring only standard gradient-based optimization for learning. The basic idea of the approach is the sharing of representations for aspects which are common to all domains/tasks while maintaining private branches for task-specific features. The method is applicable to data from multiple sources and types, and has the advantage of being able to share weights at arbitrary network levels, enabling abstract levels of sharing. We demonstrated the efficacy of our approach on several domain adaptation and transfer learning problems, and provided intuition about the meaning of the representations in the various branches. In a broader context, it is well known that the imposition of structural constraints on neural networks, usually based on prior domain knowledge, can significantly enhance their performance. The prime example of this is, of course, the convolutional neural network. Our work can be viewed within that general philosophy, showing that improved functionality can be attained through the modular prior structures imposed on the system, while maintaining simple learning rules.
1. What is the main contribution of the paper on end-to-end transfer learning and domain adaptation? 2. What are the strengths and weaknesses of the proposed framework, particularly in comparison to existing works? 3. How does the reviewer assess the novelty and significance of the work? 4. Are there any concerns regarding the experimental results and their interpretation? 5. Does the reviewer have any suggestions for future research directions related to this topic?
Review
Review The work proposed a generic framework for end-to-end transfer learning / domain adaptation with deep neural networks. The idea is to learn joint autoencoders, containing a private branch with task/domain-specific weights, and a common branch consisting of shared weights used across tasks/domains together with task/domain-specific weights. Supervised losses are added after the encoders to utilize labeled samples from different tasks. Experiments on the MNIST and CIFAR datasets showed improvements over baseline models. Its performance is comparable to / worse than several existing deep domain adaptation works on the MNIST, USPS and SVHN digit datasets. The structure of the paper is good, and easy to read. The idea is fairly straightforward. It reads as an extension of "frustratingly easy domain adaptation" to DNNs (please cite this work). Different from most existing work on DNNs for multi-task/transfer learning, which focuses on weight sharing in bottom layers, the work emphasizes the importance of weight sharing in deeper layers. The overall novelty of the work is limited though. The authors brought up two strategies for learning the shared and private weights at the end of section 3.2. However, no follow-up comparison between the two is provided. It seems like most of the results are coming from the end-to-end learning. Experimental results: section 4.1: Figure 2 is flawed. The colors do not correspond to the sub-tasks. For example, there are digits 1, 4 in magenta, which is supposed to be the shared branch of the digits 5-9, and vice versa. When the capacity of the JAE is reduced to match the baseline, most of the improvement is gone. It is not clear how much of the improvement will remain if the baseline model gets to see all the samples instead of just those from each sub-task. section 4.2.1: The authors demonstrate the influence of shared layer depth in table 2. While it does seem to matter for tasks with dissimilar inputs, have the authors compared using a completely shared branch, or sharing more than just a single layer? The authors suggested in the section 4.1 CIFAR experiment that the proposed method provides a larger performance boost when the two tasks are more similar, which seems to contradict the results shown in Figure 3, where its performance is worse when transferring between USPS and MNIST (more similar tasks) than between SVHN and MNIST. Do the authors have any insight?
ICLR
1. How can shared features be identified in neural networks trained on different datasets? 2. Can auto-encoders be used to exploit common features between two datasets? 3. How can the weights of a neural network be shared to identify common features between two datasets? 4. What is the objective function used to minimize the loss in the proposed method? 5. Are there any limitations to the proposed approach in terms of simplicity or effectiveness compared to existing methods?
Review
Review The paper addresses the question of identifying 'shared features' in neural networks trained on different datasets. Concretely, suppose you have two datasets X1, X2 and you would like to train auto-encoders (with potential augmentation with labeled examples) for the two datasets. One could work on the two separately; here, the authors propose sharing some of the weights to try and exploit/identify common features between the two datasets. The authors formalize this by essentially looking to optimize an auto-encoder that takes inputs of the form (x1, x2) and employing architectures that allow a few nodes to interact with both x1 and x2. The authors then try to minimize an appropriate loss function by standard methods. The authors then apply the above methodology to transfer learning between various datasets. The empirical results here are interesting but not particularly striking; the most salient feature is perhaps that the architectures and training algorithms are a bit simpler, but the overall improvements over existing methods are not too exciting.
ICLR
Title Joint autoencoders: a flexible meta-learning framework Abstract The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples. Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest. Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a datadriven fashion. We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task. Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations. The method deals with meta-learning (such as domain adaptation, transfer and multi-task learning) in a unified fashion, and can easily deal with data arising from different types of sources. Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network. 1 INTRODUCTION A major goal of inductive learning is the selection of a rule that generalizes well based on a finite set of examples. It is well-known ((Hume, 1748)) that inductive learning is impossible unless some regularity assumptions are made about the world. Such assumptions, by their nature, go beyond the data, and are based on prior knowledge achieved through previous interactions with ’similar’ problems. Following its early origins ((Baxter, 2000; Thrun and Pratt, 1998)), the incorporation of prior knowledge into learning has become a major effort recently, and is gaining increasing success by relying on the rich representational flexibility available through current deep learning schemes (Bengio et al., 2013). Various aspects of prior knowledge are captured in different settings of meta-learning, such as learning-to-learn, domain adaptation, transfer learning, multi-task learning, etc. (e.g., (Goodfellow et al., 2016)). In this work, we consider the setup of multi-task learning, first formalized in (Baxter, 2000), where a set of tasks is available for learning, and the objective is to extract knowledge from a subset of tasks in order to facilitate learning of other, related, tasks. Within the framework of representation learning, the core idea is that of shared representations, allowing a given task to benefit from what has been learned from other tasks, since the shared aspects of the representation are based on more information (Zhang et al., 2008). We consider both unsupervised and semi-supervised learning setups. In the former setting we have several related datasets, arising from possibly different domains, and aim to compress each dataset based on features that are shared between the datasets, and on features that are unique to each problem. Neither the shared nor the individual features are given apriori, but are learned using a deep neural network architecture within an autoencoding scheme. 
While such a joint representation could, in principle, serve as a basis for supervised learning, it has become increasingly evident that representations should contain some information about the output (label) identity in order to perform well, and that using pre-training based on unlabeled data is not always advantageous (e.g., chap. 15 in (Goodfellow et al., 2016)). However, since unlabeled data is far more abundant than labeled data, much useful information can be gained from it. We therefore propose a joint encoding-classification scheme where both labeled and unlabeled data are used for the multiple tasks, so that internal representations found reflect both types of data, but are learned simultaneously. The main contributions of this work are: (i) A generic and flexible modular setup for combining unsupervised, supervised and transfer learning. (ii) Efficient end-to-end transfer learning using mostly unsupervised data (i.e., very few labeled examples are required for successful transfer learning). (iii) Explicit extraction of task-specific and shared representations. 2 RELATED WORK Previous related work can be broadly separated into two classes of models: (i) Generative models attempting to learn the input representations. (ii) Non-generative methods that construct separate or shared representations in a bottom-up fashion driven by the inputs. We first discuss several works within the non-generative setting. The Deep Domain Confusion (DDC) algorithm in (Tzeng et al., 2014) studies the problems of unsupervised domain adaptation based on sets of unlabeled samples from the source and target domains, and supervised domain adaptation where a (usually small) subset of the target domain is labeled . By incorporating an adaptation layer and a domain confusion loss they learn a representation that optimizes both classification accuracy and domain invariance, where the latter is achieved by minimizing an appropriate discrepancy measure. By maintaining a small distance between the source and target representations, the classifier makes good use of the relevant prior knowledge. The algorithm suggested in (Ganin and Lempitsky, 2015) augments standard deep learning with a domain classifier that is connected to the feature extractor, and acts to modify the gradient during backpropagation. This adaptation promotes the similarity between the feature distributions in a domain adaptation task. The Deep Reconstruction Classification Network (DRCN) in (Ghifary et al., 2016) tackles the unsupervised domain adaptation task by jointly learning a shared encoding representation of the source and target domains based on minimizing a loss function that balances between the classification loss of the (labeled) source data and the reconstruction cost of the target data. The shared encoding parameters allow the target representation to benefit from the ample source supervised data. In addition to these mostly algorithmic approaches, a number of theoretical papers have attempted to provide a deeper understanding of the benefits available within this setting (Ben-David et al., 2009; Maurer et al., 2016). Next, we mention some recent work within the generative approach, briefly. Recent work has suggested several extensions of the increasingly popular Generative Adversarial Networks (GAN) framework (Goodfellow et al., 2014). The Coupled Generative Adversarial Network (CoGAN) framework in (Liu and Tuzel, 2016) aims to generate pairs of corresponding representations from inputs arising from different domains. 
They propose learning joint distributions over two domains based only on samples from the marginals. This yields good results for small datasets, but is unfortunately challenging to achieve for large adaptation tasks, and is computationally cumbersome. The Adversarial Discriminative Domain Adaptation (ADDA) approach (Tzeng et al., 2017) subsumes some previous results within the GAN framework of domain adaptation. The approach learns a discriminative representation using the data in the labeled source domain, and then learns to adapt the model for use in the (unlabeled) target domain through a domain adversarial loss function. The idea is implemented through a minimax formulation similar to the original GAN setup. The extraction of shared and task-specific representations is the subject of a number of works, such as (Evgeniou and Pontil, 2004) and (Parameswaran and Weinberger, 2010). However, works in this direction typically require inputs of the same dimension and for the sizes of their shared and task-specific features to be the same. A great deal of work has been devoted to multi-modal learning where the inputs arise from different modalities. Exploiting data from multiple sources (or views) to extract meaningful features, is often done by seeking representations that are sensitive only to the common variability in the views and are indifferent to view-specific variations. Many methods in this category attempt to maximize the correlation between the learned representations, as in the linear canonical correlation analysis (CCA) technique and its various nonlinear extensions (Andrew et al., 2013; Michaeli et al., 2016). Other methods use losses based on both correlation and reconstruction error (in an auto-encoding like scheme) (Wang et al., 2015), or employ diffusion processes to reveal the common underlying manifold (Lederman and Talmon, 2015). However, all multi-view representation learning algorithms rely on paired examples from the two views. This setting is thus very different from transfer learning, multi-task learning, or domain adaptation, where one has access only to unpaired samples from each of the domains. While GANs provide a powerful approach to multi-task learning and domain adaptation, they are often hard to train and fine tune ((Goodfellow, 2016)). Our approach offers a complementary nongenerative perspective, and operates in an end-to-end fashion allowing the parallel training of multiple tasks, incorporating both unsupervised, supervised and transfer settings within a single architecture. This simplicity allows the utilization of standard optimization techniques for regular deep feedforward networks, so that any advances in that domain translate directly into improvements in our results. The approach does not require paired inputs and can operate with inputs arising from entirely different domains, such as speech and audio (although this has not been demonstrated empirically here). Our work is closest to (Bousmalis et al., 2016)which shares with us the separation into common and private branches. They base their optimization on several loss functions beyond the reconstruction and classification losses, enforcing constraints on intermediate representations. Specifically, they penalize differences between the common and private branches of the same task, and encourage similarity between the different representations of the source and target in the common branch. 
This multiplicity of loss functions adds several free parameters to the problem that require further fine-tuning. Our framework uses only losses penalizing reconstruction and classification errors, thereby directly focusing on the task without adding internal constrains. Moreover, since DSN does not use a classification error for the target it cannot use labeled targets, and thus can only perform unsupervised transfer learning. Also, due to the internal loss functions, it is not clear how to extend DSN to multi-task learning, which is immediate in our formalism. Practically, the proposed DSN architecture is costly; it is larger by more than on order of magnitude than either the models we have studied or ADDA. Thus it is computationally challenging as well as relatively struggling to deal with small datasets. 3 JOINT AUTOENCODERS In this section, we introduce joint autoencoders (JAE), a general method for multi-task learning by unsupervised extraction of features shared by the tasks as well as features specific to each task. We begin by presenting a simple case, point out the various possible generalizations, and finally describe two transfer and multi-task learning procedures utilizing joint autoencoders. 3.1 JOINT AUTOENCODERS FOR RECONSTRUCTION Consider a multi-task learning scenario with T tasks t1, ..., tT defined by domains {( X i )}T i=1 . Each task ti is equipped with a set of unlabeled samples { xin ∈ X i }Ni,u n=1 ,whereN i,u denotes the size of the unlabeled data set, and with a reconstruction loss function `ir ( xin, x̃ i n ) , where x̃in is the reconstruction of the sample xin. Throughout the paper, we will interpret ` i r as the L2 distance between x i n and x̃ i n, but in principle `ir can represent any unsupervised learning goal. The tasks are assumed to be related, and we are interested in exploiting this similarity to improve the reconstruction. To do this, we make the following two observations: (i) Certain aspects of the unsupervised tasks we are facing may be similar, but other aspects may be quite different (e.g., when two domains contain color and grayscale images, respectively). (ii) The similarity between the tasks can be rather “deep”. For example, cartoon images and natural images may benefit from different low-level features, but may certainly share high-level structures. To accommodate these two observations, we associate with each task ti a pair of functions: f ip ( x; θip ) , the “private branch”, and f is ( x; θis, θ̃s ) , the “shared branch” . The functions f ip are responsible for the task-specific representations of ti and are parametrized by parameters θip. The functions f i s are responsible for the shared representations, and are parametrized, in addition to parameters θis, by θ̃s shared by all tasks. The key idea is that the weight sharing forces the common branches to learn to represent the common features of the two sources. Consequently, the private branches are implicitly forced to capture only the features that are not common to the other task. We aim at minimizing the cumulative weighted loss Lr = T∑ i=1 wir Ni,u∑ n=1 `ir ( xin, f i p ( xin; θ i p ) , f is ( xin; θ i s, θ̃s )) . (1) In practice, we implement all functions as autoencoders and the shared parameters θ̃s as the bottleneck of the shared branch of each task, with identical weights across the tasks. Our framework, however, supports more flexible sharing as well, such as sharing more than a single layer, or even partially shared layers. 
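To make the weight-sharing scheme above concrete, the following is a minimal PyTorch sketch of a two-task joint autoencoder with one private and one shared branch per task, where every shared branch reuses a single bottleneck layer (the tied parameters θ̃s). This is not the authors' implementation (the experiments in Section 4 use Keras over TensorFlow); the layer sizes, the single shared layer, the choice to sum the two branch reconstructions, and the names Branch, JointAutoencoder and reconstruction_loss are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    # One autoencoder branch: encoder -> bottleneck -> decoder. Passing in an
    # existing nn.Linear as shared_bottleneck ties that layer's weights across tasks.
    def __init__(self, in_dim, hid=256, code=32, shared_bottleneck=None):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid)
        self.bottleneck = shared_bottleneck if shared_bottleneck is not None else nn.Linear(hid, code)
        self.dec = nn.Sequential(nn.Linear(code, hid), nn.ReLU(), nn.Linear(hid, in_dim))

    def forward(self, x):
        z = self.bottleneck(torch.relu(self.enc(x)))
        return self.dec(torch.relu(z))

class JointAutoencoder(nn.Module):
    # Each task i gets a private branch f_p^i and a shared branch f_s^i; the shared
    # branches all contain the very same bottleneck module (the shared parameters).
    def __init__(self, in_dims, hid=256, code=32):
        super().__init__()
        shared = nn.Linear(hid, code)
        self.private = nn.ModuleList([Branch(d, hid, code) for d in in_dims])
        self.shared = nn.ModuleList([Branch(d, hid, code, shared) for d in in_dims])

    def forward(self, xs):
        # one plausible reading of Eq. (1): reconstruct each x_i from the sum of its two branches
        return [self.private[i](x) + self.shared[i](x) for i, x in enumerate(xs)]

def reconstruction_loss(model, xs, weights):
    # L_r = sum_i w_i^r * ||x_i - x~_i||^2 over the unlabeled data of each task
    return sum(w * F.mse_loss(x_hat, x) for w, x_hat, x in zip(weights, model(xs), xs))

# usage: two flattened-image tasks of dimensions 784 (MNIST-like) and 1024
jae = JointAutoencoder(in_dims=[784, 1024])
xs = [torch.randn(16, 784), torch.randn(16, 1024)]
loss = reconstruction_loss(jae, xs, weights=[1.0, 1.0])
loss.backward()

Because the same nn.Linear object appears in every shared branch, its gradients accumulate from all tasks during backpropagation, which is exactly what pushes that layer toward features common to the tasks while the private branches pick up the task-specific residue.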
The resulting network can be trained with standard backpropagation on all reconstruction losses simultaneously. Figure 1(a) illustrates a typical autoencoder for the MNIST dataset, and Figure 1(b) illustrates the architecture obtained from implementing all branches in the formal description above with such autoencoders (AE). We call this architecture a joint autoencoder (JAE). As mentioned before, in this simple example, both inputs are MNIST digits, all branches have the same architecture, and the bottlenecks are single layers of the same dimension. However, this need not be the case. The inputs can be entirely different (e.g., image and text), all branches may have different architectures, the bottleneck sizes can vary, and more than a single layer can be shared. Furthermore, the shared layers need not be the bottlenecks, in general. Finally, the generalization to more than two tasks is straightforward - we simply add a pair of autoencoders for each task, and share some of the layers of the common-feature autoencoders. Weight sharing can take place between subsets of tasks, and can occur at different levels for the different tasks. 3.2 JOINT AUTOENCODERS FOR MULTI-TASK, SEMI-SUPERVISED AND TRANSFER LEARNING Consider now a situation in which, in addition to the unlabeled samples from all domains X i, we also have datasets of labeled pairs {( xik, y i k )}Ni,l k=1 where N i,l is the size of the labeled set for task ti and is assumed to be much smaller than N i,u. The supervised component of each task ti is reflected in the supervised loss `ic ( yin, ỹ i n ) , typically multi-class classification. We extend our loss definition in Equation 1 to be L = Lr + Lc = Lr + T∑ i=1 wic Ni,l∑ n=1 `ic ( yin, f i p ( xin; θ i p ) , f is ( xin; θ i s, θ̃s )) , (2) where we now interpret the functions f is,f i p to also output a classification. Figure 1(c) illustrates the schematic structure of a JAE extended to include supervised losses. Note that this framework supports various learning scenarios. Indeed, if a subset of the tasks has N i,l = 0, the problem becomes one of unsupervised domain adaptation. The case where N i,l are all or mostly small describes semi-supervised learning. If some of the labeled sets are large while the others are either small or empty, we find ourselves facing a transfer learning challenge. Finally, when all labeled sets are of comparable sizes, this is multi-task learning, either supervised (when N i,l are all positive) or unsupervised (when N i,l = 0). We describe two strategies to improve supervised learning by exploiting shared features. Common-branch transfer In this approach, we first train joint autoencoders on both source and target tasks simultaneously, using all available unlabeled data. Then, for the source tasks (the ones with more labeled examples), we fine-tune the branches up to the shared layer using the sets of labeled samples, and freeze the learned shared layers. Finally, for the target tasks, we use the available labeled data to train only its private branches while fixing the shared layers fine-tuned on the source data. End-to-end learning The second, end-to-end approach, combines supervised and unsupervised training. Here we extend the JAE architecture by adding new layers, with supervised loss functions for each task; see Figure 1(c). We train the new network using all losses from all tasks simultaneously - reconstruction losses using unlabeled data, and supervised losses using labeled data. 
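Continuing the PyTorch sketch above (so this fragment is not self-contained on its own), the end-to-end variant only adds a classification head per task and sums the supervised losses with the reconstruction losses, as in Equation 2. Feeding each head with the concatenation of the private and shared codes, and the names SupervisedJAE and total_loss, are assumptions; the paper only states that the branch functions also output a classification.

class SupervisedJAE(nn.Module):
    # JointAutoencoder from the previous sketch plus one linear classifier per task,
    # here fed by the concatenated private and shared bottleneck codes.
    def __init__(self, in_dims, n_classes, hid=256, code=32):
        super().__init__()
        self.jae = JointAutoencoder(in_dims, hid, code)
        self.heads = nn.ModuleList([nn.Linear(2 * code, c) for c in n_classes])

    def classify(self, i, x):
        zp = self.jae.private[i].bottleneck(torch.relu(self.jae.private[i].enc(x)))
        zs = self.jae.shared[i].bottleneck(torch.relu(self.jae.shared[i].enc(x)))
        return self.heads[i](torch.cat([zp, zs], dim=1))

def total_loss(model, unlabeled_xs, labeled, w_r, w_c):
    # L = L_r + L_c: reconstruction on all (typically plentiful) unlabeled data,
    # cross-entropy on the (typically small) labeled sets; labeled[i] may be None
    # for a task with no labels, which recovers unsupervised domain adaptation.
    loss = reconstruction_loss(model.jae, unlabeled_xs, w_r)
    for i, batch in enumerate(labeled):
        if batch is not None:
            x, y = batch
            loss = loss + w_c[i] * F.cross_entropy(model.classify(i, x), y)
    return loss

Optimizing all terms at once corresponds to the end-to-end strategy; the common-branch transfer strategy instead trains in stages, freezing the shared layers learned on the source tasks before fitting the target tasks' private branches on their small labeled sets.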
When the size of the labeled sets is highly non-uniform, the network is naturally suitable for transfer learning. When the labeled sample sizes are roughly of the same order of magnitude, the setup is suitable for semi-supervised learning. 3.3 ON THE DEPTH OF SHARING It is common knowledge that similar low-level features are often helpful for similar tasks. For example, in many vision applications, CNNs exhibit the same Gabor-type filters in their first layer, regardless of the objects they are trained to classify. This observation makes low-level features immediate candidates for sharing in multi-task learning settings. However, unsurprisingly, sharing low-level features is not as beneficial when working with domains of different nature (e.g., handwritten digits vs. street signs). Our approach allows to share weights in deeper layers of a neural net, while leaving the shallow layers un-linked. The key idea is that by forcing all shared-branch nets to share deep weights, their preceding shallow layers must learn to transform the data from the different domains into a common form. We support this intuition through several experiments. As our preliminary results in Section 4.2.1 show, for similar domains, sharing deep layers provides the same performance boost as sharing shallow layers. Thus, we pay no price for relying only on “deep similarities”. But for domains of a different nature, sharing deep layers has a clear advantage. 4 EXPERIMENTS All experiments were implemented in Keras over Tensorflow. The code will be made available soon, and the network architectures used are given in detail in the appendix. 4.1 UNSUPERVISED LEARNING We present experimental results demonstrating the improvement in unsupervised learning of multiple tasks on the MNIST and CIFAR-10 datasets. For the MNIST experiment, we have separated the training images into two subsets: X1, containing the digits {0− 4} and X2, containing the digits {5− 9}. We compared the L2 reconstruction error achieved by the JAE to a baseline of a pair of AEs trained on each dataset with architecture identical to a single branch of the JAE. The joint autoencoder (MSE =5.4) out-performed the baseline (MSE = 5.6) by 4%. The autoencoders had the same cumulative bottleneck size as the JAE, to ensure the same hidden representation size. To ensure we did not benefit solely from increased capacity, we also compared the AEs to a JAE with the same total number of parameters as the baseline, obtained by reducing the size of each layer by√ 2. This model achieved an MSE of 5.52, a1.4% improvement over the baseline. To further understand the features learned by the shared and private bottlenecks, we visualize the activations of the bottlenecks on 1000 samples from each dataset, using 2D t-SNE embeddings (van der Maaten and Hinton, 2008). Figure 2(a) demonstrates that the common branches containing the shared layer (green and magenta) are much more mixed between themselves than the private branches (red and black), indicating that they indeed extract shared features. Figure 2(b) displays examples of digits reconstructions. The columns show (from left to right) the original digit, the image reconstructed by the full JAE, the output of the private branches and the shared branches. We see that the common branches capture the general shape of the digit, while the private branches capture the fine details which are specific to each subset. We verify quantitatively the claim about the differences in separation between the private and shared branches. 
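One simple way to quantify that separation, and presumably what is meant by the Fisher criterion reported next, is the standard two-class Fisher ratio of between-cluster to within-cluster scatter computed on the 2-D t-SNE embeddings of a branch's bottleneck activations; the exact variant used in the paper is not specified, so the following numpy sketch is an assumption.

import numpy as np

def fisher_criterion(emb_a, emb_b):
    # Two-class Fisher ratio for point clouds of shape (n, d): squared distance
    # between the class means divided by the summed within-class variances.
    # Larger values mean the embeddings of the two datasets are better separated.
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    between = np.sum((mu_a - mu_b) ** 2)
    within = emb_a.var(axis=0).sum() + emb_b.var(axis=0).sum()
    return between / within

# sanity check on synthetic 2-D "embeddings": well-separated clouds score higher
rng = np.random.default_rng(0)
far = fisher_criterion(rng.normal(0, 1, (1000, 2)), rng.normal(5, 1, (1000, 2)))
near = fisher_criterion(rng.normal(0, 1, (1000, 2)), rng.normal(0.5, 1, (1000, 2)))
assert far > near

In the comparison that follows, emb_a and emb_b would be the t-SNE embeddings of a given branch's activations on samples from the {0-4} and {5-9} subsets; a lower ratio for the shared branch indicates that it mixes the two datasets more than the private branches do.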
The Fisher criterion for the separation between the t-SNE embeddings of the private branches is 7.22 · 10−4, whereas its counterpart for the shared branches is 2.77 · 10−4, 2.6 times less. Moreover, the shared branch embedding variance for both datasets is approximately identical, whereas the private branches map the dataset they were trained on to locations with variance greater by 1.35 than the dataset they had no access to. This illustrates the extent to which the shared branches learn to separate both datasets better than the private ones. For CIFAR-10 we trained the baseline autoencoder on single-class subsets of the database (e.g., all airplane images) and trained the JAE on pairs of such subsets. Table 1 shows a few typical results, demonstrating a consistent advantage for JAEs. Besides the lower reconstruction error, we can see that visually similar image classes, enjoy a greater boost in performance. For instance, the pair deer-horses enjoyed a performance boost of 37%, greater than the typical boost of 33 − 35%. As with MNIST, we also compared the pair of autoencoders to a JAE with the same total number of parameters (obtained by √ 2 size reduction of each layer), achieving 22 − 24% boost. Thus, the observed improvement is clearly not a result of mere increased network capacity. Performance of JAEs and JAEs reduced by a √ 2 factor vs standard AEs in terms of reconstruction MSE on pairs of objects in CIFAR-10: airplanes (A), deer (D), horses (H), ships(S). For each pair of objects, we give the standard AE error, JAE and JAE-reduced error and the improvement percentage. We remark that we experimented with an extension of unsupervised JAEs to variational autoencoders ((Kingma and Welling, 2014)). Unlike standard VAEs, we trained three hidden code layers, requiring each to have a small Kullback-Leibler divergence from a given normal distribution. One of these layers was used to reconstruct both datasets (analogous to the shared bottleneck in a JAE), while the other two were dedicated each to one of the datasets (analogous to the private branches). The reconstruction results on the halves of the MNIST dataset were promising, yielding an improvement of 12% over a pair of VAEs of the same cumulative size. Unfortunately, we were not able to achieve similar results on the CIFAR-10 dataset, nor to perform efficient multi-task\ transfer learning with joint VAEs. This remains an intriguing project for the future, 4.2 TRANSFER LEARNING Next, we compare the performance on MNIST of the two JAE-based transfer learning methods detailed in Section 3.2. For both methods, X1 contains digits from {0− 4} and X2 contains the digits {5− 9}. The source and target datasets comprise 2000 and 500 samples, respectively. All results are measured on the full MNIST test set. The common-branch transfer method yields 92.3% and 96.1% classification precision for the X1 → X2 and X2 → X1 transfer tasks, respectively. The end-to-end approach results in 96.6% and 98.3% scores on the same tasks, which demonstrates the superiority of the end-to-end approach. 4.2.1 SHARED LAYER DEPTH We investigate the influence of shared layer depth on the transfer performance. We see in Table 2 that for highly similar pairs of tasks such as the two halves of the MNIST dataset, the depth has little significance, while for dissimilar pairs such as MNIST-USPS, “deeper is better” - the performance improves with the shared layer depth. 
Moreover, when the input dimensions differ, early sharing is impossible - the data must first be transformed to have the same dimensions. 4.2.2 MNIST, USPS AND SVHN DIGITS DATASETS We have seen that the end-to-end JAE-with-transfer algorithm outperforms the alternative approach. We now compare it to other domain adaptation methods that use little to no target samples for Influence of the shared layer depth on the transfer learning performance. For the MNIST-USPS pair, only partial data are available for dimensional reasons. supervised learning, applied to the MNIST, USPS and SVHN digits datasets. The transfer tasks we consider are MNIST→USPS , USPS→MNIST and SVHN→MNIST. Following (Tzeng et al., 2017) and (Long et al., 2013), we use 2000 samples for MNIST and 1800 samples from USPS. For SVHN→MNIST, we use the complete training sets. In all three tasks, both the source and the target samples are used for the unsupervised JAE training. In addition, the source samples are used for the source supervised element of the network. We study the weakly-supervised performance of JAE and ADDA allowing access to a small number of target samples, ranging from 5 to 50 per digit. For the supervised version of ADDA, we fine-tune the classifiers using the small labeled target sets after the domain adaptation. Figure 3 (a)− (c) provides the results of our experiments. For recent methods such as CoGAN, gradient reversal, domain confusion and DSN, we display results with zero supervision, as they do not support weakly-supervised training. For DSN, we provide preliminary results on MNIST↔USPS, without model optimization that is likely to prevent over-fitting. On all tasks, we achieve results comparable or superior to existing methods using very limited supervision, despite JAE being both conceptually and computationally simpler than competing approaches. In particular, we do not train a GAN as in CoGAN, and require a single end-to-end training period, unlike ADDA that trains three separate networks in three steps. Computationally, the models used for MNIST→USPS and USPS→MNIST have 1.36M parameters, whereas ADDA uses over 1.5M weights. For SVHN→MNIST, we use a model with 3M weights, comparable to the 1.5M parameters in ADDA and smaller by an order of magnitude than DSN. The SVHN→MNIST task is considered the hardest (for instance, GAN-based approaches fail to address it) yet the abundance of unsupervised training data allows us to achieve good results, relative to previous methods. We provide further demonstration that knowledge is indeed transferred from the source to the target in the MNIST→USPS transfer task with 50 samples per digit. Source supervised learning, target unsupervised learning and target classifier training are frozen after the source classifier saturates (epoch 4). The subsequent target test improvement by 2% is due solely to the source dataset reconstruction training, passed to the target via the shared bottleneck layer (Figure 3(d)). 4.2.3 THREE-WAY TRANSFER LEARNING We demonstrate the ability to extend our approach to multiple tasks with ease by transferring knowledge from SVHN to MNIST and USPS simultaneously. That is, we train a triple-task JAE reconstructing all three datasets, with additional supervised training on SVHN and weakly-supervised training on the target sets. All labeled samples are used for the source, while the targets use 50 samples per digit. 
The results illustrate the benefits of multi-task learning: 94.5% classification accuracy for MNIST, a 0.8% improvement over the SVHN→MNIST task, and 88.9% accuracy on USPS, a 1.2% improvement over SVHN→USPS. This is consistent with unsupervised learning being useful for classification. USPS is much smaller, thus it has a lower score, but it benefits relatively more from the presence of the other, larger task. We stress that the extension to multiple tasks was straightforward, and indeed we did not tweak the various tasks' models, opting instead for previously used JAEs with a single shared bottleneck. Most state-of-the-art transfer methods do not allow an obvious, immediate adaptation to transfer learning between multiple tasks. 5 CONCLUSION We presented a general scheme for incorporating prior knowledge within deep feedforward neural networks for domain adaptation, multi-task and transfer learning problems. The approach is general and flexible, operates in an end-to-end setting, and enables the system to self-organize to solve tasks based on prior or concomitant exposure to similar tasks, requiring standard gradient-based optimization for learning. The basic idea of the approach is the sharing of representations for aspects which are common to all domains/tasks while maintaining private branches for task-specific features. The method is applicable to data from multiple sources and types, and has the advantage of being able to share weights at arbitrary network levels, enabling abstract levels of sharing. We demonstrated the efficacy of our approach on several domain adaptation and transfer learning problems, and provided intuition about the meaning of the representations in various branches. In a broader context, it is well known that the imposition of structural constraints on neural networks, usually based on prior domain knowledge, can significantly enhance their performance. The prime example of this is, of course, the convolutional neural network. Our work can be viewed within that general philosophy, showing that improved functionality can be attained by the modular prior structures imposed on the system, while maintaining simple learning rules.
1. What is the main contribution of the paper regarding multi-task learning?
2. How does the proposed approach work, and what are its key components?
3. What are the strengths of the paper in terms of presentation and ease of understanding?
4. What are the weaknesses of the paper, particularly in experimental results and significance?
5. Do you have any questions or concerns about the shared parameters between related tasks?
6. How does the reviewer assess the impact of the paper's contributions and novelty?
Review
Review The paper focuses on learning common features from multiple domains' data in an unsupervised and supervised learning scheme. Casting this as a general multi-task learning problem, the idea consists in jointly learning autoencoders, one for each domain, in such a way that parts of the parameters of the domain autoencoders are shared. Each domain/task autoencoder then consists of a shared part and a private part. The authors propose a variant of the model for the supervised case and end up with a general architecture for multi-task, semi-supervised and transfer learning. The presentation of the paper is good, the paper is easy to follow, and it explores the rather intuitive and simple idea of sharing parameters between related tasks. Experiments show some interesting results. First, unsupervised experiments on MNIST data show improved MSE for joint autoencoders, but are these differences really significant (e.g., from 5.6 to 5.52)? Moreover, I am not sure I understand the meaning of the separation criterion computed on t-SNE embeddings of hidden representations. Results of Table 1 show improved reconstruction performance (MSE?) of joint autoencoders over independent ones even for unrelated pairs such as airplanes and horses. I am not sure I understand why this improvement occurs even with very different classes. The investigation of the depth at which sharing should occur is quite interesting and related to the usual idea that low-level features are the more transferable ones. Results on transfer are actually the most interesting ones but do not seem to improve much over baselines.
ICLR
Title Understanding Deep Neural Networks with Rectified Linear Units Abstract In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to global optimality with runtime polynomial in the data size albeit exponential in the input dimension. Further, we improve on the known lower bounds on size (from exponential to super exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of “hard” functions, contrary to countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number k there exists a function representable by a ReLU DNN with k^2 hidden layers and total size k^3, such that any ReLU DNN with at most k hidden layers will require at least 1/2 k^{k+1} − 1 total nodes. Finally, for the family of Rn → R DNNs with ReLU activations, we show a new lower bound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and, most distinctively, our lower bound is demonstrated by an explicit construction of a smoothly parameterized family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory. 1 INTRODUCTION Deep neural networks (DNNs) provide an excellent family of hypotheses for machine learning tasks such as classification. Neural networks with a single hidden layer of finite size can represent any continuous function on a compact subset of Rn arbitrarily well. The universal approximation result was first given by Cybenko in 1989 for the sigmoidal activation function (Cybenko, 1989), and later generalized by Hornik to an arbitrary bounded and nonconstant activation function Hornik (1991). Furthermore, neural networks have finite VC dimension (depending polynomially on the number of edges in the network), and therefore, are PAC (probably approximately correct) learnable using a sample of size that is polynomial in the size of the networks Anthony & Bartlett (1999). However, neural-network-based methods were shown to be computationally hard to learn (Anthony & Bartlett, 1999) and had mixed empirical success. Consequently, DNNs fell out of favor by the late 90s. Recently, there has been a resurgence of DNNs with the advent of deep learning LeCun et al. (2015). Deep learning, loosely speaking, refers to a suite of computational techniques that have been developed recently for training DNNs. It started with the work of Hinton et al. (2006), which gave empirical evidence that if DNNs are initialized properly (for instance, using unsupervised pre-training), then we can find good solutions in a reasonable amount of runtime. This work was soon followed by a series of early successes of deep learning at significantly improving the state-of-the-art in speech recognition Hinton et al. (2012).
Since then, deep learning has received immense attention from the machine learning community with several state-of-the-art AI systems in speech recognition, image classification, and natural language processing based on deep neural nets Hinton et al. (2012); Dahl et al. (2013); Krizhevsky et al. (2012); Le (2013); Sutskever et al. (2014). While there is less of evidence now that pre-training actually helps, several other solutions have since been put forth ∗Department of Computer Science, Email: arora@cs.jhu.edu †Department of Applied Mathematics and Statistics, Email: basu.amitabh@jhu.edu ‡Department of Computer Science, Email: mianjy@jhu.edu §Department of Applied Mathematics and Statistics, Email: amukhe14@jhu.edu to address the issue of efficiently training DNNs. These include heuristics such as dropouts Srivastava et al. (2014), but also considering alternate deep architectures such as convolutional neural networks Sermanet et al. (2014), deep belief networks Hinton et al. (2006), and deep Boltzmann machines Salakhutdinov & Hinton (2009). In addition, deep architectures based on new non-saturating activation functions have been suggested to be more effectively trainable – the most successful and widely popular of these is the rectified linear unit (ReLU) activation, i.e., σ(x) = max{0, x}, which is the focus of study in this paper. In this paper, we formally study deep neural networks with rectified linear units; we refer to these deep architectures as ReLU DNNs. Our work is inspired by these recent attempts to understand the reason behind the successes of deep learning, both in terms of the structure of the functions represented by DNNs, Telgarsky (2015; 2016); Kane & Williams (2015); Shamir (2016), as well as efforts which have tried to understand the non-convex nature of the training problem of DNNs better Kawaguchi (2016); Haeffele & Vidal (2015). Our investigation of the function space represented by ReLU DNNs also takes inspiration from the classical theory of circuit complexity; we refer the reader to Arora & Barak (2009); Shpilka & Yehudayoff (2010); Jukna (2012); Saptharishi (2014); Allender (1998) for various surveys of this deep and fascinating field. In particular, our gap results are inspired by results like the ones by Hastad Hastad (1986), Razborov Razborov (1987) and Smolensky Smolensky (1987) which show a strict separation of complexity classes. We make progress towards similar statements with deep neural nets with ReLU activation. 1.1 NOTATION AND DEFINITIONS We extend the ReLU activation function to vectors x ∈ Rn through entry-wise operation: σ(x) = (max{0, x1},max{0, x2}, . . . ,max{0, xn}). For any (m,n) ∈ N, let Anm and Lnm denote the class of affine and linear transformations from Rm → Rn, respectively. Definition 1. [ReLU DNNs, depth, width, size] For any number of hidden layers k ∈ N, input and output dimensions w0, wk+1 ∈ N, a Rw0 → Rwk+1 ReLU DNN is given by specifying a sequence of k natural numbers w1, w2, . . . , wk representing widths of the hidden layers, a set of k affine transformations Ti : Rwi−1 → Rwi for i = 1, . . . , k and a linear transformation Tk+1 : Rwk → Rwk+1 corresponding to weights of the hidden layers. Such a ReLU DNN is called a (k + 1)-layer ReLU DNN, and is said to have k hidden layers. The function f : Rn1 → Rn2 computed or represented by this ReLU DNN is f = Tk+1 ◦ σ ◦ Tk ◦ · · · ◦ T2 ◦ σ ◦ T1, (1.1) where ◦ denotes function composition. The depth of a ReLU DNN is defined as k + 1. The width of a ReLU DNN is max{w1, . . . , wk}. 
The size of the ReLU DNN is w1 + w2 + . . .+ wk. Definition 2. We denote the class of Rw0 → Rwk+1 ReLU DNNs with k hidden layers of widths {wi}ki=1 by F{wi}k+1i=0 , i.e. F{wi}k+1i=0 := {Tk+1 ◦ σ ◦ Tk ◦ · · · ◦ σ ◦ T1 : Ti ∈ A wi wi−1∀i ∈ {1, . . . , k}, Tk+1 ∈ Lwk+1wk } (1.2) Definition 3. [Piecewise linear functions] We say a function f : Rn → R is continuous piecewise linear (PWL) if there exists a finite set of polyhedra whose union is Rn, and f is affine linear over each polyhedron (note that the definition automatically implies continuity of the function because the affine regions are closed and cover Rn, and affine functions are continuous). The number of pieces of f is the number of maximal connected subsets of Rn over which f is affine linear (which is finite). Many of our important statements will be phrased in terms of the following simplex. Definition 4. Let M > 0 be any positive real number and p ≥ 1 be any natural number. Define the following set: ∆pM := {x ∈ Rp : 0 < x1 < x2 < . . . < xp < M}. 2 EXACT CHARACTERIZATION OF FUNCTION CLASS REPRESENTED BY RELU DNNS One of the main advantages of DNNs is that they can represent a large family of functions with a relatively small number of parameters. In this section, we give an exact characterization of the functions representable by ReLU DNNs. Moreover, we show how structural properties of ReLU DNNs, specifically their depth and width, affects their expressive power. It is clear from definition that any function from Rn → R represented by a ReLU DNN is a continuous piecewise linear (PWL) function. In what follows, we show that the converse is also true, that is any PWL function is representable by a ReLU DNN. In particular, the following theorem establishes a one-to-one correspondence between the class of ReLU DNNs and PWL functions. Theorem 2.1. Every Rn → R ReLU DNN represents a piecewise linear function, and every piecewise linear function Rn → R can be represented by a ReLU DNN with at most dlog2(n + 1)e + 1 depth. Proof Sketch: It is clear that any function represented by a ReLU DNN is a PWL function. To see the converse, we first note that any PWL function can be represented as a linear combination of piecewise linear convex functions. More formally, by Theorem 1 in (Wang & Sun, 2005), for every piecewise linear function f : Rn → R, there exists a finite set of affine linear functions `1, . . . , `k and subsets S1, . . . , Sp ⊆ {1, . . . , k} (not necessarily disjoint) where each Si is of cardinality at most n+ 1, such that f = p∑ j=1 sj ( max i∈Sj `i ) , (2.1) where sj ∈ {−1,+1} for all j = 1, . . . , p. Since a function of the form maxi∈Sj `i is a piecewise linear convex function with at most n + 1 pieces (because |Sj | ≤ n + 1), Equation (2.1) says that any continuous piecewise linear function (not necessarily convex) can be obtained as a linear combination of piecewise linear convex functions each of which has at most n + 1 affine pieces. Furthermore, Lemmas D.1, D.2 and D.3 in the Appendix (see supplementary material), show that composition, addition, and pointwise maximum of PWL functions are also representable by ReLU DNNs. In particular, in Lemma D.3 we note that max{x, y} = x+y2 + |x−y| 2 is implementable by a two layer ReLU network and use this construction in an inductive manner to show that maximum of n+ 1 numbers can be computed using a ReLU DNN with depth at most dlog2(n+ 1)e. 
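The gadget at the heart of the proof sketch is easy to check numerically: max{x, y} = (x + y)/2 + |x − y|/2 with |z| = max{0, z} + max{0, −z}, so a pairwise maximum costs one hidden layer of ReLUs, and reducing n + 1 numbers pairwise gives the stated ⌈log2(n + 1)⌉ depth. A small numpy sketch (illustrative only; the helper names are not from the paper):

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_max2(x, y):
    # max{x, y} = (x + y)/2 + |x - y|/2, with |x - y| = relu(x - y) + relu(y - x):
    # one hidden ReLU layer followed by an affine combination of its outputs.
    return 0.5 * (x + y) + 0.5 * (relu(x - y) + relu(y - x))

def relu_max(values):
    # Pairwise reduction: the number of ReLU layers used is ceil(log2(len(values))).
    vals = list(values)
    while len(vals) > 1:
        vals = [relu_max2(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]

xs = np.random.default_rng(1).normal(size=7)
assert np.isclose(relu_max(xs), xs.max())

Combined with the decomposition (2.1), which writes any piecewise linear function as a signed sum of maxima of at most n + 1 affine functions, this pairwise-maximum circuit yields the depth bound of Theorem 2.1.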
While Theorem 2.1 gives an upper bound on the depth of the networks needed to represent all continuous piecewise linear functions on Rn, it does not give any tight bounds on the size of the networks that are needed to represent a given piecewise linear function. For n = 1, we give tight bounds on size as follows: Theorem 2.2. Given any piecewise linear function R→ R with p pieces there exists a 2-layer DNN with at most p nodes that can represent f . Moreover, any 2-layer DNN that represents f has size at least p− 1. Finally, the main result of this section follows from Theorem 2.1, and well-known facts that the piecewise linear functions are dense in the family of compactly supported continuous functions and the family of compactly supported continuous functions are dense inLq(Rn) (Royden & Fitzpatrick, 2010)). Recall that Lq(Rn) is the space of Lebesgue integrable functions f such that ∫ |f |qdµ <∞, where µ is the Lebesgue measure on Rn (see Royden Royden & Fitzpatrick (2010)). Theorem 2.3. Every function in Lq(Rn), (1 ≤ q ≤ ∞) can be arbitrarily well-approximated in the Lq norm (which for a function f is given by ||f ||q = ( ∫ |f |q)1/q) by a ReLU DNN function with at most dlog2(n + 1)e hidden layers. Moreover, for n = 1, any such Lq function can be arbitrarily well-approximated by a 2-layer DNN, with tight bounds on the size of such a DNN in terms of the approximation. Proofs of Theorems 2.2 and 2.3 are provided in Appendix A. We would like to remark that a weaker version of Theorem 2.1 was observed in (Goodfellow et al., 2013, Proposition 4.1) (with no bound on the depth), along with a universal approximation theorem (Goodfellow et al., 2013, Theorem 4.3) similar to Theorem 2.3. The authors of Goodfellow et al. (2013) also used a previous result of Wang (Wang, 2004) for obtaining their result. In a subsequent work Boris Hanin (Hanin, 2017) has, among other things, found a width and depth upper bound for ReLU net representation of positive PWL functions on [0, 1]n. The width upperbound is n+3 for general positive PWL functions and n + 1 for convex positive PWL functions. For convex positive PWL functions his depth upper bound is sharp if we disallow dead ReLUs. 3 BENEFITS OF DEPTH Success of deep learning has been largely attributed to the depth of the networks, i.e. number of successive affine transformations followed by nonlinearities, which is shown to be extracting hierarchical features from the data. In contrast, traditional machine learning frameworks including support vector machines, generalized linear models, and kernel machines can be seen as instances of shallow networks, where a linear transformation acts on a single layer of nonlinear feature extraction. In this section, we explore the importance of depth in ReLU DNNs. In particular, in Section 3.1, we provide a smoothly parametrized family of R→ R “hard” functions representable by ReLU DNNs, which requires exponentially larger size for a shallower network. Furthermore, in Section 3.2, we construct a continuum of Rn → R “hard” functions representable by ReLU DNNs, which to the best of our knowledge is the first explicit construction of ReLU DNN functions whose number of affine pieces grows exponentially with input dimension. The proofs of the theorems in this section are provided in Appendix B. 3.1 CIRCUIT LOWER BOUNDS FOR R→ R RELU DNNS In this section, we are only concerned about R → R ReLU DNNs, i.e. both input and output dimensions are equal to one. The following theorem shows the depth-size trade-off in this setting. 
Theorem 3.1. For every pair of natural numbers k ≥ 1, w ≥ 2, there exists a family of hard functions representable by a R → R (k + 1)-layer ReLU DNN of width w such that if it is also representable by a (k′ + 1)-layer ReLU DNN for any k′ ≤ k, then this (k′ + 1)-layer ReLU DNN has size at least 12k ′w k k′ − 1. In fact our family of hard functions described above has a very intricate structure as stated below. Theorem 3.2. For every k ≥ 1,w ≥ 2, every member of the family of hard functions in Theorem 3.1 has wk pieces and this family can be parametrized by⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times , (3.1) i.e., for every point in the set above, there exists a distinct function with the stated properties. The following is an immediate corollary of Theorem 3.1 by choosing the parameters carefully. Corollary 3.3. For every k ∈ N and > 0, there is a family of functions defined on the real line such that every function f from this family can be represented by a (k1+ ) + 1-layer DNN with size k2+ and if f is represented by a k+1-layer DNN, then this DNN must have size at least 12k ·kk −1. Moreover, this family can be parametrized as, ∪M>0∆k 2+ −1 M . A particularly illuminative special case is obtained by setting = 1 in Corollary 3.3: Corollary 3.4. For every natural number k ∈ N, there is a family of functions parameterized by the set ∪M>0∆k 3−1 M such that any f from this family can be represented by a k 2 + 1-layer DNN with k3 nodes, and every k + 1-layer DNN that represents f needs at least 12k k+1 − 1 nodes. We can also get hardness of approximation versions of Theorem 3.1 and Corollaries 3.3 and 3.4, with the same gaps (upto constant terms), using the following theorem. Theorem 3.5. For every k ≥ 1, w ≥ 2, there exists a function fk,w that can be represented by a (k + 1)-layer ReLU DNN with w nodes in each layer, such that for all δ > 0 and k′ ≤ k the following holds: inf g∈Gk′,δ ∫ 1 x=0 |fk,w(x)− g(x)|dx > δ, where Gk′,δ is the family of functions representable by ReLU DNNs with depth at most k′ + 1, and size at most k′w k/k′ (1−4δ)1/k′ 21+1/k′ . The depth-size trade-off results in Theorems 3.1, and 3.5 extend and improve Telgarsky’s theorems from (Telgarsky, 2015; 2016) in the following three ways: (i) If we use our Theorem 3.5 to the pair of neural nets considered by Telgarsky in Theorem 1.1 in Telgarsky (2016) which are at depths k3 (of size also scaling as k3) and k then for this purpose of approximation in the `1−norm we would get a size lower bound for the shallower net which scales as Ω(2k 2 ) which is exponentially (in depth) larger than the lower bound of Ω(2k) that Telgarsky can get for this scenario. (ii) Telgarsky’s family of hard functions is parameterized by a single natural number k. In contrast, we show that for every pair of natural numbers w and k, and a point from the set in equation 3.1, there exists a “hard” function which to be represented by a depth k′ network would need a size of at least w k k′ k′. With the extra flexibility of choosing the parameter w, for the purpose of showing gaps in representation ability of deep nets we can shows size lower bounds which are super-exponential in depth as explained in Corollaries 3.3 and 3.4. (iii) A characteristic feature of the “hard” functions in Boolean circuit complexity is that they are usually a countable family of functions and not a “smooth” family of hard functions. 
In fact, in the last section of Telgarsky (2015), Telgarsky states this as a “weakness” of the state-of-the-art results on “hard” functions for both Boolean circuit complexity and neural nets research. In contrast, we provide a smoothly parameterized family of “hard” functions in Section 3.1 (parametrized by the set in equation 3.1). Such a continuum of hard functions wasn’t demonstrated before this work. We point out that Telgarsky’s results in (Telgarsky, 2016) apply to deep neural nets with a host of different activation functions, whereas, our results are specifically for neural nets with rectified linear units. In this sense, Telgarsky’s results from (Telgarsky, 2016) are more general than our results in this paper, but with weaker gap guarantees. Eldan-Shamir (Shamir, 2016; Eldan & Shamir, 2016) show that there exists an Rn → R function that can be represented by a 3-layer DNN, that takes exponential in n number of nodes to be approximated to within some constant by a 2-layer DNN. While their results are not immediately comparable with Telgarsky’s or our results, it is an interesting open question to extend their results to a constant depth hierarchy statement analogous to the recent result of Rossman et al (Rossman et al., 2015). We also note that in last few years, there has been much effort in the community to show size lowerbounds on ReLU DNNs trying to approximate various classes of functions which are themselves not necessarily exactly representable by ReLU DNNs (Yarotsky, 2016; Liang & Srikant, 2016; Safran & Shamir, 2017). 3.2 A CONTINUUM OF HARD FUNCTIONS FOR Rn → R FOR n ≥ 2 One measure of complexity of a family of Rn → R “hard” functions represented by ReLU DNNs is the asymptotics of the number of pieces as a function of dimension n, depth k + 1 and size s of the ReLU DNNs. More precisely, suppose one has a family H of functions such that for every n, k, w ∈ N the family contains at least one Rn → R function representable by a ReLU DNN with depth at most k+ 1 and maximum width at most w. The following definition formalizes a notion of complexity for such aH. Definition 5 (compH(n, k, w)). The measure compH(n, k, w) is defined as the maximum number of pieces (see Definition 3) of a Rn → R function fromH that can be represented by a ReLU DNN with depth at most k + 1 and maximum width at most w. Similar measures have been studied in previous works Montufar et al. (2014); Pascanu et al. (2013); Raghu et al. (2016). The best known families H are the ones from Theorem 4 of (Montufar et al., 2014) and a mild generalization of Theorem 1.1 of (Telgarsky, 2016) to k layers of ReLU activations with width w; these constructions achieve ( b(wn )c )(k−1)n ( ∑n j=0 ( w j ) )and compH(n, k, s) = O(w k), respectively. At the end of this section we would explain the precise sense in which we improve on these numbers. An analysis of this complexity measure is done using integer programming techniques in (Serra et al., 2017). Definition 6. Let b1, . . . ,bm ∈ Rn. The zonotope formed by b1, . . . ,bm ∈ Rn is defined as Z(b1, . . . ,bm) := {λ1b1 + . . .+ λmbm : −1 ≤ λi ≤ 1, i = 1, . . . ,m}. The set of vertices of Z(b1, . . . ,bm) will be denoted by vert(Z(b1, . . . ,bm)). The support function γZ(b1,...,bm) : Rn → R associated with the zonotope Z(b1, . . . ,bm) is defined as γZ(b1,...,bm)(r) = max x∈Z(b1,...,bm) 〈r,x〉. The following results are well-known in the theory of zonotopes (Ziegler, 1995). Theorem 3.6. The following are all true. 1. | vert(Z(b1, . . . ,bm))| ≤∑n−1i=0 (m−1i ). 
The set of (b1, . . . ,bm) ∈ Rn × . . .× Rn such that this does not hold at equality is a 0 measure set. 2. γZ(b1,...,bm)(r) = maxx∈Z(b1,...,bm)〈r,x〉 = maxx∈vert(Z(b1,...,bm))〈r,x〉, and γZ(b1,...,bm) is therefore a piecewise linear function with | vert(Z(b1, . . . ,bm))| pieces. 3. γZ(b1,...,bm)(r) = |〈r,b1〉|+ . . .+ |〈r,bm〉|. Definition 7 (extremal zonotope set). The set S(n,m) will denote the set of (b1, . . . ,bm) ∈ Rn × . . . × Rn such that | vert(Z(b1, . . . ,bm))| = ∑n−1i=0 (m−1i ). S(n,m) is the so-called “extremal zonotope set”, which is a subset of Rnm, whose complement has zero Lebesgue measure in Rnm. Lemma 3.7. Given any b1, . . . ,bm ∈ Rn, there exists a 2-layer ReLU DNN with size 2m which represents the function γZ(b1,...,bm)(r). Definition 8. For p ∈ N and a ∈ ∆pM , we define a function ha : R → R which is piecewise linear over the segments (−∞, 0], [0,a1], [a1,a2], . . . , [ap,M ], [M,+∞) defined as follows: ha(x) = 0 for all x ≤ 0, ha(ai) = M(i mod 2), and ha(M) = M−ha(ap) and for x ≥M , ha(x) is a linear continuation of the piece over the interval [ap,M ]. Note that the function has p+ 2 pieces, with the leftmost piece having slope 0. Furthermore, for a1, . . . ,ak ∈ ∆pM , we denote the composition of the functions ha1 , ha2 , . . . , hak by Ha1,...,ak := hak ◦ hak−1 ◦ . . . ◦ ha1 . Proposition 3.8. Given any tuple (b1, . . . ,bm) ∈ S(n,m) and any point (a1, . . . ,ak) ∈ ⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times , the function ZONOTOPEnk,w,m[a 1, . . . ,ak,b1, . . . ,bm] := Ha1,...,ak ◦ γZ(b1,...,bm) has (m − 1)n−1wk pieces and it can be represented by a k + 2 layer ReLU DNN with size 2m+ wk. Finally, we are ready to state the main result of this section. Theorem 3.9. For every tuple of natural numbers n, k,m ≥ 1 and w ≥ 2, there exists a family of Rn → R functions, which we call ZONOTOPEnk,w,m with the following properties: (i) Every f ∈ ZONOTOPEnk,w,m is representable by a ReLU DNN of depth k + 2 and size 2m+ wk, and has (∑n−1 i=0 ( m−1 i )) wk pieces. (ii) Consider any f ∈ ZONOTOPEnk,w,m. If f is represented by a (k′ + 1)- layer DNN for any k′ ≤ k, then this (k′ + 1)-layer DNN has size at least max { 1 2 (k ′w k k′n ) · (m− 1)(1− 1n ) 1k′ − 1 , w k k′ n1/k′ k′ } . (iii) The family ZONOTOPEnk,w,m is in one-to-one correspondence with S(n,m)× ⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times . Comparison to the results in (Montufar et al., 2014) Firstly we note that the construction in (Montufar et al., 2014) requires all the hidden layers to have width at least as big as the input dimensionality n. In contrast, we do not impose such restrictions and the network size in our construction is independent of the input dimensionality. Thus our result probes networks with bottleneck architectures whose complexity cant be seen from their result. Secondly, in terms of our complexity measure, there seem to be regimes where our bound does better. One such regime, for example, is when n ≤ w < 2n and k ∈ Ω( nlog(n) ), by setting in our construction m < n. Thirdly, it is not clear to us whether the construction in (Montufar et al., 2014) gives a smoothly parameterized family of functions other than by introducing small perturbations of the construction in their paper. In contrast, we have a smoothly parameterized family which is in one-to-one correspondence with a well-understood manifold like the higher-dimensional torus. 4 TRAINING 2-LAYER Rn → R RELU DNNS TO GLOBAL OPTIMALITY In this section we consider the following empirical risk minimization problem. 
Given D data points (xi, yi) ∈ Rn × R, i = 1, . . . , D, find the function f represented by 2-layer Rn → R ReLU DNNs of width w, that minimizes the following optimization problem min f∈F{n,w,1} 1 D D∑ i=1 `(f(xi), yi) ≡ min T1∈Awn , T2∈L1w 1 D D∑ i=1 ` ( T2(σ(T1(xi))), yi ) (4.1) where ` : R × R → R is a convex loss function (common loss functions are the squared loss, `(y, y′) = (y − y′)2, and the hinge loss function given by `(y, y′) = max{0, 1 − yy′}). Our main result of this section gives an algorithm to solve the above empirical risk minimization problem to global optimality. Theorem 4.1. There exists an algorithm to find a global optimum of Problem 4.1 in time O(2w(D)nwpoly(D,n,w)). Note that the running time O(2w(D)nwpoly(D,n,w)) is polynomial in the data size D for fixed n,w. Proof Sketch: A full proof of Theorem 4.1 is included in Appendix C. Here we provide a sketch of the proof. When the empirical risk minimization problem is viewed as an optimization problem in the space of weights of the ReLU DNN, it is a nonconvex, quadratic problem. However, one can instead search over the space of functions representable by 2-layer DNNs by writing them in the form similar to (2.1). This breaks the problem into two parts: a combinatorial search and then a convex problem that is essentially linear regression with linear inequality constraints. This enables us to guarantee global optimality. Algorithm 1 Empirical Risk Minimization 1: function ERM(D) . Where D = {(xi, yi)}Di=1 ⊂ Rn × R 2: S = {+1,−1}w . All possible instantiations of top layer weights 3: Pi = {(P i+, P i−)}, i = 1, . . . , w . All possible partitions of data into two parts 4: P = P1 × P2 × · · · × Pw 5: count = 1 . Counter 6: for s ∈ S do 7: for {(P i+, P i−)}wi=1 ∈ P do 8: loss(count) = minimize: ã,b̃ D∑ j=1 ∑ i:j∈P i+ `(si(ã i · xj + b̃i), yj) subject to: ã i · xj + b̃i ≤ 0 ∀j ∈ P i− ãi · xj + b̃i ≥ 0 ∀j ∈ P i+ 9: count++ 10: end for 11: OPT = argminloss(count) 12: end for 13: return {ã}, {b̃}, s corresponding to OPT’s iterate 14: end function Let T1(x) = Ax + b and T2(y) = a′ · y for A ∈ Rw×n and b, a′ ∈ Rw. If we denote the i-th row of the matrix A by ai, and write bi, a′i to denote the i-th coordinates of the vectors b, a ′ respectively, due to homogeneity of ReLU gates, the network output can be represented as f(x) = w∑ i=1 a′i max{0, ai · x+ bi} = w∑ i=1 si max{0, ãi · x+ b̃i}. where ãi ∈ Rn, b̃i ∈ R and si ∈ {−1,+1} for all i = 1, . . . , w. For any hidden node i ∈ {1 . . . , w}, the pair (ãi, b̃i) induces a partition Pi := (P i+, P i−) on the dataset, given by P i− = {j : ãi · xj + b̃i ≤ 0} and P i+ = {1, . . . , D}\P i−. Algorithm 1 proceeds by generating all combinations of the partitions Pi as well as the top layer weights s ∈ {+1,−1}w, and minimizing the loss∑D j=1 ∑ i:j∈P i+ `(si(ã i · xj + b̃i), yj) subject to the constraints ãi · xj + b̃i ≤ 0 ∀j ∈ P i− and ãi · xj + b̃i ≥ 0 ∀j ∈ P i+ which are imposed for all i = 1, . . . , w, which is a convex program. Algorithm 1 implements the empirical risk minimization (ERM) rule for training ReLU DNN with one hidden layer. To the best of our knowledge there is no other known algorithm that solves the ERM problem to global optimality. We note that due to known hardness results exponential dependence on the input dimension is unavoidable Blum & Rivest (1992); Shalev-Shwartz & BenDavid (2014); Algorithm 1 runs in time polynomial in the number of data points. 
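The structure of Algorithm 1 — enumerate top-layer signs and data partitions, then solve a linearly constrained convex problem in each cell — is easiest to see in a toy special case. The sketch below fixes w = 1 hidden unit, n = 1 input dimension and squared loss, where the partitions induced by a halfspace are simply the prefixes and suffixes of the sorted inputs; it uses scipy's SLSQP solver for the constrained subproblems. It is only meant to illustrate the idea, not the authors' implementation, and it replaces the general O(2^w D^{nw})-cell enumeration with the O(D) threshold enumeration that is valid only in this special case.

import numpy as np
from scipy.optimize import minimize

def erm_one_relu_1d(x, y):
    # Globally fit f(x) = s * max(0, a*x + b) to 1-D data by squared loss.
    # Combinatorial part: the sign s and the active cell P_+ (a prefix or suffix
    # of the sorted inputs). Convex part: least squares in (a, b) subject to
    # a*x_j + b >= 0 on P_+ and <= 0 on P_-.
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    D = len(xs)
    best_risk, best_params = np.inf, None
    for s in (+1.0, -1.0):
        for cut in range(D + 1):
            for tail_active in (True, False):
                act = np.zeros(D, dtype=bool)
                act[cut:] = True
                if not tail_active:
                    act = ~act

                def risk(p, act=act):
                    a, b = p
                    pred = np.where(act, s * (a * xs + b), 0.0)  # the unit outputs 0 on P_-
                    return np.sum((pred - ys) ** 2)

                def feasibility(p, act=act):
                    margins = p[0] * xs + p[1]
                    return np.concatenate([margins[act], -margins[~act]])  # all must be >= 0

                res = minimize(risk, x0=np.array([1.0, 0.0]), method='SLSQP',
                               constraints=[{'type': 'ineq', 'fun': feasibility}])
                if res.success and res.fun < best_risk:
                    best_risk, best_params = res.fun, (s, *res.x)
    return best_risk, best_params

# usage on a tiny synthetic hinge-shaped dataset
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=30)
y = np.maximum(0.0, 1.5 * x - 0.5) + 0.05 * rng.normal(size=30)
risk, (s, a, b) = erm_one_relu_1d(x, y)

Within each cell the objective is convex and the constraints are linear, so the cell's subproblem is solved exactly (up to numerical tolerance); global optimality then follows from taking the best cell, which is the same argument as in the general algorithm.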
To the best of our knowledge there is no hardness result known which rules out empirical risk minimization of deep nets in time polynomial in circuit size or data size. Thus our training result is a step towards resolving this gap in the complexity literature. A related result for improperly learning ReLUs has been recently obtained by Goel et al (Goel et al., 2016). In contrast, our algorithm returns a ReLU DNN from the class being learned. Another difference is that their result considers the notion of reliable learning as opposed to the empirical risk minimization objective considered in (4.1). 5 DISCUSSION The running time of the algorithm that we give in this work to find the exact global minima of a two layer ReLU-DNN is exponential in the input dimension n and the number of hidden nodes w. The exponential dependence on n can not be removed unless P = NP ; see Shalev-Shwartz & Ben-David (2014); Blum & Rivest (1992); DasGupta et al. (1995). However, we are not aware of any complexity results which would rule out the possibility of an algorithm which trains to global optimality in time that is polynomial in the data size and/or the number of hidden nodes, assuming that the input dimension is a fixed constant. Resolving this dependence on network size would be another step towards clarifying the theoretical complexity of training ReLU DNNs and is a good open question for future research, in our opinion. Perhaps an even better breakthrough would be to get optimal training algorithms for DNNs with two or more hidden layers and this seems like a substantially harder nut to crack. It would also be a significant breakthrough to get gap results between consecutive constant depths or between logarithmic and constant depths. ACKNOWLEDGMENTS We would like to thank Christian Tjandraatmadja for pointing out a subtle error in a previous version of the paper, which affected the complexity results for the number of linear regions in our constructions in Section 3.2. Anirbit would like to thank Ramprasad Saptharishi, Piyush Srivastava and Rohit Gurjar for extensive discussions on Boolean and arithmetic circuit complexity. This paper has been immensely influenced by the perspectives gained during those extremely helpful discussions. Amitabh Basu gratefully acknowledges support from the NSF grant CMMI1452820. Raman Arora was supported in part by NSF BIGDATA grant IIS-1546482. A EXPRESSING PIECEWISE LINEAR FUNCTIONS USING RELU DNNS Proof of Theorem 2.2. Any continuous piecewise linear function R→ R which hasm pieces can be specified by three pieces of information, (1) sL the slope of the left most piece, (2) the coordinates of the non-differentiable points specified by a (m − 1)−tuple {(ai, bi)}m−1i=1 (indexed from left to right) and (3) sR the slope of the rightmost piece. A tuple (sL, sR, (a1, b1), . . . , (am−1, bm−1) uniquely specifies a m piecewise linear function from R → R and vice versa. Given such a tuple, we construct a 2-layer DNN which computes the same piecewise linear function. One notes that for any a, r ∈ R, the function f(x) = { 0 x ≤ a r(x− a) x > a (A.1) is equal to sgn(r) max{|r|(x−a), 0}, which can be implemented by a 2-layer ReLU DNN with size 1. Similarly, any function of the form, g(x) = { t(x− a) x ≤ a 0 x > a (A.2) is equal to − sgn(t) max{−|t|(x − a), 0}, which can be implemented by a 2-layer ReLU DNN with size 1. 
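As a forward pointer to the decomposition worked out in the remainder of this proof below: each flap of the form (A.1) or (A.2) is a single scaled and shifted ReLU, and once the slopes r, t_1, ..., t_{m−1} are chosen so that the sum matches the outer slopes and the breakpoint values, the whole m-piece function is a two-layer network of size m. The numpy sketch that follows sets up those matching conditions as a small (triangular) linear system, equivalent to the back-substitution described in the proof, and checks the reconstruction on an example; the function names and the 4-piece example are illustrative, not from the paper.

import numpy as np

def flap_slopes(a, v, sL, sR):
    # Breakpoints a[0] < ... < a[m-2], values v[i] = h(a[i]) with v[-1] already
    # shifted to 0, left slope sL, right slope sR. Returns the right-flap slope r
    # and the left-flap slopes t[j] by solving the value/slope matching conditions.
    m1 = len(a)                        # m - 1 breakpoints -> m - 1 left flaps
    A = np.zeros((m1, m1))
    rhs = np.zeros(m1)
    A[0, :] = 1.0                      # sum of left-flap slopes = slope left of a[0]
    rhs[0] = sL
    for i in range(m1 - 1):            # value of the sum of flaps at breakpoint a[i]
        A[1 + i, i + 1:] = a[i] - a[i + 1:]
        rhs[1 + i] = v[i]
    return sR, np.linalg.solve(A, rhs)

def eval_flaps(x, a, r, t):
    # h(x) = r*max(0, x - a[-1]) + sum_j t[j]*min(0, x - a[j]); each term is one ReLU.
    x = np.asarray(x, dtype=float)
    left = (t * np.minimum(0.0, x[:, None] - a)).sum(axis=1)
    return left + r * np.maximum(0.0, x - a[-1])

# 4-piece example: breakpoints 0, 1, 2 with values 1, -0.5, 0 and outer slopes -2, 3
a = np.array([0.0, 1.0, 2.0]); v = np.array([1.0, -0.5, 0.0]); sL, sR = -2.0, 3.0
r, t = flap_slopes(a, v, sL, sR)
xs = np.linspace(-1.5, 3.5, 21)
direct = np.interp(xs, a, v)                                    # inside [a[0], a[-1]]
direct = np.where(xs < a[0], v[0] + sL * (xs - a[0]), direct)   # left outer piece
direct = np.where(xs > a[-1], v[-1] + sR * (xs - a[-1]), direct)
assert np.allclose(eval_flaps(xs, a, r, t), direct)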
The parameters r, t will be called the slopes of the function, and a will be called the breakpoint of the function.If we can write the given piecewise linear function as a sum of m functions of the form (A.1) and (A.2), then by Lemma D.2 we would be done. It turns out that such a decomposition of any p piece PWL function h : R → R as a sum of p flaps can always be arranged where the breakpoints of the p flaps all are all contained in the p − 1 breakpoints of h. First, observe that adding a constant to a function does not change the complexity of the ReLU DNN expressing it, since this corresponds to a bias on the output node. Thus, we will assume that the value of h at the last break point am−1 is bm−1 = 0. We now use a single function f of the form (A.1) with slope r and breakpoint a = am−1, and m − 1 functions g1, . . . , gm−1 of the form (A.2) with slopes t1, . . . , tm−1 and breakpoints a1, . . . , am−1, respectively. Thus, we wish to express h = f + g1 + . . . + gm−1. Such a decomposition of h would be valid if we can find values for r, t1, . . . , tm−1 such that (1) the slope of the above sum is = sL for x < a1, (2) the slope of the above sum is = sR for x > am−1, and (3) for each i ∈ {1, 2, 3, ..,m − 1} we have bi = f(ai) + g1(ai) + . . .+ gm−1(ai). The above corresponds to asking for the existence of a solution to the following set of simultaneous linear equations in r, t1, . . . , tm−1: sR = r, sL = t1 + t2 + . . .+ tm−1, bi = m−1∑ j=i+1 tj(aj−1 − aj) for all i = 1, . . . ,m− 2 It is easy to verify that the above set of simultaneous linear equations has a unique solution. Indeed, r must equal sR, and then one can solve for t1, . . . , tm−1 starting from the last equation bm−2 = tm−1(am−2 − am−1) and then back substitute to compute tm−2, tm−3, . . . , t1. The lower bound of p − 1 on the size for any 2-layer ReLU DNN that expresses a p piece function follows from Lemma D.6. One can do better in terms of size when the rightmost piece of the given function is flat, i.e., sR = 0. In this case r = 0, which means that f = 0; thus, the decomposition of h above is of size p − 1. A similar construction can be done when sL = 0. This gives the following statement which will be useful for constructing our forthcoming hard functions. Corollary A.1. If the rightmost or leftmost piece of a R→ R piecewise linear function has 0 slope, then we can compute such a p piece function using a 2-layer DNN with size p− 1. Proof of theorem 2.3. Since any piecewise linear function Rn → R is representable by a ReLU DNN by Corollary 2.1, the proof simply follows from the fact that the family of continuous piecewise linear functions is dense in any Lp(Rn) space, for 1 ≤ p ≤ ∞. B BENEFITS OF DEPTH B.1 CONSTRUCTING A CONTINUUM OF HARD FUNCTIONS FOR R→ R RELU DNNS AT EVERY DEPTH AND EVERY WIDTH Lemma B.1. For any M > 0, p ∈ N, k ∈ N and a1, . . . ,ak ∈ ∆pM , if we compose the functions ha1 , ha2 , . . . , hak the resulting function is a piecewise linear function with at most (p + 1)k + 2 pieces, i.e., Ha1,...,ak := hak ◦ hak−1 ◦ . . . ◦ ha1 is piecewise linear with at most (p+1)k+2 pieces, with (p+1)k of these pieces in the range [0,M ] (see Figure 2). Moreover, in each piece in the range [0,M ], the function is affine with minimum value 0 and maximum value M . Proof. Simple induction on k. Proof of Theorem 3.2. Given k ≥ 1 and w ≥ 2, choose any point (a1, . . . ,ak) ∈ ⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times . By Definition 8, each hai , i = 1, . . . 
, k is a piecewise linear function with w + 1 pieces and the leftmost piece having slope 0. Thus, by Corollary A.1, each hai , i = 1, . . . , k can be represented by a 2-layer ReLU DNN with size w. Using Lemma D.1, Ha1,...,ak can be represented by a k+ 1 layer DNN with size wk; in fact, each hidden layer has exactly w nodes. Proof of Theorem 3.1. Follows from Theorem 3.2 and Lemma D.6. Proof of Theorem 3.5. Given k ≥ 1 and w ≥ 2 define q := wk and sq := ha ◦ ha ◦ . . . ◦ ha︸ ︷︷ ︸ k times where a = ( 1w , 2 w , . . . , w−1 w ) ∈ ∆ q−1 1 . Thus, sq is representable by a ReLU DNN of width w+1 and depth k+ 1 by Lemma D.1. In what follows, we want to give a lower bound on the `1 distance of sq from any continuous p-piecewise linear comparator gp : R → R. The function sq contains b q2c triangles of width 2q and unit height. A p-piecewise linear function has p− 1 breakpoints in the interval [0, 1]. So that in at least bwk2 c− (p− 1) triangles, gp has to be affine. In the following we demonstrate that inside any triangle of sq , any affine function will incur an `1 error of at least 12wk .∫ 2i+2 wk x= 2i wk |sq(x)− gp(x)|dx = ∫ 2 wk x=0 ∣∣∣∣∣sq(x)− (y1 + (x− 0) · y2 − y12 wk − 0 ) ∣∣∣∣∣ dx = ∫ 1 wk x=0 ∣∣∣∣xwk − y1 − wkx2 (y2 − y1) ∣∣∣∣ dx+ ∫ 2wk x= 1 wk ∣∣∣∣2− xwk − y1 − wkx2 (y2 − y1) ∣∣∣∣ dx = 1 wk ∫ 1 z=0 ∣∣∣z − y1 − z 2 (y2 − y1) ∣∣∣ dz + 1 wk ∫ 2 z=1 ∣∣∣2− z − y1 − z 2 (y2 − y1) ∣∣∣ dz = 1 wk ( −3 + y1 + 2y21 2 + y1 − y2 + y2 + 2(−2 + y1)2 2− y1 + y2 ) The above integral attains its minimum of 1 2wk at y1 = y2 = 12 . Putting together, ‖swk − gp‖1 ≥ ( bw k 2 c − (p− 1) ) · 1 2wk ≥ w k − 1− 2(p− 1) 4wk = 1 4 − 2p− 1 4wk Thus, for any δ > 0, p ≤ w k − 4wkδ + 1 2 =⇒ 2p− 1 ≤ (1 4 − δ)4wk =⇒ 1 4 − 2p− 1 4wk ≥ δ =⇒ ‖swk − gp‖1 ≥ δ. The result now follows from Lemma D.6. B.2 A CONTINUUM OF HARD FUNCTIONS FOR Rn → R FOR n ≥ 2 Proof of Lemma 3.7. By Theorem 3.6 part 3., γZ(b1,...,bm)(r) = |〈r,b1〉| + . . . + |〈r,bm〉|. It suffices to observe |〈r,b1〉|+ . . .+ |〈r,bm〉| = max{〈r,b1〉,−〈r,b1〉}+ . . .+ max{〈r,bm〉,−〈r,bm〉}. Proof of Proposition 3.8. The fact that ZONOTOPEnk,w,m[a 1, . . . ,ak,b1, . . . ,bm] can be represented by a k + 2 layer ReLU DNN with size 2m + wk follows from Lemmas 3.7 and D.1. The number of pieces follows from the fact that γZ(b1,...,bm) has ∑n−1 i=0 ( m−1 i ) distinct linear pieces by parts 1. and 2. of Theorem 3.6, and Ha1,...,ak has wk pieces by Lemma B.1. Proof of Theorem 3.9. Follows from Proposition 3.8. C EXACT EMPIRICAL RISK MINIMIZATION Proof of Theorem 4.1. Let ` : R→ R be any convex loss function, and let (x1, y1), . . . , (xD, yD) ∈ Rn × R be the given D data points. As stated in (4.1), the problem requires us to find an affine transformation T1 : Rn → Rw and a linear transformation T2 : Rw → R, so as to minimize the empirical loss as stated in (4.1). Note that T1 is given by a matrix A ∈ Rw×n and a vector b ∈ Rw so that T (x) = Ax + b for all x ∈ Rn. Similarly, T2 can be represented by a vector a′ ∈ Rw such that T2(y) = a′ · y for all y ∈ Rw. If we denote the i-th row of the matrix A by ai, and write bi, a′i to denote the i-th coordinates of the vectors b, a′ respectively, we can write the function represented by this network as f(x) = w∑ i=1 a′i max{0, ai · x+ bi} = w∑ i=1 sgn(a′i) max{0, (|a′i|ai) · x+ |a′i|bi}. In other words, the family of functions over which we are searching is of the form f(x) = w∑ i=1 si max{0, ãi · x+ b̃i} (C.1) where ãi ∈ Rn, bi ∈ R and si ∈ {−1,+1} for all i = 1, . . . , w. We now make the following observation. 
For a given data point (xj , yj) if ãi · xj + b̃i ≤ 0, then the i-th term of (C.1) does not contribute to the loss function for this data point (xj , yj). Thus, for every data point (xj , yj), there exists a set Sj ⊆ {1, . . . , w} such that f(xj) = ∑ i∈Sj si(ã i · xj + b̃i). In particular, if we are given the set Sj for (xj , yj), then the expression on the right hand side of (C.1) reduces to a linear function of ãi, b̃i. For any fixed i ∈ {1, . . . , w}, these sets Sj induce a partition of the data set into two parts. In particular, we define P i+ := {j : i ∈ Sj} and P i− := {1, . . . , D} \ P i+. Observe now that this partition is also induced by the hyperplane given by ãi, b̃i: P i+ = {j : ãi · xj + b̃i > 0} and P i+ = {j : ãi · xj + b̃i ≤ 0}. Our strategy will be to guess the partitions P i+, P i− for each i = 1, . . . , w, and then do linear regression with the constraint that regression’s decision variables ãi, b̃i induce the guessed partition. More formally, the algorithm does the following. For each i = 1, . . . , w, the algorithm guesses a partition of the data set (xj , yj), j = 1, . . . , D by a hyperplane. Let us label the partitions as follows (P i+, P i −), i = 1, . . . , w. So, for each i = 1, . . . , w, P i + ∪ P i− = {1, . . . , D}, P i+ and P i− are disjoint, and there exists a vector c ∈ Rn and a real number δ such that P i− = {j : c · xj + δ ≤ 0} and P i+ = {j : c · xj + δ > 0}. Further, for each i = 1, . . . , w the algorithm selects a vector s in {+1,−1}w. For a fixed selection of partitions (P i+, P i −), i = 1, . . . , w and a vector s in {+1,−1}w, the algorithm solves the following convex optimization problem with decision variables ãi ∈ Rn, b̃i ∈ R for i = 1, . . . , w (thus, we have a total of (n + 1) · w decision variables). The feasible region of the optimization is given by the constraints ãi · xj + b̃i ≤ 0 ∀j ∈ P i− ãi · xj + b̃i ≥ 0 ∀j ∈ P i+ (C.2) which are imposed for all i = 1, . . . , w. Thus, we have a total of D · w constraints. Subject to these constraints we minimize the objective ∑D j=1 ∑ i:j∈P i+ `(si(ã i · xj + b̃i), yj). Assuming the loss function ` is a convex function in the first argument, the above objective is a convex function. Thus, we have to minize a convex objective subject to the linear inequality constraints from (C.2). We finally have to count how many possible partitions (P i+, P i −) and vectors s the algorithm has to search through. It is well-known Matousek (2002) that the total number of possible hyperplane partitions of a set of sizeD in Rn is at most 2 ( D n ) ≤ Dn whenever n ≥ 2. Thus with a guess for each i = 1, . . . , w, we have a total of at most Dnw partitions. There are 2w vectors s in {−1,+1}w. This gives us a total of 2wDnw guesses for the partitions (P i+, P i −) and vectors s. For each such guess, we have a convex optimization problem with (n + 1) · w decision variables and D · w constraints, which can be solved in time poly(D,n,w). Putting everything together, we have the running time claimed in the statement. The above argument holds only for n ≥ 2, since we used the inequality 2 ( D n ) ≤ Dn which only holds for n ≥ 2. For n = 1, a similar algorithm can be designed, but one which uses the characterization achieved in Theorem 2.2. Let ` : R → R be any convex loss function, and let (x1, y1), . . . , (xD, yD) ∈ R2 be the given D data points. Using Theorem 2.2, to solve problem (4.1) it suffices to find a R → R piecewise linear function f with w pieces that minimizes the total loss. 
In other words, the optimization problem (4.1) is equivalent to the problem min { D∑ i=1 `(f(xi), yi) : f is piecewise linear with w pieces } . (C.3) We now use the observation that fitting piecewise linear functions to minimize loss is just a step away from linear regression, which is a special case where the function is contrained to have exactly one affine linear piece. Our algorithm will first guess the optimal partition of the data points such that all points in the same class of the partition correspond to the same affine piece of f , and then do linear regression in each class of the partition. Altenatively, one can think of this as guessing the interval (xi, xi+1) of data points where the w − 1 breakpoints of the piecewise linear function will lie, and then doing linear regression between the breakpoints. More formally, we parametrize piecewise linear functions with w pieces by the w slope-intercept values (a1, b1), . . . , (a2, b2), . . . , (aw, bw) of the w different pieces. This means that between breakpoints j and j + 1, 1 ≤ j ≤ w − 2, the function is given by f(x) = aj+1x+ bj+1, and the first and last pieces are a1x+ b1 and awx+ bw, respectively. Define I to be the set of all (w − 1)-tuples (i1, . . . , iw−1) of natural numbers such that 1 ≤ i1 ≤ . . . ≤ iw−1 ≤ D. Given a fixed tuple I = (i1, . . . , iw−1) ∈ I, we wish to search through all piecewise linear functions whose breakpoints, in order, appear in the intervals (xi1 , xi1+1), (xi2 , xi2+1), . . . , (xiw−1 , xiw−1+1). Define also S = {−1, 1}w−1. Any S ∈ S will have the following interpretation: if Sj = 1 then aj ≤ aj+1, and if Sj = −1 then aj ≥ aj+1. Now for every I ∈ I and S ∈ S, requiring a piecewise linear function that respects the conditions imposed by I and S is easily seen to be equivalent to imposing the following linear inequalities on the parameters (a1, b1), . . . , (a2, b2), . . . , (aw, bw): Sj(bj+1 − bj − (aj − aj+1)xij ) ≥ 0 Sj(bj+1 − bj − (aj − aj+1)xij+1) ≤ 0 Sj(aj+1 − aj) ≥ 0 (C.4) Let the set of piecewise linear functions whose breakpoints satisfy the above be denoted by PWL1I,S for I ∈ I, S ∈ S. Given a particular I ∈ I, we define D1 := {xi : i ≤ i1}, Dj := {xi : ij−1 < i ≤ i1} j = 2, . . . , w − 1, Dw := {xi : i > iw−1} . Observe that min{ D∑ i=1 `(f(xi)−yi) : f ∈ PWL1I,S} = min{ w∑ j=1 ( ∑ i∈Dj `(aj ·xi+bj−yi) ) : (aj , bj) satisfy (C.4)} (C.5) The right hand side of the above equation is the problem of minimizing a convex objective subject to linear constraints. Now, to solve (C.3), we need to simply solve the problem (C.5) for all I ∈ I, S ∈ S and pick the minimum. Since |I| = ( D w ) = O(Dw) and |S| = 2w−1 we need to solveO(2w ·Dw) convex optimization problems, each taking time O(poly(D)). Therefore, the total running time is O((2D)wpoly(D)). D AUXILIARY LEMMAS Now we will collect some straightforward observations that will be used often. The following operations preserve the property of being representable by a ReLU DNN. Lemma D.1. [Function Composition] If f1 : Rd → Rm is represented by a d,m ReLU DNN with depth k1 + 1 and size s1, and f2 : Rm → Rn is represented by an m,n ReLU DNN with depth k2 + 1 and size s2, then f2 ◦ f1 can be represented by a d, n ReLU DNN with depth k1 + k2 + 1 and size s1 + s2. Proof. Follows from (1.1) and the fact that a composition of affine transformations is another affine transformation. Lemma D.2. 
[Function Addition] If f1 : Rn → Rm is represented by a n,m ReLU DNN with depth k + 1 and size s1, and f2 : Rn → Rm is represented by a n,m ReLU DNN with depth k + 1 and size s2, then f1 +f2 can be represented by a n,m ReLU DNN with depth k+1 and size s1 +s2. Proof. We simply put the two ReLU DNNs in parallel and combine the appropriate coordinates of the outputs. Lemma D.3. [Taking maximums/minimums] Let f1, . . . , fm : Rn → R be functions that can each be represented by Rn → R ReLU DNNs with depths ki + 1 and size si, i = 1, . . . ,m. Then the function f : Rn → R defined as f(x) := max{f1(x), . . . , fm(x)} can be represented by a ReLU DNN of depth at most max{k1, . . . , km}+ log(m) + 1 and size at most s1 + . . . sm + 4(2m− 1). Similarly, the function g(x) := min{f1(x), . . . , fm(x)} can be represented by a ReLU DNN of depth at most max{k1, . . . , km}+ dlog(m)e+ 1 and size at most s1 + . . . sm + 4(2m− 1). Proof. We prove this by induction on m. The base case m = 1 is trivial. For m ≥ 2, consider g1 := max{f1, . . . , fbm2 c} and g2 := max{fbm2 c+1, . . . , fm}. By the induction hypothesis (since bm2 c, dm2 e < m when m ≥ 2), g1 and g2 can be represented by ReLU DNNs of depths at most max{k1, . . . , kbm2 c}+ dlog(b m 2 c)e+ 1 and max{kbm2 c+1, . . . , km}+ dlog(d m 2 e)e+ 1 respectively, and sizes at most s1 + . . . sbm2 c+4(2b m 2 c−1) and sbm2 c+1 + . . .+sm+4(2b m 2 c−1), respectively. Therefore, the function G : Rn → R2 given by G(x) = (g1(x), g2(x)) can be implemented by a ReLU DNN with depth at most max{k1, . . . , km} + dlog(dm2 e)e + 1 and size at most s1 + . . . + sm + 4(2m− 2). We now show how to represent the function T : R2 → R defined as T (x, y) = max{x, y} = x+y 2 + |x−y| 2 by a 2-layer ReLU DNN with size 4 – see Figure 3. The result now follows from the fact that f = T ◦G and Lemma D.1. Lemma D.4. Any affine transformation T : Rn → Rm is representable by a 2-layer ReLU DNN of size 2m. Proof. Simply use the fact that T = (I ◦ σ ◦ T ) + (−I ◦ σ ◦ (−T )), and the right hand side can be represented by a 2-layer ReLU DNN of size 2m using Lemma D.2. Lemma D.5. Let f : R → R be a function represented by a R → R ReLU DNN with depth k + 1 and widths w1, . . . , wk of the k hidden layers. Then f is a PWL function with at most 2k−1 · (w1 + 1) · w2 · . . . · wk pieces. Proof. We prove this by induction on k. The base case is k = 1, i.e, we have a 2-layer ReLU DNN. Since every activation node can produce at most one breakpoint in the piecewise linear function, we can get at most w1 breakpoints, i.e., w1 + 1 pieces. Now for the induction step, assume that for some k ≥ 1, any R→ R ReLU DNN with depth k + 1 and widths w1, . . . , wk of the k hidden layers produces at most 2k−1 · (w1 + 1) ·w2 · . . . ·wk pieces. Consider any R → R ReLU DNN with depth k + 2 and widths w1, . . . , wk+1 of the k + 1 hidden layers. Observe that the input to any node in the last layer is the output of a R → R ReLU DNN with depth k + 1 and widths w1, . . . , wk. By the induction hypothesis, the input to this node in the last layer is a piecewise linear function f with at most 2k−1 · (w1 +1) ·w2 · . . . ·wk pieces. When we apply the activation, the new function g(x) = max{0, f(x)}, which is the output of this node, may have at most twice the number of pieces as f , because each original piece may be intersected by the x-axis; see Figure 4. Thus, after going through the layer, we take an affine combination of wk+1 functions, each with at most 2 · (2k−1 · (w1 + 1) ·w2 · . . . ·wk) pieces. 
In all, we can therefore get at most $2 \cdot \left(2^{k-1}(w_1+1) w_2 \cdots w_k\right) \cdot w_{k+1}$ pieces, which is equal to $2^{k}(w_1+1) w_2 \cdots w_k \, w_{k+1}$, and the induction step is completed. Lemma D.5 has the following consequence about the depth and size tradeoffs for expressing functions with a given number of pieces. Lemma D.6. Let $f : \mathbb{R} \to \mathbb{R}$ be a piecewise linear function with $p$ pieces. If $f$ is represented by a ReLU DNN with depth $k+1$, then it must have size at least $\frac{1}{2} k p^{1/k} - 1$. Conversely, any piecewise linear function $f$ that is represented by a ReLU DNN of depth $k+1$ and size at most $s$ can have at most $\left(\frac{2s}{k}\right)^k$ pieces. Proof. Let the widths of the $k$ hidden layers be $w_1, \ldots, w_k$. By Lemma D.5, we must have
$$2^{k-1} \cdot (w_1+1) \cdot w_2 \cdots w_k \ge p. \qquad (D.1)$$
By the AM-GM inequality, minimizing the size $w_1 + w_2 + \ldots + w_k$ subject to (D.1) means setting $w_1 + 1 = w_2 = \ldots = w_k$. This implies that $w_1 + 1 = w_2 = \ldots = w_k \ge \frac{1}{2} p^{1/k}$. The first statement follows. The second statement follows using the AM-GM inequality again, this time with a restriction on $w_1 + w_2 + \ldots + w_k$.
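As a quick sanity check on these counting bounds, the sketch below (illustrative only; the helpers make_sawtooth and count_pieces and the grid-based piece counter are our own constructions) builds the k-fold sawtooth composition from Section 3.1, counts its affine pieces on [0, 1] numerically, and compares the count with the predicted w^k and with the Lemma D.5 ceiling 2^(k-1)(w+1)w^(k-1) for a width-w, depth-(k+1) net.

```python
import numpy as np

def make_sawtooth(w, M=1.0):
    """h_a from Definition 8 with a = (1/w, ..., (w-1)/w): w affine pieces on [0, M]."""
    knots = np.linspace(0.0, M, w + 1)
    vals = np.array([M * (i % 2) for i in range(w)] + [0.0])
    vals[-1] = M - vals[-2]                        # h_a(M) = M - h_a(a_p)
    def h(x):
        # Inputs are clamped to [0, M]; only the behaviour on [0, M] matters for the counts below.
        return np.interp(np.clip(x, 0.0, M), knots, vals)
    return h

def count_pieces(f, lo, hi, grid, tol=1e-6):
    """Count maximal affine pieces of f on [lo, hi] by detecting slope changes on a fine grid."""
    x = np.linspace(lo, hi, grid)
    slopes = np.diff(f(x)) / np.diff(x)
    return int(np.sum(np.abs(np.diff(slopes)) > tol)) + 1

w, k = 3, 3
h = make_sawtooth(w)

def H(x):                                          # k-fold composition h o h o ... o h
    y = x
    for _ in range(k):
        y = h(y)
    return y

# Breakpoints of H on [0, 1] sit at multiples of w**(-k); the grid is chosen so they land on grid points.
pieces = count_pieces(H, 0.0, 1.0, grid=w ** k * 100 + 1)
d5_bound = 2 ** (k - 1) * (w + 1) * w ** (k - 1)   # Lemma D.5 with w_1 = ... = w_k = w
print(f"counted {pieces} pieces; construction predicts {w ** k}; Lemma D.5 allows at most {d5_bound}")
```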
1. What are the main contributions of the paper regarding the expressiveness and learnability of ReLU-activated deep neural networks?
2. What are the strengths and weaknesses of the paper compared to prior works in the field?
3. Do you have any concerns or questions about the assumptions and limitations of the paper's theoretical analysis?
4. How does the reviewer assess the significance and practicality of the paper's findings for real-world applications?
Review
Review
This paper presents several theoretical results regarding the expressiveness and learnability of ReLU-activated deep neural networks. I summarize the main results as follows:
(1) Any piecewise linear function can be represented by a ReLU-activated DNN. Any smooth function can be approximated by such networks.
(2) The expressiveness of a 3-layer DNN is stronger than that of any 2-layer DNN.
(3) Using a polynomial number of neurons, a ReLU-activated DNN can represent a piecewise linear function with exponentially many pieces.
(4) A ReLU-activated DNN can be trained to a global optimum with an exponential-time algorithm.
Among these results, (1), (2), (4) are sort of known in the literature. This paper extends the existing results in some subtle ways. For (1), the authors show that the DNN has a tighter bound on the depth. For (2), the "hard" functions have a better parameterization, and the gap between 3-layer and 2-layer is proved to be bigger. For (4), although the algorithm is exponential-time, it guarantees to compute the global optimum. The stronger results of (1), (2), (4) all rely on the specific piecewise linear nature of ReLU. Other than that, I don't get much more insight from the theoretical results. When the input dimension is n, the representability result of (1) fails to show that a polynomial number of neurons is sufficient. Perhaps an exponential number of neurons is necessary in the worst case, but it would be more interesting if the authors showed that under certain conditions a polynomial-size network is good enough. Result (3) is more interesting as it is a new result. The authors present a constructive proof to show that a ReLU-activated DNN can represent many linear pieces. However, the construction seems artificial and these functions don't seem to be visually very complex. Overall, this is an incremental work in the direction of studying the representation power of neural networks. The results might be of theoretical interest, but I doubt that a pragmatic ReLU network user will learn anything by reading this paper.
ICLR
Title Understanding Deep Neural Networks with Rectified Linear Units Abstract In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to global optimality with runtime polynomial in the data size albeit exponential in the input dimension. Further, we improve on the known lower bounds on size (from exponential to super exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of “hard” functions, contrary to countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number k there exists a function representable by a ReLU DNN with k hidden layers and total size k, such that any ReLU DNN with at most k hidden layers will require at least 1 2k k+1− 1 total nodes. Finally, for the family of R → R DNNs with ReLU activations, we show a new lowerbound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and most distinctively our lowerbound is demonstrated by an explicit construction of a smoothly parameterized family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory. N/A k+1− 1 total nodes. Finally, for the family of Rn → R DNNs with ReLU activations, we show a new lowerbound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and most distinctively our lowerbound is demonstrated by an explicit construction of a smoothly parameterized family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory. 1 INTRODUCTION Deep neural networks (DNNs) provide an excellent family of hypotheses for machine learning tasks such as classification. Neural networks with a single hidden layer of finite size can represent any continuous function on a compact subset of Rn arbitrary well. The universal approximation result was first given by Cybenko in 1989 for sigmoidal activation function (Cybenko, 1989), and later generalized by Hornik to an arbitrary bounded and nonconstant activation function Hornik (1991). Furthermore, neural networks have finite VC dimension (depending polynomially on the number of edges in the network), and therefore, are PAC (probably approximately correct) learnable using a sample of size that is polynomial in the size of the networks Anthony & Bartlett (1999). However, neural networks based methods were shown to be computationally hard to learn (Anthony & Bartlett, 1999) and had mixed empirical success. Consequently, DNNs fell out of favor by late 90s. Recently, there has been a resurgence of DNNs with the advent of deep learning LeCun et al. (2015). Deep learning, loosely speaking, refers to a suite of computational techniques that have been developed recently for training DNNs. It started with the work of Hinton et al. (2006), which gave empirical evidence that if DNNs are initialized properly (for instance, using unsupervised pre-training), then we can find good solutions in a reasonable amount of runtime. This work was soon followed by a series of early successes of deep learning at significantly improving the state-of-the-art in speech recognition Hinton et al. (2012). 
Since then, deep learning has received immense attention from the machine learning community with several state-of-the-art AI systems in speech recognition, image classification, and natural language processing based on deep neural nets Hinton et al. (2012); Dahl et al. (2013); Krizhevsky et al. (2012); Le (2013); Sutskever et al. (2014). While there is less of evidence now that pre-training actually helps, several other solutions have since been put forth ∗Department of Computer Science, Email: arora@cs.jhu.edu †Department of Applied Mathematics and Statistics, Email: basu.amitabh@jhu.edu ‡Department of Computer Science, Email: mianjy@jhu.edu §Department of Applied Mathematics and Statistics, Email: amukhe14@jhu.edu to address the issue of efficiently training DNNs. These include heuristics such as dropouts Srivastava et al. (2014), but also considering alternate deep architectures such as convolutional neural networks Sermanet et al. (2014), deep belief networks Hinton et al. (2006), and deep Boltzmann machines Salakhutdinov & Hinton (2009). In addition, deep architectures based on new non-saturating activation functions have been suggested to be more effectively trainable – the most successful and widely popular of these is the rectified linear unit (ReLU) activation, i.e., σ(x) = max{0, x}, which is the focus of study in this paper. In this paper, we formally study deep neural networks with rectified linear units; we refer to these deep architectures as ReLU DNNs. Our work is inspired by these recent attempts to understand the reason behind the successes of deep learning, both in terms of the structure of the functions represented by DNNs, Telgarsky (2015; 2016); Kane & Williams (2015); Shamir (2016), as well as efforts which have tried to understand the non-convex nature of the training problem of DNNs better Kawaguchi (2016); Haeffele & Vidal (2015). Our investigation of the function space represented by ReLU DNNs also takes inspiration from the classical theory of circuit complexity; we refer the reader to Arora & Barak (2009); Shpilka & Yehudayoff (2010); Jukna (2012); Saptharishi (2014); Allender (1998) for various surveys of this deep and fascinating field. In particular, our gap results are inspired by results like the ones by Hastad Hastad (1986), Razborov Razborov (1987) and Smolensky Smolensky (1987) which show a strict separation of complexity classes. We make progress towards similar statements with deep neural nets with ReLU activation. 1.1 NOTATION AND DEFINITIONS We extend the ReLU activation function to vectors x ∈ Rn through entry-wise operation: σ(x) = (max{0, x1},max{0, x2}, . . . ,max{0, xn}). For any (m,n) ∈ N, let Anm and Lnm denote the class of affine and linear transformations from Rm → Rn, respectively. Definition 1. [ReLU DNNs, depth, width, size] For any number of hidden layers k ∈ N, input and output dimensions w0, wk+1 ∈ N, a Rw0 → Rwk+1 ReLU DNN is given by specifying a sequence of k natural numbers w1, w2, . . . , wk representing widths of the hidden layers, a set of k affine transformations Ti : Rwi−1 → Rwi for i = 1, . . . , k and a linear transformation Tk+1 : Rwk → Rwk+1 corresponding to weights of the hidden layers. Such a ReLU DNN is called a (k + 1)-layer ReLU DNN, and is said to have k hidden layers. The function f : Rn1 → Rn2 computed or represented by this ReLU DNN is f = Tk+1 ◦ σ ◦ Tk ◦ · · · ◦ T2 ◦ σ ◦ T1, (1.1) where ◦ denotes function composition. The depth of a ReLU DNN is defined as k + 1. The width of a ReLU DNN is max{w1, . . . , wk}. 
The size of the ReLU DNN is w1 + w2 + . . .+ wk. Definition 2. We denote the class of Rw0 → Rwk+1 ReLU DNNs with k hidden layers of widths {wi}ki=1 by F{wi}k+1i=0 , i.e. F{wi}k+1i=0 := {Tk+1 ◦ σ ◦ Tk ◦ · · · ◦ σ ◦ T1 : Ti ∈ A wi wi−1∀i ∈ {1, . . . , k}, Tk+1 ∈ Lwk+1wk } (1.2) Definition 3. [Piecewise linear functions] We say a function f : Rn → R is continuous piecewise linear (PWL) if there exists a finite set of polyhedra whose union is Rn, and f is affine linear over each polyhedron (note that the definition automatically implies continuity of the function because the affine regions are closed and cover Rn, and affine functions are continuous). The number of pieces of f is the number of maximal connected subsets of Rn over which f is affine linear (which is finite). Many of our important statements will be phrased in terms of the following simplex. Definition 4. Let M > 0 be any positive real number and p ≥ 1 be any natural number. Define the following set: ∆pM := {x ∈ Rp : 0 < x1 < x2 < . . . < xp < M}. 2 EXACT CHARACTERIZATION OF FUNCTION CLASS REPRESENTED BY RELU DNNS One of the main advantages of DNNs is that they can represent a large family of functions with a relatively small number of parameters. In this section, we give an exact characterization of the functions representable by ReLU DNNs. Moreover, we show how structural properties of ReLU DNNs, specifically their depth and width, affects their expressive power. It is clear from definition that any function from Rn → R represented by a ReLU DNN is a continuous piecewise linear (PWL) function. In what follows, we show that the converse is also true, that is any PWL function is representable by a ReLU DNN. In particular, the following theorem establishes a one-to-one correspondence between the class of ReLU DNNs and PWL functions. Theorem 2.1. Every Rn → R ReLU DNN represents a piecewise linear function, and every piecewise linear function Rn → R can be represented by a ReLU DNN with at most dlog2(n + 1)e + 1 depth. Proof Sketch: It is clear that any function represented by a ReLU DNN is a PWL function. To see the converse, we first note that any PWL function can be represented as a linear combination of piecewise linear convex functions. More formally, by Theorem 1 in (Wang & Sun, 2005), for every piecewise linear function f : Rn → R, there exists a finite set of affine linear functions `1, . . . , `k and subsets S1, . . . , Sp ⊆ {1, . . . , k} (not necessarily disjoint) where each Si is of cardinality at most n+ 1, such that f = p∑ j=1 sj ( max i∈Sj `i ) , (2.1) where sj ∈ {−1,+1} for all j = 1, . . . , p. Since a function of the form maxi∈Sj `i is a piecewise linear convex function with at most n + 1 pieces (because |Sj | ≤ n + 1), Equation (2.1) says that any continuous piecewise linear function (not necessarily convex) can be obtained as a linear combination of piecewise linear convex functions each of which has at most n + 1 affine pieces. Furthermore, Lemmas D.1, D.2 and D.3 in the Appendix (see supplementary material), show that composition, addition, and pointwise maximum of PWL functions are also representable by ReLU DNNs. In particular, in Lemma D.3 we note that max{x, y} = x+y2 + |x−y| 2 is implementable by a two layer ReLU network and use this construction in an inductive manner to show that maximum of n+ 1 numbers can be computed using a ReLU DNN with depth at most dlog2(n+ 1)e. 
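A quick numerical check of the two-layer max gadget used in this proof sketch (our own snippet, with invented helper names, not the paper's code): max{x, y} = (x + y)/2 + |x - y|/2 can be written with four ReLU units in one hidden layer, and stacking the gadget in a balanced tree computes the maximum of several numbers in roughly log2 of that many layers.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def max_via_relu(x, y):
    """max{x, y} = (x + y)/2 + |x - y|/2, realised by one hidden layer of 4 ReLU units."""
    h = np.stack([relu(x + y), relu(-x - y), relu(x - y), relu(y - x)])
    return 0.5 * (h[0] - h[1] + h[2] + h[3])

rng = np.random.default_rng(0)
x, y = rng.normal(size=1000), rng.normal(size=1000)
print(np.allclose(max_via_relu(x, y), np.maximum(x, y)))    # True

def max_tree(rows):
    """Maximum of several arrays via a balanced tree of the 2-input gadget (~log2 depth)."""
    rows = list(rows)
    while len(rows) > 1:
        nxt = [max_via_relu(rows[i], rows[i + 1]) for i in range(0, len(rows) - 1, 2)]
        if len(rows) % 2 == 1:
            nxt.append(rows[-1])
        rows = nxt
    return rows[0]

vals = rng.normal(size=(5, 1000))                            # e.g. 5 affine pieces evaluated pointwise
print(np.allclose(max_tree(vals), vals.max(axis=0)))         # True
```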
While Theorem 2.1 gives an upper bound on the depth of the networks needed to represent all continuous piecewise linear functions on Rn, it does not give any tight bounds on the size of the networks that are needed to represent a given piecewise linear function. For n = 1, we give tight bounds on size as follows: Theorem 2.2. Given any piecewise linear function R→ R with p pieces there exists a 2-layer DNN with at most p nodes that can represent f . Moreover, any 2-layer DNN that represents f has size at least p− 1. Finally, the main result of this section follows from Theorem 2.1, and well-known facts that the piecewise linear functions are dense in the family of compactly supported continuous functions and the family of compactly supported continuous functions are dense inLq(Rn) (Royden & Fitzpatrick, 2010)). Recall that Lq(Rn) is the space of Lebesgue integrable functions f such that ∫ |f |qdµ <∞, where µ is the Lebesgue measure on Rn (see Royden Royden & Fitzpatrick (2010)). Theorem 2.3. Every function in Lq(Rn), (1 ≤ q ≤ ∞) can be arbitrarily well-approximated in the Lq norm (which for a function f is given by ||f ||q = ( ∫ |f |q)1/q) by a ReLU DNN function with at most dlog2(n + 1)e hidden layers. Moreover, for n = 1, any such Lq function can be arbitrarily well-approximated by a 2-layer DNN, with tight bounds on the size of such a DNN in terms of the approximation. Proofs of Theorems 2.2 and 2.3 are provided in Appendix A. We would like to remark that a weaker version of Theorem 2.1 was observed in (Goodfellow et al., 2013, Proposition 4.1) (with no bound on the depth), along with a universal approximation theorem (Goodfellow et al., 2013, Theorem 4.3) similar to Theorem 2.3. The authors of Goodfellow et al. (2013) also used a previous result of Wang (Wang, 2004) for obtaining their result. In a subsequent work Boris Hanin (Hanin, 2017) has, among other things, found a width and depth upper bound for ReLU net representation of positive PWL functions on [0, 1]n. The width upperbound is n+3 for general positive PWL functions and n + 1 for convex positive PWL functions. For convex positive PWL functions his depth upper bound is sharp if we disallow dead ReLUs. 3 BENEFITS OF DEPTH Success of deep learning has been largely attributed to the depth of the networks, i.e. number of successive affine transformations followed by nonlinearities, which is shown to be extracting hierarchical features from the data. In contrast, traditional machine learning frameworks including support vector machines, generalized linear models, and kernel machines can be seen as instances of shallow networks, where a linear transformation acts on a single layer of nonlinear feature extraction. In this section, we explore the importance of depth in ReLU DNNs. In particular, in Section 3.1, we provide a smoothly parametrized family of R→ R “hard” functions representable by ReLU DNNs, which requires exponentially larger size for a shallower network. Furthermore, in Section 3.2, we construct a continuum of Rn → R “hard” functions representable by ReLU DNNs, which to the best of our knowledge is the first explicit construction of ReLU DNN functions whose number of affine pieces grows exponentially with input dimension. The proofs of the theorems in this section are provided in Appendix B. 3.1 CIRCUIT LOWER BOUNDS FOR R→ R RELU DNNS In this section, we are only concerned about R → R ReLU DNNs, i.e. both input and output dimensions are equal to one. The following theorem shows the depth-size trade-off in this setting. 
Theorem 3.1. For every pair of natural numbers k ≥ 1, w ≥ 2, there exists a family of hard functions representable by a R → R (k + 1)-layer ReLU DNN of width w such that if it is also representable by a (k′ + 1)-layer ReLU DNN for any k′ ≤ k, then this (k′ + 1)-layer ReLU DNN has size at least 12k ′w k k′ − 1. In fact our family of hard functions described above has a very intricate structure as stated below. Theorem 3.2. For every k ≥ 1,w ≥ 2, every member of the family of hard functions in Theorem 3.1 has wk pieces and this family can be parametrized by⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times , (3.1) i.e., for every point in the set above, there exists a distinct function with the stated properties. The following is an immediate corollary of Theorem 3.1 by choosing the parameters carefully. Corollary 3.3. For every k ∈ N and > 0, there is a family of functions defined on the real line such that every function f from this family can be represented by a (k1+ ) + 1-layer DNN with size k2+ and if f is represented by a k+1-layer DNN, then this DNN must have size at least 12k ·kk −1. Moreover, this family can be parametrized as, ∪M>0∆k 2+ −1 M . A particularly illuminative special case is obtained by setting = 1 in Corollary 3.3: Corollary 3.4. For every natural number k ∈ N, there is a family of functions parameterized by the set ∪M>0∆k 3−1 M such that any f from this family can be represented by a k 2 + 1-layer DNN with k3 nodes, and every k + 1-layer DNN that represents f needs at least 12k k+1 − 1 nodes. We can also get hardness of approximation versions of Theorem 3.1 and Corollaries 3.3 and 3.4, with the same gaps (upto constant terms), using the following theorem. Theorem 3.5. For every k ≥ 1, w ≥ 2, there exists a function fk,w that can be represented by a (k + 1)-layer ReLU DNN with w nodes in each layer, such that for all δ > 0 and k′ ≤ k the following holds: inf g∈Gk′,δ ∫ 1 x=0 |fk,w(x)− g(x)|dx > δ, where Gk′,δ is the family of functions representable by ReLU DNNs with depth at most k′ + 1, and size at most k′w k/k′ (1−4δ)1/k′ 21+1/k′ . The depth-size trade-off results in Theorems 3.1, and 3.5 extend and improve Telgarsky’s theorems from (Telgarsky, 2015; 2016) in the following three ways: (i) If we use our Theorem 3.5 to the pair of neural nets considered by Telgarsky in Theorem 1.1 in Telgarsky (2016) which are at depths k3 (of size also scaling as k3) and k then for this purpose of approximation in the `1−norm we would get a size lower bound for the shallower net which scales as Ω(2k 2 ) which is exponentially (in depth) larger than the lower bound of Ω(2k) that Telgarsky can get for this scenario. (ii) Telgarsky’s family of hard functions is parameterized by a single natural number k. In contrast, we show that for every pair of natural numbers w and k, and a point from the set in equation 3.1, there exists a “hard” function which to be represented by a depth k′ network would need a size of at least w k k′ k′. With the extra flexibility of choosing the parameter w, for the purpose of showing gaps in representation ability of deep nets we can shows size lower bounds which are super-exponential in depth as explained in Corollaries 3.3 and 3.4. (iii) A characteristic feature of the “hard” functions in Boolean circuit complexity is that they are usually a countable family of functions and not a “smooth” family of hard functions. 
In fact, in the last section of Telgarsky (2015), Telgarsky states this as a “weakness” of the state-of-the-art results on “hard” functions for both Boolean circuit complexity and neural nets research. In contrast, we provide a smoothly parameterized family of “hard” functions in Section 3.1 (parametrized by the set in equation 3.1). Such a continuum of hard functions wasn’t demonstrated before this work. We point out that Telgarsky’s results in (Telgarsky, 2016) apply to deep neural nets with a host of different activation functions, whereas, our results are specifically for neural nets with rectified linear units. In this sense, Telgarsky’s results from (Telgarsky, 2016) are more general than our results in this paper, but with weaker gap guarantees. Eldan-Shamir (Shamir, 2016; Eldan & Shamir, 2016) show that there exists an Rn → R function that can be represented by a 3-layer DNN, that takes exponential in n number of nodes to be approximated to within some constant by a 2-layer DNN. While their results are not immediately comparable with Telgarsky’s or our results, it is an interesting open question to extend their results to a constant depth hierarchy statement analogous to the recent result of Rossman et al (Rossman et al., 2015). We also note that in last few years, there has been much effort in the community to show size lowerbounds on ReLU DNNs trying to approximate various classes of functions which are themselves not necessarily exactly representable by ReLU DNNs (Yarotsky, 2016; Liang & Srikant, 2016; Safran & Shamir, 2017). 3.2 A CONTINUUM OF HARD FUNCTIONS FOR Rn → R FOR n ≥ 2 One measure of complexity of a family of Rn → R “hard” functions represented by ReLU DNNs is the asymptotics of the number of pieces as a function of dimension n, depth k + 1 and size s of the ReLU DNNs. More precisely, suppose one has a family H of functions such that for every n, k, w ∈ N the family contains at least one Rn → R function representable by a ReLU DNN with depth at most k+ 1 and maximum width at most w. The following definition formalizes a notion of complexity for such aH. Definition 5 (compH(n, k, w)). The measure compH(n, k, w) is defined as the maximum number of pieces (see Definition 3) of a Rn → R function fromH that can be represented by a ReLU DNN with depth at most k + 1 and maximum width at most w. Similar measures have been studied in previous works Montufar et al. (2014); Pascanu et al. (2013); Raghu et al. (2016). The best known families H are the ones from Theorem 4 of (Montufar et al., 2014) and a mild generalization of Theorem 1.1 of (Telgarsky, 2016) to k layers of ReLU activations with width w; these constructions achieve ( b(wn )c )(k−1)n ( ∑n j=0 ( w j ) )and compH(n, k, s) = O(w k), respectively. At the end of this section we would explain the precise sense in which we improve on these numbers. An analysis of this complexity measure is done using integer programming techniques in (Serra et al., 2017). Definition 6. Let b1, . . . ,bm ∈ Rn. The zonotope formed by b1, . . . ,bm ∈ Rn is defined as Z(b1, . . . ,bm) := {λ1b1 + . . .+ λmbm : −1 ≤ λi ≤ 1, i = 1, . . . ,m}. The set of vertices of Z(b1, . . . ,bm) will be denoted by vert(Z(b1, . . . ,bm)). The support function γZ(b1,...,bm) : Rn → R associated with the zonotope Z(b1, . . . ,bm) is defined as γZ(b1,...,bm)(r) = max x∈Z(b1,...,bm) 〈r,x〉. The following results are well-known in the theory of zonotopes (Ziegler, 1995). Theorem 3.6. The following are all true. 1. | vert(Z(b1, . . . ,bm))| ≤∑n−1i=0 (m−1i ). 
The set of (b1, . . . ,bm) ∈ Rn × . . .× Rn such that this does not hold at equality is a 0 measure set. 2. γZ(b1,...,bm)(r) = maxx∈Z(b1,...,bm)〈r,x〉 = maxx∈vert(Z(b1,...,bm))〈r,x〉, and γZ(b1,...,bm) is therefore a piecewise linear function with | vert(Z(b1, . . . ,bm))| pieces. 3. γZ(b1,...,bm)(r) = |〈r,b1〉|+ . . .+ |〈r,bm〉|. Definition 7 (extremal zonotope set). The set S(n,m) will denote the set of (b1, . . . ,bm) ∈ Rn × . . . × Rn such that | vert(Z(b1, . . . ,bm))| = ∑n−1i=0 (m−1i ). S(n,m) is the so-called “extremal zonotope set”, which is a subset of Rnm, whose complement has zero Lebesgue measure in Rnm. Lemma 3.7. Given any b1, . . . ,bm ∈ Rn, there exists a 2-layer ReLU DNN with size 2m which represents the function γZ(b1,...,bm)(r). Definition 8. For p ∈ N and a ∈ ∆pM , we define a function ha : R → R which is piecewise linear over the segments (−∞, 0], [0,a1], [a1,a2], . . . , [ap,M ], [M,+∞) defined as follows: ha(x) = 0 for all x ≤ 0, ha(ai) = M(i mod 2), and ha(M) = M−ha(ap) and for x ≥M , ha(x) is a linear continuation of the piece over the interval [ap,M ]. Note that the function has p+ 2 pieces, with the leftmost piece having slope 0. Furthermore, for a1, . . . ,ak ∈ ∆pM , we denote the composition of the functions ha1 , ha2 , . . . , hak by Ha1,...,ak := hak ◦ hak−1 ◦ . . . ◦ ha1 . Proposition 3.8. Given any tuple (b1, . . . ,bm) ∈ S(n,m) and any point (a1, . . . ,ak) ∈ ⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times , the function ZONOTOPEnk,w,m[a 1, . . . ,ak,b1, . . . ,bm] := Ha1,...,ak ◦ γZ(b1,...,bm) has (m − 1)n−1wk pieces and it can be represented by a k + 2 layer ReLU DNN with size 2m+ wk. Finally, we are ready to state the main result of this section. Theorem 3.9. For every tuple of natural numbers n, k,m ≥ 1 and w ≥ 2, there exists a family of Rn → R functions, which we call ZONOTOPEnk,w,m with the following properties: (i) Every f ∈ ZONOTOPEnk,w,m is representable by a ReLU DNN of depth k + 2 and size 2m+ wk, and has (∑n−1 i=0 ( m−1 i )) wk pieces. (ii) Consider any f ∈ ZONOTOPEnk,w,m. If f is represented by a (k′ + 1)- layer DNN for any k′ ≤ k, then this (k′ + 1)-layer DNN has size at least max { 1 2 (k ′w k k′n ) · (m− 1)(1− 1n ) 1k′ − 1 , w k k′ n1/k′ k′ } . (iii) The family ZONOTOPEnk,w,m is in one-to-one correspondence with S(n,m)× ⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times . Comparison to the results in (Montufar et al., 2014) Firstly we note that the construction in (Montufar et al., 2014) requires all the hidden layers to have width at least as big as the input dimensionality n. In contrast, we do not impose such restrictions and the network size in our construction is independent of the input dimensionality. Thus our result probes networks with bottleneck architectures whose complexity cant be seen from their result. Secondly, in terms of our complexity measure, there seem to be regimes where our bound does better. One such regime, for example, is when n ≤ w < 2n and k ∈ Ω( nlog(n) ), by setting in our construction m < n. Thirdly, it is not clear to us whether the construction in (Montufar et al., 2014) gives a smoothly parameterized family of functions other than by introducing small perturbations of the construction in their paper. In contrast, we have a smoothly parameterized family which is in one-to-one correspondence with a well-understood manifold like the higher-dimensional torus. 4 TRAINING 2-LAYER Rn → R RELU DNNS TO GLOBAL OPTIMALITY In this section we consider the following empirical risk minimization problem. 
Given D data points (xi, yi) ∈ Rn × R, i = 1, . . . , D, find the function f represented by 2-layer Rn → R ReLU DNNs of width w, that minimizes the following optimization problem min f∈F{n,w,1} 1 D D∑ i=1 `(f(xi), yi) ≡ min T1∈Awn , T2∈L1w 1 D D∑ i=1 ` ( T2(σ(T1(xi))), yi ) (4.1) where ` : R × R → R is a convex loss function (common loss functions are the squared loss, `(y, y′) = (y − y′)2, and the hinge loss function given by `(y, y′) = max{0, 1 − yy′}). Our main result of this section gives an algorithm to solve the above empirical risk minimization problem to global optimality. Theorem 4.1. There exists an algorithm to find a global optimum of Problem 4.1 in time O(2w(D)nwpoly(D,n,w)). Note that the running time O(2w(D)nwpoly(D,n,w)) is polynomial in the data size D for fixed n,w. Proof Sketch: A full proof of Theorem 4.1 is included in Appendix C. Here we provide a sketch of the proof. When the empirical risk minimization problem is viewed as an optimization problem in the space of weights of the ReLU DNN, it is a nonconvex, quadratic problem. However, one can instead search over the space of functions representable by 2-layer DNNs by writing them in the form similar to (2.1). This breaks the problem into two parts: a combinatorial search and then a convex problem that is essentially linear regression with linear inequality constraints. This enables us to guarantee global optimality. Algorithm 1 Empirical Risk Minimization 1: function ERM(D) . Where D = {(xi, yi)}Di=1 ⊂ Rn × R 2: S = {+1,−1}w . All possible instantiations of top layer weights 3: Pi = {(P i+, P i−)}, i = 1, . . . , w . All possible partitions of data into two parts 4: P = P1 × P2 × · · · × Pw 5: count = 1 . Counter 6: for s ∈ S do 7: for {(P i+, P i−)}wi=1 ∈ P do 8: loss(count) = minimize: ã,b̃ D∑ j=1 ∑ i:j∈P i+ `(si(ã i · xj + b̃i), yj) subject to: ã i · xj + b̃i ≤ 0 ∀j ∈ P i− ãi · xj + b̃i ≥ 0 ∀j ∈ P i+ 9: count++ 10: end for 11: OPT = argminloss(count) 12: end for 13: return {ã}, {b̃}, s corresponding to OPT’s iterate 14: end function Let T1(x) = Ax + b and T2(y) = a′ · y for A ∈ Rw×n and b, a′ ∈ Rw. If we denote the i-th row of the matrix A by ai, and write bi, a′i to denote the i-th coordinates of the vectors b, a ′ respectively, due to homogeneity of ReLU gates, the network output can be represented as f(x) = w∑ i=1 a′i max{0, ai · x+ bi} = w∑ i=1 si max{0, ãi · x+ b̃i}. where ãi ∈ Rn, b̃i ∈ R and si ∈ {−1,+1} for all i = 1, . . . , w. For any hidden node i ∈ {1 . . . , w}, the pair (ãi, b̃i) induces a partition Pi := (P i+, P i−) on the dataset, given by P i− = {j : ãi · xj + b̃i ≤ 0} and P i+ = {1, . . . , D}\P i−. Algorithm 1 proceeds by generating all combinations of the partitions Pi as well as the top layer weights s ∈ {+1,−1}w, and minimizing the loss∑D j=1 ∑ i:j∈P i+ `(si(ã i · xj + b̃i), yj) subject to the constraints ãi · xj + b̃i ≤ 0 ∀j ∈ P i− and ãi · xj + b̃i ≥ 0 ∀j ∈ P i+ which are imposed for all i = 1, . . . , w, which is a convex program. Algorithm 1 implements the empirical risk minimization (ERM) rule for training ReLU DNN with one hidden layer. To the best of our knowledge there is no other known algorithm that solves the ERM problem to global optimality. We note that due to known hardness results exponential dependence on the input dimension is unavoidable Blum & Rivest (1992); Shalev-Shwartz & BenDavid (2014); Algorithm 1 runs in time polynomial in the number of data points. 
To the best of our knowledge there is no hardness result known which rules out empirical risk minimization of deep nets in time polynomial in circuit size or data size. Thus our training result is a step towards resolving this gap in the complexity literature. A related result for improperly learning ReLUs has been recently obtained by Goel et al (Goel et al., 2016). In contrast, our algorithm returns a ReLU DNN from the class being learned. Another difference is that their result considers the notion of reliable learning as opposed to the empirical risk minimization objective considered in (4.1). 5 DISCUSSION The running time of the algorithm that we give in this work to find the exact global minima of a two layer ReLU-DNN is exponential in the input dimension n and the number of hidden nodes w. The exponential dependence on n can not be removed unless P = NP ; see Shalev-Shwartz & Ben-David (2014); Blum & Rivest (1992); DasGupta et al. (1995). However, we are not aware of any complexity results which would rule out the possibility of an algorithm which trains to global optimality in time that is polynomial in the data size and/or the number of hidden nodes, assuming that the input dimension is a fixed constant. Resolving this dependence on network size would be another step towards clarifying the theoretical complexity of training ReLU DNNs and is a good open question for future research, in our opinion. Perhaps an even better breakthrough would be to get optimal training algorithms for DNNs with two or more hidden layers and this seems like a substantially harder nut to crack. It would also be a significant breakthrough to get gap results between consecutive constant depths or between logarithmic and constant depths. ACKNOWLEDGMENTS We would like to thank Christian Tjandraatmadja for pointing out a subtle error in a previous version of the paper, which affected the complexity results for the number of linear regions in our constructions in Section 3.2. Anirbit would like to thank Ramprasad Saptharishi, Piyush Srivastava and Rohit Gurjar for extensive discussions on Boolean and arithmetic circuit complexity. This paper has been immensely influenced by the perspectives gained during those extremely helpful discussions. Amitabh Basu gratefully acknowledges support from the NSF grant CMMI1452820. Raman Arora was supported in part by NSF BIGDATA grant IIS-1546482. A EXPRESSING PIECEWISE LINEAR FUNCTIONS USING RELU DNNS Proof of Theorem 2.2. Any continuous piecewise linear function R→ R which hasm pieces can be specified by three pieces of information, (1) sL the slope of the left most piece, (2) the coordinates of the non-differentiable points specified by a (m − 1)−tuple {(ai, bi)}m−1i=1 (indexed from left to right) and (3) sR the slope of the rightmost piece. A tuple (sL, sR, (a1, b1), . . . , (am−1, bm−1) uniquely specifies a m piecewise linear function from R → R and vice versa. Given such a tuple, we construct a 2-layer DNN which computes the same piecewise linear function. One notes that for any a, r ∈ R, the function f(x) = { 0 x ≤ a r(x− a) x > a (A.1) is equal to sgn(r) max{|r|(x−a), 0}, which can be implemented by a 2-layer ReLU DNN with size 1. Similarly, any function of the form, g(x) = { t(x− a) x ≤ a 0 x > a (A.2) is equal to − sgn(t) max{−|t|(x − a), 0}, which can be implemented by a 2-layer ReLU DNN with size 1. 
The parameters r, t will be called the slopes of the function, and a will be called the breakpoint of the function.If we can write the given piecewise linear function as a sum of m functions of the form (A.1) and (A.2), then by Lemma D.2 we would be done. It turns out that such a decomposition of any p piece PWL function h : R → R as a sum of p flaps can always be arranged where the breakpoints of the p flaps all are all contained in the p − 1 breakpoints of h. First, observe that adding a constant to a function does not change the complexity of the ReLU DNN expressing it, since this corresponds to a bias on the output node. Thus, we will assume that the value of h at the last break point am−1 is bm−1 = 0. We now use a single function f of the form (A.1) with slope r and breakpoint a = am−1, and m − 1 functions g1, . . . , gm−1 of the form (A.2) with slopes t1, . . . , tm−1 and breakpoints a1, . . . , am−1, respectively. Thus, we wish to express h = f + g1 + . . . + gm−1. Such a decomposition of h would be valid if we can find values for r, t1, . . . , tm−1 such that (1) the slope of the above sum is = sL for x < a1, (2) the slope of the above sum is = sR for x > am−1, and (3) for each i ∈ {1, 2, 3, ..,m − 1} we have bi = f(ai) + g1(ai) + . . .+ gm−1(ai). The above corresponds to asking for the existence of a solution to the following set of simultaneous linear equations in r, t1, . . . , tm−1: sR = r, sL = t1 + t2 + . . .+ tm−1, bi = m−1∑ j=i+1 tj(aj−1 − aj) for all i = 1, . . . ,m− 2 It is easy to verify that the above set of simultaneous linear equations has a unique solution. Indeed, r must equal sR, and then one can solve for t1, . . . , tm−1 starting from the last equation bm−2 = tm−1(am−2 − am−1) and then back substitute to compute tm−2, tm−3, . . . , t1. The lower bound of p − 1 on the size for any 2-layer ReLU DNN that expresses a p piece function follows from Lemma D.6. One can do better in terms of size when the rightmost piece of the given function is flat, i.e., sR = 0. In this case r = 0, which means that f = 0; thus, the decomposition of h above is of size p − 1. A similar construction can be done when sL = 0. This gives the following statement which will be useful for constructing our forthcoming hard functions. Corollary A.1. If the rightmost or leftmost piece of a R→ R piecewise linear function has 0 slope, then we can compute such a p piece function using a 2-layer DNN with size p− 1. Proof of theorem 2.3. Since any piecewise linear function Rn → R is representable by a ReLU DNN by Corollary 2.1, the proof simply follows from the fact that the family of continuous piecewise linear functions is dense in any Lp(Rn) space, for 1 ≤ p ≤ ∞. B BENEFITS OF DEPTH B.1 CONSTRUCTING A CONTINUUM OF HARD FUNCTIONS FOR R→ R RELU DNNS AT EVERY DEPTH AND EVERY WIDTH Lemma B.1. For any M > 0, p ∈ N, k ∈ N and a1, . . . ,ak ∈ ∆pM , if we compose the functions ha1 , ha2 , . . . , hak the resulting function is a piecewise linear function with at most (p + 1)k + 2 pieces, i.e., Ha1,...,ak := hak ◦ hak−1 ◦ . . . ◦ ha1 is piecewise linear with at most (p+1)k+2 pieces, with (p+1)k of these pieces in the range [0,M ] (see Figure 2). Moreover, in each piece in the range [0,M ], the function is affine with minimum value 0 and maximum value M . Proof. Simple induction on k. Proof of Theorem 3.2. Given k ≥ 1 and w ≥ 2, choose any point (a1, . . . ,ak) ∈ ⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times . By Definition 8, each hai , i = 1, . . . 
, k is a piecewise linear function with w + 1 pieces and the leftmost piece having slope 0. Thus, by Corollary A.1, each hai , i = 1, . . . , k can be represented by a 2-layer ReLU DNN with size w. Using Lemma D.1, Ha1,...,ak can be represented by a k+ 1 layer DNN with size wk; in fact, each hidden layer has exactly w nodes. Proof of Theorem 3.1. Follows from Theorem 3.2 and Lemma D.6. Proof of Theorem 3.5. Given k ≥ 1 and w ≥ 2 define q := wk and sq := ha ◦ ha ◦ . . . ◦ ha︸ ︷︷ ︸ k times where a = ( 1w , 2 w , . . . , w−1 w ) ∈ ∆ q−1 1 . Thus, sq is representable by a ReLU DNN of width w+1 and depth k+ 1 by Lemma D.1. In what follows, we want to give a lower bound on the `1 distance of sq from any continuous p-piecewise linear comparator gp : R → R. The function sq contains b q2c triangles of width 2q and unit height. A p-piecewise linear function has p− 1 breakpoints in the interval [0, 1]. So that in at least bwk2 c− (p− 1) triangles, gp has to be affine. In the following we demonstrate that inside any triangle of sq , any affine function will incur an `1 error of at least 12wk .∫ 2i+2 wk x= 2i wk |sq(x)− gp(x)|dx = ∫ 2 wk x=0 ∣∣∣∣∣sq(x)− (y1 + (x− 0) · y2 − y12 wk − 0 ) ∣∣∣∣∣ dx = ∫ 1 wk x=0 ∣∣∣∣xwk − y1 − wkx2 (y2 − y1) ∣∣∣∣ dx+ ∫ 2wk x= 1 wk ∣∣∣∣2− xwk − y1 − wkx2 (y2 − y1) ∣∣∣∣ dx = 1 wk ∫ 1 z=0 ∣∣∣z − y1 − z 2 (y2 − y1) ∣∣∣ dz + 1 wk ∫ 2 z=1 ∣∣∣2− z − y1 − z 2 (y2 − y1) ∣∣∣ dz = 1 wk ( −3 + y1 + 2y21 2 + y1 − y2 + y2 + 2(−2 + y1)2 2− y1 + y2 ) The above integral attains its minimum of 1 2wk at y1 = y2 = 12 . Putting together, ‖swk − gp‖1 ≥ ( bw k 2 c − (p− 1) ) · 1 2wk ≥ w k − 1− 2(p− 1) 4wk = 1 4 − 2p− 1 4wk Thus, for any δ > 0, p ≤ w k − 4wkδ + 1 2 =⇒ 2p− 1 ≤ (1 4 − δ)4wk =⇒ 1 4 − 2p− 1 4wk ≥ δ =⇒ ‖swk − gp‖1 ≥ δ. The result now follows from Lemma D.6. B.2 A CONTINUUM OF HARD FUNCTIONS FOR Rn → R FOR n ≥ 2 Proof of Lemma 3.7. By Theorem 3.6 part 3., γZ(b1,...,bm)(r) = |〈r,b1〉| + . . . + |〈r,bm〉|. It suffices to observe |〈r,b1〉|+ . . .+ |〈r,bm〉| = max{〈r,b1〉,−〈r,b1〉}+ . . .+ max{〈r,bm〉,−〈r,bm〉}. Proof of Proposition 3.8. The fact that ZONOTOPEnk,w,m[a 1, . . . ,ak,b1, . . . ,bm] can be represented by a k + 2 layer ReLU DNN with size 2m + wk follows from Lemmas 3.7 and D.1. The number of pieces follows from the fact that γZ(b1,...,bm) has ∑n−1 i=0 ( m−1 i ) distinct linear pieces by parts 1. and 2. of Theorem 3.6, and Ha1,...,ak has wk pieces by Lemma B.1. Proof of Theorem 3.9. Follows from Proposition 3.8. C EXACT EMPIRICAL RISK MINIMIZATION Proof of Theorem 4.1. Let ` : R→ R be any convex loss function, and let (x1, y1), . . . , (xD, yD) ∈ Rn × R be the given D data points. As stated in (4.1), the problem requires us to find an affine transformation T1 : Rn → Rw and a linear transformation T2 : Rw → R, so as to minimize the empirical loss as stated in (4.1). Note that T1 is given by a matrix A ∈ Rw×n and a vector b ∈ Rw so that T (x) = Ax + b for all x ∈ Rn. Similarly, T2 can be represented by a vector a′ ∈ Rw such that T2(y) = a′ · y for all y ∈ Rw. If we denote the i-th row of the matrix A by ai, and write bi, a′i to denote the i-th coordinates of the vectors b, a′ respectively, we can write the function represented by this network as f(x) = w∑ i=1 a′i max{0, ai · x+ bi} = w∑ i=1 sgn(a′i) max{0, (|a′i|ai) · x+ |a′i|bi}. In other words, the family of functions over which we are searching is of the form f(x) = w∑ i=1 si max{0, ãi · x+ b̃i} (C.1) where ãi ∈ Rn, bi ∈ R and si ∈ {−1,+1} for all i = 1, . . . , w. We now make the following observation. 
For a given data point (xj , yj) if ãi · xj + b̃i ≤ 0, then the i-th term of (C.1) does not contribute to the loss function for this data point (xj , yj). Thus, for every data point (xj , yj), there exists a set Sj ⊆ {1, . . . , w} such that f(xj) = ∑ i∈Sj si(ã i · xj + b̃i). In particular, if we are given the set Sj for (xj , yj), then the expression on the right hand side of (C.1) reduces to a linear function of ãi, b̃i. For any fixed i ∈ {1, . . . , w}, these sets Sj induce a partition of the data set into two parts. In particular, we define P i+ := {j : i ∈ Sj} and P i− := {1, . . . , D} \ P i+. Observe now that this partition is also induced by the hyperplane given by ãi, b̃i: P i+ = {j : ãi · xj + b̃i > 0} and P i+ = {j : ãi · xj + b̃i ≤ 0}. Our strategy will be to guess the partitions P i+, P i− for each i = 1, . . . , w, and then do linear regression with the constraint that regression’s decision variables ãi, b̃i induce the guessed partition. More formally, the algorithm does the following. For each i = 1, . . . , w, the algorithm guesses a partition of the data set (xj , yj), j = 1, . . . , D by a hyperplane. Let us label the partitions as follows (P i+, P i −), i = 1, . . . , w. So, for each i = 1, . . . , w, P i + ∪ P i− = {1, . . . , D}, P i+ and P i− are disjoint, and there exists a vector c ∈ Rn and a real number δ such that P i− = {j : c · xj + δ ≤ 0} and P i+ = {j : c · xj + δ > 0}. Further, for each i = 1, . . . , w the algorithm selects a vector s in {+1,−1}w. For a fixed selection of partitions (P i+, P i −), i = 1, . . . , w and a vector s in {+1,−1}w, the algorithm solves the following convex optimization problem with decision variables ãi ∈ Rn, b̃i ∈ R for i = 1, . . . , w (thus, we have a total of (n + 1) · w decision variables). The feasible region of the optimization is given by the constraints ãi · xj + b̃i ≤ 0 ∀j ∈ P i− ãi · xj + b̃i ≥ 0 ∀j ∈ P i+ (C.2) which are imposed for all i = 1, . . . , w. Thus, we have a total of D · w constraints. Subject to these constraints we minimize the objective ∑D j=1 ∑ i:j∈P i+ `(si(ã i · xj + b̃i), yj). Assuming the loss function ` is a convex function in the first argument, the above objective is a convex function. Thus, we have to minize a convex objective subject to the linear inequality constraints from (C.2). We finally have to count how many possible partitions (P i+, P i −) and vectors s the algorithm has to search through. It is well-known Matousek (2002) that the total number of possible hyperplane partitions of a set of sizeD in Rn is at most 2 ( D n ) ≤ Dn whenever n ≥ 2. Thus with a guess for each i = 1, . . . , w, we have a total of at most Dnw partitions. There are 2w vectors s in {−1,+1}w. This gives us a total of 2wDnw guesses for the partitions (P i+, P i −) and vectors s. For each such guess, we have a convex optimization problem with (n + 1) · w decision variables and D · w constraints, which can be solved in time poly(D,n,w). Putting everything together, we have the running time claimed in the statement. The above argument holds only for n ≥ 2, since we used the inequality 2 ( D n ) ≤ Dn which only holds for n ≥ 2. For n = 1, a similar algorithm can be designed, but one which uses the characterization achieved in Theorem 2.2. Let ` : R → R be any convex loss function, and let (x1, y1), . . . , (xD, yD) ∈ R2 be the given D data points. Using Theorem 2.2, to solve problem (4.1) it suffices to find a R → R piecewise linear function f with w pieces that minimizes the total loss. 
In other words, the optimization problem (4.1) is equivalent to the problem min { D∑ i=1 `(f(xi), yi) : f is piecewise linear with w pieces } . (C.3) We now use the observation that fitting piecewise linear functions to minimize loss is just a step away from linear regression, which is a special case where the function is contrained to have exactly one affine linear piece. Our algorithm will first guess the optimal partition of the data points such that all points in the same class of the partition correspond to the same affine piece of f , and then do linear regression in each class of the partition. Altenatively, one can think of this as guessing the interval (xi, xi+1) of data points where the w − 1 breakpoints of the piecewise linear function will lie, and then doing linear regression between the breakpoints. More formally, we parametrize piecewise linear functions with w pieces by the w slope-intercept values (a1, b1), . . . , (a2, b2), . . . , (aw, bw) of the w different pieces. This means that between breakpoints j and j + 1, 1 ≤ j ≤ w − 2, the function is given by f(x) = aj+1x+ bj+1, and the first and last pieces are a1x+ b1 and awx+ bw, respectively. Define I to be the set of all (w − 1)-tuples (i1, . . . , iw−1) of natural numbers such that 1 ≤ i1 ≤ . . . ≤ iw−1 ≤ D. Given a fixed tuple I = (i1, . . . , iw−1) ∈ I, we wish to search through all piecewise linear functions whose breakpoints, in order, appear in the intervals (xi1 , xi1+1), (xi2 , xi2+1), . . . , (xiw−1 , xiw−1+1). Define also S = {−1, 1}w−1. Any S ∈ S will have the following interpretation: if Sj = 1 then aj ≤ aj+1, and if Sj = −1 then aj ≥ aj+1. Now for every I ∈ I and S ∈ S, requiring a piecewise linear function that respects the conditions imposed by I and S is easily seen to be equivalent to imposing the following linear inequalities on the parameters (a1, b1), . . . , (a2, b2), . . . , (aw, bw): Sj(bj+1 − bj − (aj − aj+1)xij ) ≥ 0 Sj(bj+1 − bj − (aj − aj+1)xij+1) ≤ 0 Sj(aj+1 − aj) ≥ 0 (C.4) Let the set of piecewise linear functions whose breakpoints satisfy the above be denoted by PWL1I,S for I ∈ I, S ∈ S. Given a particular I ∈ I, we define D1 := {xi : i ≤ i1}, Dj := {xi : ij−1 < i ≤ i1} j = 2, . . . , w − 1, Dw := {xi : i > iw−1} . Observe that min{ D∑ i=1 `(f(xi)−yi) : f ∈ PWL1I,S} = min{ w∑ j=1 ( ∑ i∈Dj `(aj ·xi+bj−yi) ) : (aj , bj) satisfy (C.4)} (C.5) The right hand side of the above equation is the problem of minimizing a convex objective subject to linear constraints. Now, to solve (C.3), we need to simply solve the problem (C.5) for all I ∈ I, S ∈ S and pick the minimum. Since |I| = ( D w ) = O(Dw) and |S| = 2w−1 we need to solveO(2w ·Dw) convex optimization problems, each taking time O(poly(D)). Therefore, the total running time is O((2D)wpoly(D)). D AUXILIARY LEMMAS Now we will collect some straightforward observations that will be used often. The following operations preserve the property of being representable by a ReLU DNN. Lemma D.1. [Function Composition] If f1 : Rd → Rm is represented by a d,m ReLU DNN with depth k1 + 1 and size s1, and f2 : Rm → Rn is represented by an m,n ReLU DNN with depth k2 + 1 and size s2, then f2 ◦ f1 can be represented by a d, n ReLU DNN with depth k1 + k2 + 1 and size s1 + s2. Proof. Follows from (1.1) and the fact that a composition of affine transformations is another affine transformation. Lemma D.2. 
[Function Addition] If f1 : Rn → Rm is represented by a n,m ReLU DNN with depth k + 1 and size s1, and f2 : Rn → Rm is represented by a n,m ReLU DNN with depth k + 1 and size s2, then f1 +f2 can be represented by a n,m ReLU DNN with depth k+1 and size s1 +s2. Proof. We simply put the two ReLU DNNs in parallel and combine the appropriate coordinates of the outputs. Lemma D.3. [Taking maximums/minimums] Let f1, . . . , fm : Rn → R be functions that can each be represented by Rn → R ReLU DNNs with depths ki + 1 and size si, i = 1, . . . ,m. Then the function f : Rn → R defined as f(x) := max{f1(x), . . . , fm(x)} can be represented by a ReLU DNN of depth at most max{k1, . . . , km}+ log(m) + 1 and size at most s1 + . . . sm + 4(2m− 1). Similarly, the function g(x) := min{f1(x), . . . , fm(x)} can be represented by a ReLU DNN of depth at most max{k1, . . . , km}+ dlog(m)e+ 1 and size at most s1 + . . . sm + 4(2m− 1). Proof. We prove this by induction on m. The base case m = 1 is trivial. For m ≥ 2, consider g1 := max{f1, . . . , fbm2 c} and g2 := max{fbm2 c+1, . . . , fm}. By the induction hypothesis (since bm2 c, dm2 e < m when m ≥ 2), g1 and g2 can be represented by ReLU DNNs of depths at most max{k1, . . . , kbm2 c}+ dlog(b m 2 c)e+ 1 and max{kbm2 c+1, . . . , km}+ dlog(d m 2 e)e+ 1 respectively, and sizes at most s1 + . . . sbm2 c+4(2b m 2 c−1) and sbm2 c+1 + . . .+sm+4(2b m 2 c−1), respectively. Therefore, the function G : Rn → R2 given by G(x) = (g1(x), g2(x)) can be implemented by a ReLU DNN with depth at most max{k1, . . . , km} + dlog(dm2 e)e + 1 and size at most s1 + . . . + sm + 4(2m− 2). We now show how to represent the function T : R2 → R defined as T (x, y) = max{x, y} = x+y 2 + |x−y| 2 by a 2-layer ReLU DNN with size 4 – see Figure 3. The result now follows from the fact that f = T ◦G and Lemma D.1. Lemma D.4. Any affine transformation T : Rn → Rm is representable by a 2-layer ReLU DNN of size 2m. Proof. Simply use the fact that T = (I ◦ σ ◦ T ) + (−I ◦ σ ◦ (−T )), and the right hand side can be represented by a 2-layer ReLU DNN of size 2m using Lemma D.2. Lemma D.5. Let f : R → R be a function represented by a R → R ReLU DNN with depth k + 1 and widths w1, . . . , wk of the k hidden layers. Then f is a PWL function with at most 2k−1 · (w1 + 1) · w2 · . . . · wk pieces. Proof. We prove this by induction on k. The base case is k = 1, i.e, we have a 2-layer ReLU DNN. Since every activation node can produce at most one breakpoint in the piecewise linear function, we can get at most w1 breakpoints, i.e., w1 + 1 pieces. Now for the induction step, assume that for some k ≥ 1, any R→ R ReLU DNN with depth k + 1 and widths w1, . . . , wk of the k hidden layers produces at most 2k−1 · (w1 + 1) ·w2 · . . . ·wk pieces. Consider any R → R ReLU DNN with depth k + 2 and widths w1, . . . , wk+1 of the k + 1 hidden layers. Observe that the input to any node in the last layer is the output of a R → R ReLU DNN with depth k + 1 and widths w1, . . . , wk. By the induction hypothesis, the input to this node in the last layer is a piecewise linear function f with at most 2k−1 · (w1 +1) ·w2 · . . . ·wk pieces. When we apply the activation, the new function g(x) = max{0, f(x)}, which is the output of this node, may have at most twice the number of pieces as f , because each original piece may be intersected by the x-axis; see Figure 4. Thus, after going through the layer, we take an affine combination of wk+1 functions, each with at most 2 · (2k−1 · (w1 + 1) ·w2 · . . . ·wk) pieces. 
In all, we can therefore get at most 2 · (2^{k−1} · (w1 + 1) · w2 · . . . · wk) · w_{k+1} pieces, which is equal to 2^k · (w1 + 1) · w2 · . . . · wk · w_{k+1}, and the induction step is completed.

Lemma D.5 has the following consequence about the depth and size tradeoffs for expressing functions with a given number of pieces.

Lemma D.6. Let f : R → R be a piecewise linear function with p pieces. If f is represented by a ReLU DNN with depth k + 1, then it must have size at least (1/2) k p^{1/k} − 1. Conversely, any piecewise linear function f that is represented by a ReLU DNN of depth k + 1 and size at most s can have at most (2s/k)^k pieces.

Proof. Let the widths of the k hidden layers be w1, . . . , wk. By Lemma D.5, we must have

2^{k−1} · (w1 + 1) · w2 · . . . · wk ≥ p.   (D.1)

By the AM-GM inequality, minimizing the size w1 + w2 + . . . + wk subject to (D.1) means setting w1 + 1 = w2 = . . . = wk. This implies that w1 + 1 = w2 = . . . = wk ≥ (1/2) p^{1/k}. The first statement follows. The second statement follows using the AM-GM inequality again, this time with a restriction on w1 + w2 + . . . + wk.
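As a quick empirical cross-check of Lemma D.5 (an illustration, not part of the paper: the names relu_dnn_1d and count_pieces are made up, and counting pieces by detecting slope changes on a fine grid is only a heuristic), one can sample a random R → R ReLU DNN and compare the observed number of affine pieces with the bound 2^{k−1}(w1 + 1)w2 · . . . · wk.

```python
# Minimal sanity check of the piece-count bound in Lemma D.5.
import numpy as np

rng = np.random.default_rng(0)

def relu_dnn_1d(widths, rng):
    """Sample a random R -> R ReLU DNN with the given hidden widths."""
    dims = [1] + list(widths) + [1]
    params = [(rng.standard_normal((dims[i + 1], dims[i])),
               rng.standard_normal(dims[i + 1])) for i in range(len(dims) - 1)]
    def f(x):
        h = np.atleast_2d(x).T                   # shape (num_points, 1)
        for i, (W, b) in enumerate(params):
            h = h @ W.T + b
            if i < len(params) - 1:              # ReLU on all but the output layer
                h = np.maximum(h, 0.0)
        return h.ravel()
    return f

def count_pieces(f, lo=-10.0, hi=10.0, n=200_001, tol=1e-6):
    """Count maximal affine pieces of f on [lo, hi] via slope changes on a grid."""
    x = np.linspace(lo, hi, n)
    slopes = np.diff(f(x)) / np.diff(x)
    flags = np.abs(np.diff(slopes)) > tol
    # A kink strictly inside a grid cell triggers two consecutive flags, so we
    # count runs of consecutive flags rather than individual flags.
    starts = flags & ~np.concatenate(([False], flags[:-1]))
    return int(np.sum(starts)) + 1

widths = (4, 3, 3)                               # k = 3 hidden layers
k = len(widths)
bound = 2 ** (k - 1) * (widths[0] + 1) * int(np.prod(widths[1:]))
pieces = count_pieces(relu_dnn_1d(widths, rng))
print(f"observed pieces = {pieces}, Lemma D.5 bound = {bound}")
assert pieces <= bound
```

Random weights typically realise far fewer pieces than the combinatorial bound, so the check only confirms that the bound is never violated, not that it is tight.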
1. What is the main contribution of the paper regarding ReLU networks?
2. How does the reviewer assess the significance and novelty of the results presented in the paper?
3. Are there any concerns or questions about the paper's focus, organization, or clarity?
4. Do the results of the paper build upon or extend previous research in the field? If so, how?
5. Are there any limitations or weaknesses in the paper's approach or methodology?
Review
Review

The paper presents a series of definitions and results elucidating details about the functions representable by ReLU networks, their parametrisation, and gaps between deep and shallower nets. The paper is easy to read, although it does not seem to have a main focus (exponential gaps vs. optimisation vs. universal approximation). The paper makes a nice contribution to the details of deep neural networks with ReLUs, although I find the contributed results slightly overstated. The 1d results are not difficult to derive from previous results. The advertised new results on the asymptotic behaviour assume a first layer that dominates the size of the network. The optimisation method appears close to brute force and is limited to 2 layers.

Theorem 3.1 appears to be easily deduced from the results of Montufar, Pascanu, Cho, Bengio, 2014. For 1d inputs, each layer will multiply the number of regions at most by the number of units in the layer, leading to the condition w' \geq w^{k/k'}.

Theorem 3.2 is simply giving a parametrization of the functions, removing symmetries of the units in the layers.

In the list at the top of page 5: note that the function classes might be characterized in terms of countable properties, such as the number of linear regions as discussed in MPCB, but still they build a continuum of functions. Similarly, on page 5, ``Moreover, for fixed n, k, s, our functions are smoothly parameterized'' should not be a surprise.

In the last paragraph of Section 3, ``m = w^k-1'' is a very big first layer. This also seems to subsume the first condition, s \geq w^k-1 + w(k-1), for the network discussed in Theorem 3.9.

In the last paragraph of Section 3, regarding ``To the best of our knowledge'': in the construction presented here, the network's size is essentially in the layer of size m. Under such conditions, Corollary 6 of MPCB also reads as s^n. Here it is irrelevant whether one artificially increases the depth of the network by additional, very narrow, layers, which do not contribute to the asymptotic number of units.

The function class Zonotope is a composition of two parts. It would be interesting to consider also a single construction, instead of the composition of two constructions. For Theorem 3.9 (ii), it would be nice to have a construction where the size becomes 2m + wk when k' = k.

Section 4, while interesting, appears to be somewhat disconnected from the rest of the paper.

In Theorem 2.3, explain why the two-layer case is limited to n = 1. At some point in the first 4 pages it would be good to explain what is meant by ``hard'' functions (e.g. functions that are hard to represent, as opposed to step functions, etc.).
ICLR
Title Understanding Deep Neural Networks with Rectified Linear Units Abstract In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to global optimality with runtime polynomial in the data size albeit exponential in the input dimension. Further, we improve on the known lower bounds on size (from exponential to super exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of “hard” functions, contrary to countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number k there exists a function representable by a ReLU DNN with k hidden layers and total size k, such that any ReLU DNN with at most k hidden layers will require at least 1 2k k+1− 1 total nodes. Finally, for the family of R → R DNNs with ReLU activations, we show a new lowerbound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and most distinctively our lowerbound is demonstrated by an explicit construction of a smoothly parameterized family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory. N/A k+1− 1 total nodes. Finally, for the family of Rn → R DNNs with ReLU activations, we show a new lowerbound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and most distinctively our lowerbound is demonstrated by an explicit construction of a smoothly parameterized family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory. 1 INTRODUCTION Deep neural networks (DNNs) provide an excellent family of hypotheses for machine learning tasks such as classification. Neural networks with a single hidden layer of finite size can represent any continuous function on a compact subset of Rn arbitrary well. The universal approximation result was first given by Cybenko in 1989 for sigmoidal activation function (Cybenko, 1989), and later generalized by Hornik to an arbitrary bounded and nonconstant activation function Hornik (1991). Furthermore, neural networks have finite VC dimension (depending polynomially on the number of edges in the network), and therefore, are PAC (probably approximately correct) learnable using a sample of size that is polynomial in the size of the networks Anthony & Bartlett (1999). However, neural networks based methods were shown to be computationally hard to learn (Anthony & Bartlett, 1999) and had mixed empirical success. Consequently, DNNs fell out of favor by late 90s. Recently, there has been a resurgence of DNNs with the advent of deep learning LeCun et al. (2015). Deep learning, loosely speaking, refers to a suite of computational techniques that have been developed recently for training DNNs. It started with the work of Hinton et al. (2006), which gave empirical evidence that if DNNs are initialized properly (for instance, using unsupervised pre-training), then we can find good solutions in a reasonable amount of runtime. This work was soon followed by a series of early successes of deep learning at significantly improving the state-of-the-art in speech recognition Hinton et al. (2012). 
Since then, deep learning has received immense attention from the machine learning community with several state-of-the-art AI systems in speech recognition, image classification, and natural language processing based on deep neural nets Hinton et al. (2012); Dahl et al. (2013); Krizhevsky et al. (2012); Le (2013); Sutskever et al. (2014). While there is less of evidence now that pre-training actually helps, several other solutions have since been put forth ∗Department of Computer Science, Email: arora@cs.jhu.edu †Department of Applied Mathematics and Statistics, Email: basu.amitabh@jhu.edu ‡Department of Computer Science, Email: mianjy@jhu.edu §Department of Applied Mathematics and Statistics, Email: amukhe14@jhu.edu to address the issue of efficiently training DNNs. These include heuristics such as dropouts Srivastava et al. (2014), but also considering alternate deep architectures such as convolutional neural networks Sermanet et al. (2014), deep belief networks Hinton et al. (2006), and deep Boltzmann machines Salakhutdinov & Hinton (2009). In addition, deep architectures based on new non-saturating activation functions have been suggested to be more effectively trainable – the most successful and widely popular of these is the rectified linear unit (ReLU) activation, i.e., σ(x) = max{0, x}, which is the focus of study in this paper. In this paper, we formally study deep neural networks with rectified linear units; we refer to these deep architectures as ReLU DNNs. Our work is inspired by these recent attempts to understand the reason behind the successes of deep learning, both in terms of the structure of the functions represented by DNNs, Telgarsky (2015; 2016); Kane & Williams (2015); Shamir (2016), as well as efforts which have tried to understand the non-convex nature of the training problem of DNNs better Kawaguchi (2016); Haeffele & Vidal (2015). Our investigation of the function space represented by ReLU DNNs also takes inspiration from the classical theory of circuit complexity; we refer the reader to Arora & Barak (2009); Shpilka & Yehudayoff (2010); Jukna (2012); Saptharishi (2014); Allender (1998) for various surveys of this deep and fascinating field. In particular, our gap results are inspired by results like the ones by Hastad Hastad (1986), Razborov Razborov (1987) and Smolensky Smolensky (1987) which show a strict separation of complexity classes. We make progress towards similar statements with deep neural nets with ReLU activation. 1.1 NOTATION AND DEFINITIONS We extend the ReLU activation function to vectors x ∈ Rn through entry-wise operation: σ(x) = (max{0, x1},max{0, x2}, . . . ,max{0, xn}). For any (m,n) ∈ N, let Anm and Lnm denote the class of affine and linear transformations from Rm → Rn, respectively. Definition 1. [ReLU DNNs, depth, width, size] For any number of hidden layers k ∈ N, input and output dimensions w0, wk+1 ∈ N, a Rw0 → Rwk+1 ReLU DNN is given by specifying a sequence of k natural numbers w1, w2, . . . , wk representing widths of the hidden layers, a set of k affine transformations Ti : Rwi−1 → Rwi for i = 1, . . . , k and a linear transformation Tk+1 : Rwk → Rwk+1 corresponding to weights of the hidden layers. Such a ReLU DNN is called a (k + 1)-layer ReLU DNN, and is said to have k hidden layers. The function f : Rn1 → Rn2 computed or represented by this ReLU DNN is f = Tk+1 ◦ σ ◦ Tk ◦ · · · ◦ T2 ◦ σ ◦ T1, (1.1) where ◦ denotes function composition. The depth of a ReLU DNN is defined as k + 1. The width of a ReLU DNN is max{w1, . . . , wk}. 
The size of the ReLU DNN is w1 + w2 + . . .+ wk. Definition 2. We denote the class of Rw0 → Rwk+1 ReLU DNNs with k hidden layers of widths {wi}ki=1 by F{wi}k+1i=0 , i.e. F{wi}k+1i=0 := {Tk+1 ◦ σ ◦ Tk ◦ · · · ◦ σ ◦ T1 : Ti ∈ A wi wi−1∀i ∈ {1, . . . , k}, Tk+1 ∈ Lwk+1wk } (1.2) Definition 3. [Piecewise linear functions] We say a function f : Rn → R is continuous piecewise linear (PWL) if there exists a finite set of polyhedra whose union is Rn, and f is affine linear over each polyhedron (note that the definition automatically implies continuity of the function because the affine regions are closed and cover Rn, and affine functions are continuous). The number of pieces of f is the number of maximal connected subsets of Rn over which f is affine linear (which is finite). Many of our important statements will be phrased in terms of the following simplex. Definition 4. Let M > 0 be any positive real number and p ≥ 1 be any natural number. Define the following set: ∆pM := {x ∈ Rp : 0 < x1 < x2 < . . . < xp < M}. 2 EXACT CHARACTERIZATION OF FUNCTION CLASS REPRESENTED BY RELU DNNS One of the main advantages of DNNs is that they can represent a large family of functions with a relatively small number of parameters. In this section, we give an exact characterization of the functions representable by ReLU DNNs. Moreover, we show how structural properties of ReLU DNNs, specifically their depth and width, affects their expressive power. It is clear from definition that any function from Rn → R represented by a ReLU DNN is a continuous piecewise linear (PWL) function. In what follows, we show that the converse is also true, that is any PWL function is representable by a ReLU DNN. In particular, the following theorem establishes a one-to-one correspondence between the class of ReLU DNNs and PWL functions. Theorem 2.1. Every Rn → R ReLU DNN represents a piecewise linear function, and every piecewise linear function Rn → R can be represented by a ReLU DNN with at most dlog2(n + 1)e + 1 depth. Proof Sketch: It is clear that any function represented by a ReLU DNN is a PWL function. To see the converse, we first note that any PWL function can be represented as a linear combination of piecewise linear convex functions. More formally, by Theorem 1 in (Wang & Sun, 2005), for every piecewise linear function f : Rn → R, there exists a finite set of affine linear functions `1, . . . , `k and subsets S1, . . . , Sp ⊆ {1, . . . , k} (not necessarily disjoint) where each Si is of cardinality at most n+ 1, such that f = p∑ j=1 sj ( max i∈Sj `i ) , (2.1) where sj ∈ {−1,+1} for all j = 1, . . . , p. Since a function of the form maxi∈Sj `i is a piecewise linear convex function with at most n + 1 pieces (because |Sj | ≤ n + 1), Equation (2.1) says that any continuous piecewise linear function (not necessarily convex) can be obtained as a linear combination of piecewise linear convex functions each of which has at most n + 1 affine pieces. Furthermore, Lemmas D.1, D.2 and D.3 in the Appendix (see supplementary material), show that composition, addition, and pointwise maximum of PWL functions are also representable by ReLU DNNs. In particular, in Lemma D.3 we note that max{x, y} = x+y2 + |x−y| 2 is implementable by a two layer ReLU network and use this construction in an inductive manner to show that maximum of n+ 1 numbers can be computed using a ReLU DNN with depth at most dlog2(n+ 1)e. 
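To make the size-4 gadget in the proof sketch concrete, here is a small numpy sketch (not code from the paper; the names relu_max2 and relu_max are illustrative) of the 2-layer ReLU network computing max{x, y} = (x + y)/2 + |x − y|/2, together with the pairwise-tree reduction that realises the maximum of several numbers in logarithmic depth.

```python
import numpy as np

# Hidden layer: relu(x + y), relu(-x - y), relu(x - y), relu(y - x)
W1 = np.array([[1.0, 1.0],
               [-1.0, -1.0],
               [1.0, -1.0],
               [-1.0, 1.0]])
# Output weights: (x + y)/2 = 0.5*relu(x + y) - 0.5*relu(-x - y),
#                 |x - y|/2 = 0.5*relu(x - y) + 0.5*relu(y - x)
w2 = np.array([0.5, -0.5, 0.5, 0.5])

def relu_max2(x, y):
    """2-layer ReLU DNN of size 4 computing max{x, y}."""
    h = np.maximum(W1 @ np.array([x, y]), 0.0)
    return float(w2 @ h)

def relu_max(values):
    """Maximum of a list of numbers via a tree of max2 gadgets, depth ~ log2(len)."""
    vals = list(values)
    while len(vals) > 1:
        nxt = [relu_max2(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:            # an odd element is passed through unchanged
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

rng = np.random.default_rng(1)
for _ in range(5):
    x, y = rng.standard_normal(2)
    assert np.isclose(relu_max2(x, y), max(x, y))
    v = rng.standard_normal(7)
    assert np.isclose(relu_max(v), np.max(v))
print("ReLU max gadget agrees with numpy")
```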
While Theorem 2.1 gives an upper bound on the depth of the networks needed to represent all continuous piecewise linear functions on Rn, it does not give any tight bounds on the size of the networks that are needed to represent a given piecewise linear function. For n = 1, we give tight bounds on size as follows: Theorem 2.2. Given any piecewise linear function R→ R with p pieces there exists a 2-layer DNN with at most p nodes that can represent f . Moreover, any 2-layer DNN that represents f has size at least p− 1. Finally, the main result of this section follows from Theorem 2.1, and well-known facts that the piecewise linear functions are dense in the family of compactly supported continuous functions and the family of compactly supported continuous functions are dense inLq(Rn) (Royden & Fitzpatrick, 2010)). Recall that Lq(Rn) is the space of Lebesgue integrable functions f such that ∫ |f |qdµ <∞, where µ is the Lebesgue measure on Rn (see Royden Royden & Fitzpatrick (2010)). Theorem 2.3. Every function in Lq(Rn), (1 ≤ q ≤ ∞) can be arbitrarily well-approximated in the Lq norm (which for a function f is given by ||f ||q = ( ∫ |f |q)1/q) by a ReLU DNN function with at most dlog2(n + 1)e hidden layers. Moreover, for n = 1, any such Lq function can be arbitrarily well-approximated by a 2-layer DNN, with tight bounds on the size of such a DNN in terms of the approximation. Proofs of Theorems 2.2 and 2.3 are provided in Appendix A. We would like to remark that a weaker version of Theorem 2.1 was observed in (Goodfellow et al., 2013, Proposition 4.1) (with no bound on the depth), along with a universal approximation theorem (Goodfellow et al., 2013, Theorem 4.3) similar to Theorem 2.3. The authors of Goodfellow et al. (2013) also used a previous result of Wang (Wang, 2004) for obtaining their result. In a subsequent work Boris Hanin (Hanin, 2017) has, among other things, found a width and depth upper bound for ReLU net representation of positive PWL functions on [0, 1]n. The width upperbound is n+3 for general positive PWL functions and n + 1 for convex positive PWL functions. For convex positive PWL functions his depth upper bound is sharp if we disallow dead ReLUs. 3 BENEFITS OF DEPTH Success of deep learning has been largely attributed to the depth of the networks, i.e. number of successive affine transformations followed by nonlinearities, which is shown to be extracting hierarchical features from the data. In contrast, traditional machine learning frameworks including support vector machines, generalized linear models, and kernel machines can be seen as instances of shallow networks, where a linear transformation acts on a single layer of nonlinear feature extraction. In this section, we explore the importance of depth in ReLU DNNs. In particular, in Section 3.1, we provide a smoothly parametrized family of R→ R “hard” functions representable by ReLU DNNs, which requires exponentially larger size for a shallower network. Furthermore, in Section 3.2, we construct a continuum of Rn → R “hard” functions representable by ReLU DNNs, which to the best of our knowledge is the first explicit construction of ReLU DNN functions whose number of affine pieces grows exponentially with input dimension. The proofs of the theorems in this section are provided in Appendix B. 3.1 CIRCUIT LOWER BOUNDS FOR R→ R RELU DNNS In this section, we are only concerned about R → R ReLU DNNs, i.e. both input and output dimensions are equal to one. The following theorem shows the depth-size trade-off in this setting. 
Theorem 3.1. For every pair of natural numbers k ≥ 1, w ≥ 2, there exists a family of hard functions representable by a R → R (k + 1)-layer ReLU DNN of width w such that if it is also representable by a (k′ + 1)-layer ReLU DNN for any k′ ≤ k, then this (k′ + 1)-layer ReLU DNN has size at least 12k ′w k k′ − 1. In fact our family of hard functions described above has a very intricate structure as stated below. Theorem 3.2. For every k ≥ 1,w ≥ 2, every member of the family of hard functions in Theorem 3.1 has wk pieces and this family can be parametrized by⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times , (3.1) i.e., for every point in the set above, there exists a distinct function with the stated properties. The following is an immediate corollary of Theorem 3.1 by choosing the parameters carefully. Corollary 3.3. For every k ∈ N and > 0, there is a family of functions defined on the real line such that every function f from this family can be represented by a (k1+ ) + 1-layer DNN with size k2+ and if f is represented by a k+1-layer DNN, then this DNN must have size at least 12k ·kk −1. Moreover, this family can be parametrized as, ∪M>0∆k 2+ −1 M . A particularly illuminative special case is obtained by setting = 1 in Corollary 3.3: Corollary 3.4. For every natural number k ∈ N, there is a family of functions parameterized by the set ∪M>0∆k 3−1 M such that any f from this family can be represented by a k 2 + 1-layer DNN with k3 nodes, and every k + 1-layer DNN that represents f needs at least 12k k+1 − 1 nodes. We can also get hardness of approximation versions of Theorem 3.1 and Corollaries 3.3 and 3.4, with the same gaps (upto constant terms), using the following theorem. Theorem 3.5. For every k ≥ 1, w ≥ 2, there exists a function fk,w that can be represented by a (k + 1)-layer ReLU DNN with w nodes in each layer, such that for all δ > 0 and k′ ≤ k the following holds: inf g∈Gk′,δ ∫ 1 x=0 |fk,w(x)− g(x)|dx > δ, where Gk′,δ is the family of functions representable by ReLU DNNs with depth at most k′ + 1, and size at most k′w k/k′ (1−4δ)1/k′ 21+1/k′ . The depth-size trade-off results in Theorems 3.1, and 3.5 extend and improve Telgarsky’s theorems from (Telgarsky, 2015; 2016) in the following three ways: (i) If we use our Theorem 3.5 to the pair of neural nets considered by Telgarsky in Theorem 1.1 in Telgarsky (2016) which are at depths k3 (of size also scaling as k3) and k then for this purpose of approximation in the `1−norm we would get a size lower bound for the shallower net which scales as Ω(2k 2 ) which is exponentially (in depth) larger than the lower bound of Ω(2k) that Telgarsky can get for this scenario. (ii) Telgarsky’s family of hard functions is parameterized by a single natural number k. In contrast, we show that for every pair of natural numbers w and k, and a point from the set in equation 3.1, there exists a “hard” function which to be represented by a depth k′ network would need a size of at least w k k′ k′. With the extra flexibility of choosing the parameter w, for the purpose of showing gaps in representation ability of deep nets we can shows size lower bounds which are super-exponential in depth as explained in Corollaries 3.3 and 3.4. (iii) A characteristic feature of the “hard” functions in Boolean circuit complexity is that they are usually a countable family of functions and not a “smooth” family of hard functions. 
In fact, in the last section of Telgarsky (2015), Telgarsky states this as a “weakness” of the state-of-the-art results on “hard” functions for both Boolean circuit complexity and neural nets research. In contrast, we provide a smoothly parameterized family of “hard” functions in Section 3.1 (parametrized by the set in equation 3.1). Such a continuum of hard functions wasn’t demonstrated before this work. We point out that Telgarsky’s results in (Telgarsky, 2016) apply to deep neural nets with a host of different activation functions, whereas, our results are specifically for neural nets with rectified linear units. In this sense, Telgarsky’s results from (Telgarsky, 2016) are more general than our results in this paper, but with weaker gap guarantees. Eldan-Shamir (Shamir, 2016; Eldan & Shamir, 2016) show that there exists an Rn → R function that can be represented by a 3-layer DNN, that takes exponential in n number of nodes to be approximated to within some constant by a 2-layer DNN. While their results are not immediately comparable with Telgarsky’s or our results, it is an interesting open question to extend their results to a constant depth hierarchy statement analogous to the recent result of Rossman et al (Rossman et al., 2015). We also note that in last few years, there has been much effort in the community to show size lowerbounds on ReLU DNNs trying to approximate various classes of functions which are themselves not necessarily exactly representable by ReLU DNNs (Yarotsky, 2016; Liang & Srikant, 2016; Safran & Shamir, 2017). 3.2 A CONTINUUM OF HARD FUNCTIONS FOR Rn → R FOR n ≥ 2 One measure of complexity of a family of Rn → R “hard” functions represented by ReLU DNNs is the asymptotics of the number of pieces as a function of dimension n, depth k + 1 and size s of the ReLU DNNs. More precisely, suppose one has a family H of functions such that for every n, k, w ∈ N the family contains at least one Rn → R function representable by a ReLU DNN with depth at most k+ 1 and maximum width at most w. The following definition formalizes a notion of complexity for such aH. Definition 5 (compH(n, k, w)). The measure compH(n, k, w) is defined as the maximum number of pieces (see Definition 3) of a Rn → R function fromH that can be represented by a ReLU DNN with depth at most k + 1 and maximum width at most w. Similar measures have been studied in previous works Montufar et al. (2014); Pascanu et al. (2013); Raghu et al. (2016). The best known families H are the ones from Theorem 4 of (Montufar et al., 2014) and a mild generalization of Theorem 1.1 of (Telgarsky, 2016) to k layers of ReLU activations with width w; these constructions achieve ( b(wn )c )(k−1)n ( ∑n j=0 ( w j ) )and compH(n, k, s) = O(w k), respectively. At the end of this section we would explain the precise sense in which we improve on these numbers. An analysis of this complexity measure is done using integer programming techniques in (Serra et al., 2017). Definition 6. Let b1, . . . ,bm ∈ Rn. The zonotope formed by b1, . . . ,bm ∈ Rn is defined as Z(b1, . . . ,bm) := {λ1b1 + . . .+ λmbm : −1 ≤ λi ≤ 1, i = 1, . . . ,m}. The set of vertices of Z(b1, . . . ,bm) will be denoted by vert(Z(b1, . . . ,bm)). The support function γZ(b1,...,bm) : Rn → R associated with the zonotope Z(b1, . . . ,bm) is defined as γZ(b1,...,bm)(r) = max x∈Z(b1,...,bm) 〈r,x〉. The following results are well-known in the theory of zonotopes (Ziegler, 1995). Theorem 3.6. The following are all true. 1. | vert(Z(b1, . . . ,bm))| ≤∑n−1i=0 (m−1i ). 
The set of (b1, . . . ,bm) ∈ Rn × . . .× Rn such that this does not hold at equality is a 0 measure set. 2. γZ(b1,...,bm)(r) = maxx∈Z(b1,...,bm)〈r,x〉 = maxx∈vert(Z(b1,...,bm))〈r,x〉, and γZ(b1,...,bm) is therefore a piecewise linear function with | vert(Z(b1, . . . ,bm))| pieces. 3. γZ(b1,...,bm)(r) = |〈r,b1〉|+ . . .+ |〈r,bm〉|. Definition 7 (extremal zonotope set). The set S(n,m) will denote the set of (b1, . . . ,bm) ∈ Rn × . . . × Rn such that | vert(Z(b1, . . . ,bm))| = ∑n−1i=0 (m−1i ). S(n,m) is the so-called “extremal zonotope set”, which is a subset of Rnm, whose complement has zero Lebesgue measure in Rnm. Lemma 3.7. Given any b1, . . . ,bm ∈ Rn, there exists a 2-layer ReLU DNN with size 2m which represents the function γZ(b1,...,bm)(r). Definition 8. For p ∈ N and a ∈ ∆pM , we define a function ha : R → R which is piecewise linear over the segments (−∞, 0], [0,a1], [a1,a2], . . . , [ap,M ], [M,+∞) defined as follows: ha(x) = 0 for all x ≤ 0, ha(ai) = M(i mod 2), and ha(M) = M−ha(ap) and for x ≥M , ha(x) is a linear continuation of the piece over the interval [ap,M ]. Note that the function has p+ 2 pieces, with the leftmost piece having slope 0. Furthermore, for a1, . . . ,ak ∈ ∆pM , we denote the composition of the functions ha1 , ha2 , . . . , hak by Ha1,...,ak := hak ◦ hak−1 ◦ . . . ◦ ha1 . Proposition 3.8. Given any tuple (b1, . . . ,bm) ∈ S(n,m) and any point (a1, . . . ,ak) ∈ ⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times , the function ZONOTOPEnk,w,m[a 1, . . . ,ak,b1, . . . ,bm] := Ha1,...,ak ◦ γZ(b1,...,bm) has (m − 1)n−1wk pieces and it can be represented by a k + 2 layer ReLU DNN with size 2m+ wk. Finally, we are ready to state the main result of this section. Theorem 3.9. For every tuple of natural numbers n, k,m ≥ 1 and w ≥ 2, there exists a family of Rn → R functions, which we call ZONOTOPEnk,w,m with the following properties: (i) Every f ∈ ZONOTOPEnk,w,m is representable by a ReLU DNN of depth k + 2 and size 2m+ wk, and has (∑n−1 i=0 ( m−1 i )) wk pieces. (ii) Consider any f ∈ ZONOTOPEnk,w,m. If f is represented by a (k′ + 1)- layer DNN for any k′ ≤ k, then this (k′ + 1)-layer DNN has size at least max { 1 2 (k ′w k k′n ) · (m− 1)(1− 1n ) 1k′ − 1 , w k k′ n1/k′ k′ } . (iii) The family ZONOTOPEnk,w,m is in one-to-one correspondence with S(n,m)× ⋃ M>0 (∆w−1M ×∆w−1M × . . .×∆w−1M )︸ ︷︷ ︸ k times . Comparison to the results in (Montufar et al., 2014) Firstly we note that the construction in (Montufar et al., 2014) requires all the hidden layers to have width at least as big as the input dimensionality n. In contrast, we do not impose such restrictions and the network size in our construction is independent of the input dimensionality. Thus our result probes networks with bottleneck architectures whose complexity cant be seen from their result. Secondly, in terms of our complexity measure, there seem to be regimes where our bound does better. One such regime, for example, is when n ≤ w < 2n and k ∈ Ω( nlog(n) ), by setting in our construction m < n. Thirdly, it is not clear to us whether the construction in (Montufar et al., 2014) gives a smoothly parameterized family of functions other than by introducing small perturbations of the construction in their paper. In contrast, we have a smoothly parameterized family which is in one-to-one correspondence with a well-understood manifold like the higher-dimensional torus. 4 TRAINING 2-LAYER Rn → R RELU DNNS TO GLOBAL OPTIMALITY In this section we consider the following empirical risk minimization problem. 
Given D data points (xi, yi) ∈ Rn × R, i = 1, . . . , D, find the function f represented by 2-layer Rn → R ReLU DNNs of width w, that minimizes the following optimization problem min f∈F{n,w,1} 1 D D∑ i=1 `(f(xi), yi) ≡ min T1∈Awn , T2∈L1w 1 D D∑ i=1 ` ( T2(σ(T1(xi))), yi ) (4.1) where ` : R × R → R is a convex loss function (common loss functions are the squared loss, `(y, y′) = (y − y′)2, and the hinge loss function given by `(y, y′) = max{0, 1 − yy′}). Our main result of this section gives an algorithm to solve the above empirical risk minimization problem to global optimality. Theorem 4.1. There exists an algorithm to find a global optimum of Problem 4.1 in time O(2w(D)nwpoly(D,n,w)). Note that the running time O(2w(D)nwpoly(D,n,w)) is polynomial in the data size D for fixed n,w. Proof Sketch: A full proof of Theorem 4.1 is included in Appendix C. Here we provide a sketch of the proof. When the empirical risk minimization problem is viewed as an optimization problem in the space of weights of the ReLU DNN, it is a nonconvex, quadratic problem. However, one can instead search over the space of functions representable by 2-layer DNNs by writing them in the form similar to (2.1). This breaks the problem into two parts: a combinatorial search and then a convex problem that is essentially linear regression with linear inequality constraints. This enables us to guarantee global optimality. Algorithm 1 Empirical Risk Minimization 1: function ERM(D) . Where D = {(xi, yi)}Di=1 ⊂ Rn × R 2: S = {+1,−1}w . All possible instantiations of top layer weights 3: Pi = {(P i+, P i−)}, i = 1, . . . , w . All possible partitions of data into two parts 4: P = P1 × P2 × · · · × Pw 5: count = 1 . Counter 6: for s ∈ S do 7: for {(P i+, P i−)}wi=1 ∈ P do 8: loss(count) = minimize: ã,b̃ D∑ j=1 ∑ i:j∈P i+ `(si(ã i · xj + b̃i), yj) subject to: ã i · xj + b̃i ≤ 0 ∀j ∈ P i− ãi · xj + b̃i ≥ 0 ∀j ∈ P i+ 9: count++ 10: end for 11: OPT = argminloss(count) 12: end for 13: return {ã}, {b̃}, s corresponding to OPT’s iterate 14: end function Let T1(x) = Ax + b and T2(y) = a′ · y for A ∈ Rw×n and b, a′ ∈ Rw. If we denote the i-th row of the matrix A by ai, and write bi, a′i to denote the i-th coordinates of the vectors b, a ′ respectively, due to homogeneity of ReLU gates, the network output can be represented as f(x) = w∑ i=1 a′i max{0, ai · x+ bi} = w∑ i=1 si max{0, ãi · x+ b̃i}. where ãi ∈ Rn, b̃i ∈ R and si ∈ {−1,+1} for all i = 1, . . . , w. For any hidden node i ∈ {1 . . . , w}, the pair (ãi, b̃i) induces a partition Pi := (P i+, P i−) on the dataset, given by P i− = {j : ãi · xj + b̃i ≤ 0} and P i+ = {1, . . . , D}\P i−. Algorithm 1 proceeds by generating all combinations of the partitions Pi as well as the top layer weights s ∈ {+1,−1}w, and minimizing the loss∑D j=1 ∑ i:j∈P i+ `(si(ã i · xj + b̃i), yj) subject to the constraints ãi · xj + b̃i ≤ 0 ∀j ∈ P i− and ãi · xj + b̃i ≥ 0 ∀j ∈ P i+ which are imposed for all i = 1, . . . , w, which is a convex program. Algorithm 1 implements the empirical risk minimization (ERM) rule for training ReLU DNN with one hidden layer. To the best of our knowledge there is no other known algorithm that solves the ERM problem to global optimality. We note that due to known hardness results exponential dependence on the input dimension is unavoidable Blum & Rivest (1992); Shalev-Shwartz & BenDavid (2014); Algorithm 1 runs in time polynomial in the number of data points. 
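For concreteness, the convex program solved in step 8 of Algorithm 1 can be sketched as follows for a single guess of the partitions (P_+^i, P_-^i) and the sign vector s. This is not the paper's implementation: it assumes the squared loss, uses scipy's SLSQP routine as a stand-in for a generic convex solver, omits the outer enumeration over partitions and sign vectors, and the names (e.g. inner_convex_step) are illustrative.

```python
# Sketch of the inner convex subproblem of Algorithm 1 for one fixed guess.
import numpy as np
from scipy.optimize import minimize

def inner_convex_step(X, y, P_plus, s):
    """X: (D, n) data, y: (D,) targets, P_plus: list of w active index sets, s: signs in {-1, +1}."""
    D, n = X.shape
    w = len(s)

    def unpack(theta):
        A = theta[: w * n].reshape(w, n)     # rows are the a~_i
        b = theta[w * n:]                    # the b~_i
        return A, b

    def objective(theta):                    # squared loss, summed as written in step 8
        A, b = unpack(theta)
        pre = X @ A.T + b                    # entry (j, i) = a~_i . x_j + b~_i
        loss = 0.0
        for i in range(w):
            idx = sorted(P_plus[i])
            loss += np.sum((s[i] * pre[idx, i] - y[idx]) ** 2)
        return loss

    cons = []                                # the guessed partition as linear constraints (C.2)
    for i in range(w):
        for j in range(D):
            sign = 1.0 if j in P_plus[i] else -1.0   # >= 0 on P_+^i, <= 0 on P_-^i
            cons.append({"type": "ineq",
                         "fun": lambda t, i=i, j=j, sign=sign:
                             sign * (unpack(t)[0][i] @ X[j] + unpack(t)[1][i])})

    theta0 = np.zeros(w * (n + 1))           # the origin satisfies every constraint
    res = minimize(objective, theta0, method="SLSQP", constraints=cons)
    return unpack(res.x), res.fun

# Toy usage: D = 4 points in R^2, w = 2 hidden units, one hand-picked guess.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 2.0])
P_plus = [{1, 3}, {2, 3}]
s = np.array([1.0, 1.0])
(A, b), loss = inner_convex_step(X, y, P_plus, s)
print("fitted loss:", round(loss, 4))
```

Wrapping this subroutine in loops over all hyperplane-induced partitions and all sign vectors, and keeping the guess with the smallest objective, gives the enumeration described in the proof of Theorem 4.1.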
To the best of our knowledge there is no hardness result known which rules out empirical risk minimization of deep nets in time polynomial in circuit size or data size. Thus our training result is a step towards resolving this gap in the complexity literature. A related result for improperly learning ReLUs has been recently obtained by Goel et al (Goel et al., 2016). In contrast, our algorithm returns a ReLU DNN from the class being learned. Another difference is that their result considers the notion of reliable learning as opposed to the empirical risk minimization objective considered in (4.1). 5 DISCUSSION The running time of the algorithm that we give in this work to find the exact global minima of a two layer ReLU-DNN is exponential in the input dimension n and the number of hidden nodes w. The exponential dependence on n can not be removed unless P = NP ; see Shalev-Shwartz & Ben-David (2014); Blum & Rivest (1992); DasGupta et al. (1995). However, we are not aware of any complexity results which would rule out the possibility of an algorithm which trains to global optimality in time that is polynomial in the data size and/or the number of hidden nodes, assuming that the input dimension is a fixed constant. Resolving this dependence on network size would be another step towards clarifying the theoretical complexity of training ReLU DNNs and is a good open question for future research, in our opinion. Perhaps an even better breakthrough would be to get optimal training algorithms for DNNs with two or more hidden layers and this seems like a substantially harder nut to crack. It would also be a significant breakthrough to get gap results between consecutive constant depths or between logarithmic and constant depths. ACKNOWLEDGMENTS We would like to thank Christian Tjandraatmadja for pointing out a subtle error in a previous version of the paper, which affected the complexity results for the number of linear regions in our constructions in Section 3.2. Anirbit would like to thank Ramprasad Saptharishi, Piyush Srivastava and Rohit Gurjar for extensive discussions on Boolean and arithmetic circuit complexity. This paper has been immensely influenced by the perspectives gained during those extremely helpful discussions. Amitabh Basu gratefully acknowledges support from the NSF grant CMMI1452820. Raman Arora was supported in part by NSF BIGDATA grant IIS-1546482. A EXPRESSING PIECEWISE LINEAR FUNCTIONS USING RELU DNNS Proof of Theorem 2.2. Any continuous piecewise linear function R→ R which hasm pieces can be specified by three pieces of information, (1) sL the slope of the left most piece, (2) the coordinates of the non-differentiable points specified by a (m − 1)−tuple {(ai, bi)}m−1i=1 (indexed from left to right) and (3) sR the slope of the rightmost piece. A tuple (sL, sR, (a1, b1), . . . , (am−1, bm−1) uniquely specifies a m piecewise linear function from R → R and vice versa. Given such a tuple, we construct a 2-layer DNN which computes the same piecewise linear function. One notes that for any a, r ∈ R, the function f(x) = { 0 x ≤ a r(x− a) x > a (A.1) is equal to sgn(r) max{|r|(x−a), 0}, which can be implemented by a 2-layer ReLU DNN with size 1. Similarly, any function of the form, g(x) = { t(x− a) x ≤ a 0 x > a (A.2) is equal to − sgn(t) max{−|t|(x − a), 0}, which can be implemented by a 2-layer ReLU DNN with size 1. 
The parameters r, t will be called the slopes of the function, and a will be called the breakpoint of the function. If we can write the given piecewise linear function as a sum of m functions of the form (A.1) and (A.2), then by Lemma D.2 we would be done. It turns out that such a decomposition of any p-piece PWL function h : R → R as a sum of p flaps can always be arranged, where the breakpoints of the p flaps are all contained in the p − 1 breakpoints of h. First, observe that adding a constant to a function does not change the complexity of the ReLU DNN expressing it, since this corresponds to a bias on the output node. Thus, we will assume that the value of h at the last breakpoint a_{m−1} is b_{m−1} = 0. We now use a single function f of the form (A.1) with slope r and breakpoint a = a_{m−1}, and m − 1 functions g_1, . . . , g_{m−1} of the form (A.2) with slopes t_1, . . . , t_{m−1} and breakpoints a_1, . . . , a_{m−1}, respectively. Thus, we wish to express h = f + g_1 + . . . + g_{m−1}. Such a decomposition of h is valid if we can find values for r, t_1, . . . , t_{m−1} such that (1) the slope of the above sum equals s_L for x < a_1, (2) the slope of the above sum equals s_R for x > a_{m−1}, and (3) for each i ∈ {1, 2, . . . , m − 1} we have b_i = f(a_i) + g_1(a_i) + . . . + g_{m−1}(a_i). The above corresponds to asking for the existence of a solution to the following set of simultaneous linear equations in r, t_1, . . . , t_{m−1}:

s_R = r,
s_L = t_1 + t_2 + . . . + t_{m−1},
b_i = Σ_{j=i+1}^{m−1} t_j (a_i − a_j)  for all i = 1, . . . , m − 2.

It is easy to verify that this set of simultaneous linear equations has a unique solution. Indeed, r must equal s_R, and then one can solve for t_1, . . . , t_{m−1} starting from the last equation b_{m−2} = t_{m−1}(a_{m−2} − a_{m−1}) and then back substituting to compute t_{m−2}, t_{m−3}, . . . , t_1. The lower bound of p − 1 on the size of any 2-layer ReLU DNN that expresses a p-piece function follows from Lemma D.6.

One can do better in terms of size when the rightmost piece of the given function is flat, i.e., s_R = 0. In this case r = 0, which means that f = 0; thus, the decomposition of h above is of size p − 1. A similar construction can be done when s_L = 0. This gives the following statement, which will be useful for constructing our forthcoming hard functions.

Corollary A.1. If the rightmost or leftmost piece of an R → R piecewise linear function has 0 slope, then we can compute such a p-piece function using a 2-layer DNN with size p − 1.

Proof of Theorem 2.3. Since any piecewise linear function R^n → R is representable by a ReLU DNN by Theorem 2.1, the proof simply follows from the fact that the family of continuous piecewise linear functions is dense in any L^p(R^n) space, for 1 ≤ p ≤ ∞.

B BENEFITS OF DEPTH

B.1 CONSTRUCTING A CONTINUUM OF HARD FUNCTIONS FOR R → R RELU DNNS AT EVERY DEPTH AND EVERY WIDTH

Lemma B.1. For any M > 0, p ∈ N, k ∈ N and a^1, . . . , a^k ∈ ∆^p_M, if we compose the functions h_{a^1}, h_{a^2}, . . . , h_{a^k}, the resulting function is piecewise linear with at most (p + 1)^k + 2 pieces, i.e., H_{a^1,...,a^k} := h_{a^k} ◦ h_{a^{k−1}} ◦ . . . ◦ h_{a^1} is piecewise linear with at most (p + 1)^k + 2 pieces, with (p + 1)^k of these pieces in the range [0, M] (see Figure 2). Moreover, in each piece in the range [0, M], the function is affine with minimum value 0 and maximum value M.

Proof. Simple induction on k.

Proof of Theorem 3.2. Given k ≥ 1 and w ≥ 2, choose any point (a^1, . . . , a^k) ∈ ⋃_{M>0} (∆^{w−1}_M × ∆^{w−1}_M × . . . × ∆^{w−1}_M) (k times). By Definition 8, each h_{a^i}, i = 1, . . .
, k is a piecewise linear function with w + 1 pieces and the leftmost piece having slope 0. Thus, by Corollary A.1, each hai , i = 1, . . . , k can be represented by a 2-layer ReLU DNN with size w. Using Lemma D.1, Ha1,...,ak can be represented by a k+ 1 layer DNN with size wk; in fact, each hidden layer has exactly w nodes. Proof of Theorem 3.1. Follows from Theorem 3.2 and Lemma D.6. Proof of Theorem 3.5. Given k ≥ 1 and w ≥ 2 define q := wk and sq := ha ◦ ha ◦ . . . ◦ ha︸ ︷︷ ︸ k times where a = ( 1w , 2 w , . . . , w−1 w ) ∈ ∆ q−1 1 . Thus, sq is representable by a ReLU DNN of width w+1 and depth k+ 1 by Lemma D.1. In what follows, we want to give a lower bound on the `1 distance of sq from any continuous p-piecewise linear comparator gp : R → R. The function sq contains b q2c triangles of width 2q and unit height. A p-piecewise linear function has p− 1 breakpoints in the interval [0, 1]. So that in at least bwk2 c− (p− 1) triangles, gp has to be affine. In the following we demonstrate that inside any triangle of sq , any affine function will incur an `1 error of at least 12wk .∫ 2i+2 wk x= 2i wk |sq(x)− gp(x)|dx = ∫ 2 wk x=0 ∣∣∣∣∣sq(x)− (y1 + (x− 0) · y2 − y12 wk − 0 ) ∣∣∣∣∣ dx = ∫ 1 wk x=0 ∣∣∣∣xwk − y1 − wkx2 (y2 − y1) ∣∣∣∣ dx+ ∫ 2wk x= 1 wk ∣∣∣∣2− xwk − y1 − wkx2 (y2 − y1) ∣∣∣∣ dx = 1 wk ∫ 1 z=0 ∣∣∣z − y1 − z 2 (y2 − y1) ∣∣∣ dz + 1 wk ∫ 2 z=1 ∣∣∣2− z − y1 − z 2 (y2 − y1) ∣∣∣ dz = 1 wk ( −3 + y1 + 2y21 2 + y1 − y2 + y2 + 2(−2 + y1)2 2− y1 + y2 ) The above integral attains its minimum of 1 2wk at y1 = y2 = 12 . Putting together, ‖swk − gp‖1 ≥ ( bw k 2 c − (p− 1) ) · 1 2wk ≥ w k − 1− 2(p− 1) 4wk = 1 4 − 2p− 1 4wk Thus, for any δ > 0, p ≤ w k − 4wkδ + 1 2 =⇒ 2p− 1 ≤ (1 4 − δ)4wk =⇒ 1 4 − 2p− 1 4wk ≥ δ =⇒ ‖swk − gp‖1 ≥ δ. The result now follows from Lemma D.6. B.2 A CONTINUUM OF HARD FUNCTIONS FOR Rn → R FOR n ≥ 2 Proof of Lemma 3.7. By Theorem 3.6 part 3., γZ(b1,...,bm)(r) = |〈r,b1〉| + . . . + |〈r,bm〉|. It suffices to observe |〈r,b1〉|+ . . .+ |〈r,bm〉| = max{〈r,b1〉,−〈r,b1〉}+ . . .+ max{〈r,bm〉,−〈r,bm〉}. Proof of Proposition 3.8. The fact that ZONOTOPEnk,w,m[a 1, . . . ,ak,b1, . . . ,bm] can be represented by a k + 2 layer ReLU DNN with size 2m + wk follows from Lemmas 3.7 and D.1. The number of pieces follows from the fact that γZ(b1,...,bm) has ∑n−1 i=0 ( m−1 i ) distinct linear pieces by parts 1. and 2. of Theorem 3.6, and Ha1,...,ak has wk pieces by Lemma B.1. Proof of Theorem 3.9. Follows from Proposition 3.8. C EXACT EMPIRICAL RISK MINIMIZATION Proof of Theorem 4.1. Let ` : R→ R be any convex loss function, and let (x1, y1), . . . , (xD, yD) ∈ Rn × R be the given D data points. As stated in (4.1), the problem requires us to find an affine transformation T1 : Rn → Rw and a linear transformation T2 : Rw → R, so as to minimize the empirical loss as stated in (4.1). Note that T1 is given by a matrix A ∈ Rw×n and a vector b ∈ Rw so that T (x) = Ax + b for all x ∈ Rn. Similarly, T2 can be represented by a vector a′ ∈ Rw such that T2(y) = a′ · y for all y ∈ Rw. If we denote the i-th row of the matrix A by ai, and write bi, a′i to denote the i-th coordinates of the vectors b, a′ respectively, we can write the function represented by this network as f(x) = w∑ i=1 a′i max{0, ai · x+ bi} = w∑ i=1 sgn(a′i) max{0, (|a′i|ai) · x+ |a′i|bi}. In other words, the family of functions over which we are searching is of the form f(x) = w∑ i=1 si max{0, ãi · x+ b̃i} (C.1) where ãi ∈ Rn, bi ∈ R and si ∈ {−1,+1} for all i = 1, . . . , w. We now make the following observation. 
For a given data point (xj , yj) if ãi · xj + b̃i ≤ 0, then the i-th term of (C.1) does not contribute to the loss function for this data point (xj , yj). Thus, for every data point (xj , yj), there exists a set Sj ⊆ {1, . . . , w} such that f(xj) = ∑ i∈Sj si(ã i · xj + b̃i). In particular, if we are given the set Sj for (xj , yj), then the expression on the right hand side of (C.1) reduces to a linear function of ãi, b̃i. For any fixed i ∈ {1, . . . , w}, these sets Sj induce a partition of the data set into two parts. In particular, we define P i+ := {j : i ∈ Sj} and P i− := {1, . . . , D} \ P i+. Observe now that this partition is also induced by the hyperplane given by ãi, b̃i: P i+ = {j : ãi · xj + b̃i > 0} and P i+ = {j : ãi · xj + b̃i ≤ 0}. Our strategy will be to guess the partitions P i+, P i− for each i = 1, . . . , w, and then do linear regression with the constraint that regression’s decision variables ãi, b̃i induce the guessed partition. More formally, the algorithm does the following. For each i = 1, . . . , w, the algorithm guesses a partition of the data set (xj , yj), j = 1, . . . , D by a hyperplane. Let us label the partitions as follows (P i+, P i −), i = 1, . . . , w. So, for each i = 1, . . . , w, P i + ∪ P i− = {1, . . . , D}, P i+ and P i− are disjoint, and there exists a vector c ∈ Rn and a real number δ such that P i− = {j : c · xj + δ ≤ 0} and P i+ = {j : c · xj + δ > 0}. Further, for each i = 1, . . . , w the algorithm selects a vector s in {+1,−1}w. For a fixed selection of partitions (P i+, P i −), i = 1, . . . , w and a vector s in {+1,−1}w, the algorithm solves the following convex optimization problem with decision variables ãi ∈ Rn, b̃i ∈ R for i = 1, . . . , w (thus, we have a total of (n + 1) · w decision variables). The feasible region of the optimization is given by the constraints ãi · xj + b̃i ≤ 0 ∀j ∈ P i− ãi · xj + b̃i ≥ 0 ∀j ∈ P i+ (C.2) which are imposed for all i = 1, . . . , w. Thus, we have a total of D · w constraints. Subject to these constraints we minimize the objective ∑D j=1 ∑ i:j∈P i+ `(si(ã i · xj + b̃i), yj). Assuming the loss function ` is a convex function in the first argument, the above objective is a convex function. Thus, we have to minize a convex objective subject to the linear inequality constraints from (C.2). We finally have to count how many possible partitions (P i+, P i −) and vectors s the algorithm has to search through. It is well-known Matousek (2002) that the total number of possible hyperplane partitions of a set of sizeD in Rn is at most 2 ( D n ) ≤ Dn whenever n ≥ 2. Thus with a guess for each i = 1, . . . , w, we have a total of at most Dnw partitions. There are 2w vectors s in {−1,+1}w. This gives us a total of 2wDnw guesses for the partitions (P i+, P i −) and vectors s. For each such guess, we have a convex optimization problem with (n + 1) · w decision variables and D · w constraints, which can be solved in time poly(D,n,w). Putting everything together, we have the running time claimed in the statement. The above argument holds only for n ≥ 2, since we used the inequality 2 ( D n ) ≤ Dn which only holds for n ≥ 2. For n = 1, a similar algorithm can be designed, but one which uses the characterization achieved in Theorem 2.2. Let ` : R → R be any convex loss function, and let (x1, y1), . . . , (xD, yD) ∈ R2 be the given D data points. Using Theorem 2.2, to solve problem (4.1) it suffices to find a R → R piecewise linear function f with w pieces that minimizes the total loss. 
In other words, the optimization problem (4.1) is equivalent to the problem min { D∑ i=1 `(f(xi), yi) : f is piecewise linear with w pieces } . (C.3) We now use the observation that fitting piecewise linear functions to minimize loss is just a step away from linear regression, which is a special case where the function is contrained to have exactly one affine linear piece. Our algorithm will first guess the optimal partition of the data points such that all points in the same class of the partition correspond to the same affine piece of f , and then do linear regression in each class of the partition. Altenatively, one can think of this as guessing the interval (xi, xi+1) of data points where the w − 1 breakpoints of the piecewise linear function will lie, and then doing linear regression between the breakpoints. More formally, we parametrize piecewise linear functions with w pieces by the w slope-intercept values (a1, b1), . . . , (a2, b2), . . . , (aw, bw) of the w different pieces. This means that between breakpoints j and j + 1, 1 ≤ j ≤ w − 2, the function is given by f(x) = aj+1x+ bj+1, and the first and last pieces are a1x+ b1 and awx+ bw, respectively. Define I to be the set of all (w − 1)-tuples (i1, . . . , iw−1) of natural numbers such that 1 ≤ i1 ≤ . . . ≤ iw−1 ≤ D. Given a fixed tuple I = (i1, . . . , iw−1) ∈ I, we wish to search through all piecewise linear functions whose breakpoints, in order, appear in the intervals (xi1 , xi1+1), (xi2 , xi2+1), . . . , (xiw−1 , xiw−1+1). Define also S = {−1, 1}w−1. Any S ∈ S will have the following interpretation: if Sj = 1 then aj ≤ aj+1, and if Sj = −1 then aj ≥ aj+1. Now for every I ∈ I and S ∈ S, requiring a piecewise linear function that respects the conditions imposed by I and S is easily seen to be equivalent to imposing the following linear inequalities on the parameters (a1, b1), . . . , (a2, b2), . . . , (aw, bw): Sj(bj+1 − bj − (aj − aj+1)xij ) ≥ 0 Sj(bj+1 − bj − (aj − aj+1)xij+1) ≤ 0 Sj(aj+1 − aj) ≥ 0 (C.4) Let the set of piecewise linear functions whose breakpoints satisfy the above be denoted by PWL1I,S for I ∈ I, S ∈ S. Given a particular I ∈ I, we define D1 := {xi : i ≤ i1}, Dj := {xi : ij−1 < i ≤ i1} j = 2, . . . , w − 1, Dw := {xi : i > iw−1} . Observe that min{ D∑ i=1 `(f(xi)−yi) : f ∈ PWL1I,S} = min{ w∑ j=1 ( ∑ i∈Dj `(aj ·xi+bj−yi) ) : (aj , bj) satisfy (C.4)} (C.5) The right hand side of the above equation is the problem of minimizing a convex objective subject to linear constraints. Now, to solve (C.3), we need to simply solve the problem (C.5) for all I ∈ I, S ∈ S and pick the minimum. Since |I| = ( D w ) = O(Dw) and |S| = 2w−1 we need to solveO(2w ·Dw) convex optimization problems, each taking time O(poly(D)). Therefore, the total running time is O((2D)wpoly(D)). D AUXILIARY LEMMAS Now we will collect some straightforward observations that will be used often. The following operations preserve the property of being representable by a ReLU DNN. Lemma D.1. [Function Composition] If f1 : Rd → Rm is represented by a d,m ReLU DNN with depth k1 + 1 and size s1, and f2 : Rm → Rn is represented by an m,n ReLU DNN with depth k2 + 1 and size s2, then f2 ◦ f1 can be represented by a d, n ReLU DNN with depth k1 + k2 + 1 and size s1 + s2. Proof. Follows from (1.1) and the fact that a composition of affine transformations is another affine transformation. Lemma D.2. 
[Function Addition] If $f_1 : \mathbb{R}^n \to \mathbb{R}^m$ is represented by an $n, m$ ReLU DNN with depth $k + 1$ and size $s_1$, and $f_2 : \mathbb{R}^n \to \mathbb{R}^m$ is represented by an $n, m$ ReLU DNN with depth $k + 1$ and size $s_2$, then $f_1 + f_2$ can be represented by an $n, m$ ReLU DNN with depth $k + 1$ and size $s_1 + s_2$.

Proof. We simply put the two ReLU DNNs in parallel and combine the appropriate coordinates of the outputs.

Lemma D.3. [Taking maximums/minimums] Let $f_1, \dots, f_m : \mathbb{R}^n \to \mathbb{R}$ be functions that can each be represented by $\mathbb{R}^n \to \mathbb{R}$ ReLU DNNs with depths $k_i + 1$ and sizes $s_i$, $i = 1, \dots, m$. Then the function $f : \mathbb{R}^n \to \mathbb{R}$ defined as $f(x) := \max\{f_1(x), \dots, f_m(x)\}$ can be represented by a ReLU DNN of depth at most $\max\{k_1, \dots, k_m\} + \lceil \log(m) \rceil + 1$ and size at most $s_1 + \dots + s_m + 4(2m - 1)$. Similarly, the function $g(x) := \min\{f_1(x), \dots, f_m(x)\}$ can be represented by a ReLU DNN of depth at most $\max\{k_1, \dots, k_m\} + \lceil \log(m) \rceil + 1$ and size at most $s_1 + \dots + s_m + 4(2m - 1)$.

Proof. We prove this by induction on $m$. The base case $m = 1$ is trivial. For $m \ge 2$, consider $g_1 := \max\{f_1, \dots, f_{\lfloor m/2 \rfloor}\}$ and $g_2 := \max\{f_{\lfloor m/2 \rfloor + 1}, \dots, f_m\}$. By the induction hypothesis (since $\lfloor m/2 \rfloor, \lceil m/2 \rceil < m$ when $m \ge 2$), $g_1$ and $g_2$ can be represented by ReLU DNNs of depths at most $\max\{k_1, \dots, k_{\lfloor m/2 \rfloor}\} + \lceil \log(\lfloor m/2 \rfloor) \rceil + 1$ and $\max\{k_{\lfloor m/2 \rfloor + 1}, \dots, k_m\} + \lceil \log(\lceil m/2 \rceil) \rceil + 1$, respectively, and sizes at most $s_1 + \dots + s_{\lfloor m/2 \rfloor} + 4(2\lfloor m/2 \rfloor - 1)$ and $s_{\lfloor m/2 \rfloor + 1} + \dots + s_m + 4(2\lceil m/2 \rceil - 1)$, respectively. Therefore, the function $G : \mathbb{R}^n \to \mathbb{R}^2$ given by $G(x) = (g_1(x), g_2(x))$ can be implemented by a ReLU DNN with depth at most $\max\{k_1, \dots, k_m\} + \lceil \log(\lceil m/2 \rceil) \rceil + 1$ and size at most $s_1 + \dots + s_m + 4(2m - 2)$. We now show how to represent the function $T : \mathbb{R}^2 \to \mathbb{R}$ defined as $T(x, y) = \max\{x, y\} = \frac{x + y}{2} + \frac{|x - y|}{2}$ by a 2-layer ReLU DNN of size 4 -- see Figure 3. The result now follows from the fact that $f = T \circ G$ and Lemma D.1.

Lemma D.4. Any affine transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is representable by a 2-layer ReLU DNN of size $2m$.

Proof. Simply use the fact that $T = (I \circ \sigma \circ T) + (-I \circ \sigma \circ (-T))$, and the right hand side can be represented by a 2-layer ReLU DNN of size $2m$ using Lemma D.2.

Lemma D.5. Let $f : \mathbb{R} \to \mathbb{R}$ be a function represented by an $\mathbb{R} \to \mathbb{R}$ ReLU DNN with depth $k + 1$ and widths $w_1, \dots, w_k$ of the $k$ hidden layers. Then $f$ is a PWL function with at most $2^{k-1} \cdot (w_1 + 1) \cdot w_2 \cdot \ldots \cdot w_k$ pieces.

Proof. We prove this by induction on $k$. The base case is $k = 1$, i.e., we have a 2-layer ReLU DNN. Since every activation node can produce at most one breakpoint in the piecewise linear function, we can get at most $w_1$ breakpoints, i.e., $w_1 + 1$ pieces. Now for the induction step, assume that for some $k \ge 1$, any $\mathbb{R} \to \mathbb{R}$ ReLU DNN with depth $k + 1$ and widths $w_1, \dots, w_k$ of the $k$ hidden layers produces at most $2^{k-1} \cdot (w_1 + 1) \cdot w_2 \cdot \ldots \cdot w_k$ pieces. Consider any $\mathbb{R} \to \mathbb{R}$ ReLU DNN with depth $k + 2$ and widths $w_1, \dots, w_{k+1}$ of the $k + 1$ hidden layers. Observe that the input to any node in the last hidden layer is the output of an $\mathbb{R} \to \mathbb{R}$ ReLU DNN with depth $k + 1$ and widths $w_1, \dots, w_k$. By the induction hypothesis, the input to this node is a piecewise linear function $f$ with at most $2^{k-1} \cdot (w_1 + 1) \cdot w_2 \cdot \ldots \cdot w_k$ pieces. When we apply the activation, the new function $g(x) = \max\{0, f(x)\}$, which is the output of this node, may have at most twice the number of pieces as $f$, because each original piece may be intersected by the $x$-axis; see Figure 4. Thus, after going through the last hidden layer, we take an affine combination of $w_{k+1}$ functions, each with at most $2 \cdot \big(2^{k-1} \cdot (w_1 + 1) \cdot w_2 \cdot \ldots \cdot w_k\big)$ pieces.
In all, we can therefore get at most $2 \cdot \big(2^{k-1} \cdot (w_1 + 1) \cdot w_2 \cdot \ldots \cdot w_k\big) \cdot w_{k+1}$ pieces, which is equal to $2^k \cdot (w_1 + 1) \cdot w_2 \cdot \ldots \cdot w_k \cdot w_{k+1}$, and the induction step is completed.

Lemma D.5 has the following consequence about the depth and size tradeoffs for expressing functions with a given number of pieces.

Lemma D.6. Let $f : \mathbb{R} \to \mathbb{R}$ be a piecewise linear function with $p$ pieces. If $f$ is represented by a ReLU DNN with depth $k + 1$, then it must have size at least $\frac{1}{2} k p^{1/k} - 1$. Conversely, any piecewise linear function $f$ that is represented by a ReLU DNN of depth $k + 1$ and size at most $s$ can have at most $\big(\frac{2s}{k}\big)^k$ pieces.

Proof. Let the widths of the $k$ hidden layers be $w_1, \dots, w_k$. By Lemma D.5, we must have
\[
2^{k-1} \cdot (w_1 + 1) \cdot w_2 \cdot \ldots \cdot w_k \ge p.
\tag{D.1}
\]
By the AM-GM inequality, minimizing the size $w_1 + w_2 + \dots + w_k$ subject to (D.1) means setting $w_1 + 1 = w_2 = \dots = w_k$. This implies that $w_1 + 1 = w_2 = \dots = w_k \ge \frac{1}{2} p^{1/k}$. The first statement follows. The second statement follows using the AM-GM inequality again, this time with a restriction on $w_1 + w_2 + \dots + w_k$.
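As a quick empirical companion to Lemma D.5, the sketch below samples a random $\mathbb{R} \to \mathbb{R}$ ReLU DNN, counts its linear pieces numerically, and compares the count against the bound $2^{k-1} \cdot (w_1 + 1) \cdot w_2 \cdot \ldots \cdot w_k$. The widths, random seed, sampling interval, grid resolution, and tolerance are arbitrary illustrative choices, and the grid-based count misses breakpoints outside the sampled interval or closer together than the grid spacing, so it only approximates the true number of pieces.

```python
# Numerical sanity check of the piece-counting bound in Lemma D.5 (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
widths = [3, 4, 2]                         # hidden widths w_1, w_2, w_3 (so k = 3)
dims = [1] + widths + [1]                  # an R -> R network
params = [(rng.standard_normal((dims[i + 1], dims[i])), rng.standard_normal(dims[i + 1]))
          for i in range(len(dims) - 1)]

def net(x):
    """Evaluate the random ReLU DNN on a 1-D array of inputs."""
    h = np.atleast_2d(x).T                 # shape (num_points, 1)
    for layer, (W, c) in enumerate(params):
        h = h @ W.T + c
        if layer < len(params) - 1:        # ReLU on hidden layers, linear output layer
            h = np.maximum(h, 0.0)
    return h.ravel()

xs = np.linspace(-20.0, 20.0, 400_001)
slopes = np.diff(net(xs)) / np.diff(xs)
# A breakpoint is a point where the local slope changes (up to numerical tolerance).
pieces = 1 + int(np.count_nonzero(np.abs(np.diff(slopes)) > 1e-6))

k = len(widths)
bound = 2 ** (k - 1) * (widths[0] + 1) * int(np.prod(widths[1:]))
print(f"empirical pieces on [-20, 20]: {pieces}; Lemma D.5 bound: {bound}")
```

For a generic random network the empirical count typically falls well below the bound, which is consistent with the bound being a worst-case guarantee rather than a typical-case estimate.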
1. What are the main contributions of the paper regarding ReLU networks?
2. What are the strengths of the paper, particularly in its analysis and characterization?
3. What are the limitations of the paper, especially regarding the final layer and global convergence algorithm?
4. How can the results of the paper be applied to networks with non-linear final layers?
5. Are there any simplifications or improvements that can be made to the global convergence algorithm?
6. How can the readability of the paper be improved, especially regarding the notation and terminology used?
Review
The paper presents an analysis and characterization of ReLU networks (with a linear final layer) via the set of functions these networks can model, focusing especially on the set of "hard" functions that are not easily representable by shallower networks. It makes several important contributions, including extending the previously published bounds by Telgarsky et al. to tighter bounds for the special case of ReLU DNNs, giving a construction for a family of hard functions whose affine pieces scale exponentially with the dimensionality of the inputs, and giving a procedure for searching for the globally optimal solution of a 1-hidden-layer ReLU DNN with a linear output layer and convex loss. I think these contributions warrant publishing the paper at ICLR 2018. The paper is also well written, a bit dense in places, but overall well organized and easy to follow.

A key limitation of the paper, in my opinion, is that DNNs typically do not contain a linear final layer. It would be valuable to note which, if any, of the representation analysis and global convergence results carry over to networks with a non-linear final layer (e.g., softmax). I also think that the global convergence algorithm is practically infeasible for all but trivial use cases due to terms like D^{nw}; I would like to hear the authors' comments in case I am missing some simplification.

One minor suggestion for improving readability is to explicitly state, whenever applicable, that the functions under consideration are PWL. For example, adding PWL to the Theorems and Corollaries in Section 3.1 would help. Similarly, it would be good to state, wherever applicable, that the DNN being discussed is a ReLU DNN.